Over the weekend, a viral Twitter thread exposed several issues in the credit lending decisioning process for Apple’s new payment card, underwritten by Goldman Sachs.
For some context, Apple and Goldman Sachs were accused of gender discrimination in credit card limits, allegedly caused by biased algorithms powering Apple Card’s credit lending decisioning process. Reports corroborating the discrimination spread widely on social media, including one from Apple’s very own co-founder, Steve Wozniak, and his spouse. The primary issue here is the black-box algorithm that generated Apple Card’s credit lending decisions. As laid out in the Twitter thread, Apple Card’s customer service reps were rendered powerless before the algorithm’s decisions. Not only did they have no insight into why certain decisions were made, they were also unable to override them.
The Apple Card & Goldman Sachs
We are entering an era where our lives are being dictated by algorithms. While we generally feel that machine intelligence should be embraced because algorithms are presumed objective, the reality is that machine intelligence can skew our decision making if not properly checked.
Last year, Apple and Goldman Sachs announced a rather unique partnership to transform the digital currency landscape. Apple, a pioneer in consumer tech and services, coupled with Goldman Sachs, a behemoth in investment banking, saw mutual benefit in driving new consumer offerings in the banking sector. Apple brought its understanding of mobile computing and consumer data, while Goldman Sachs brought expertise in underwriting and risk modeling, along with the experience of launching Marcus, its digital bank that seeks to give millennials a new way of banking.
In this new card experience focused on “simplicity, transparency and privacy,” users get an immediate response to their application, along with their credit limit.
Acquiring Customers Using Intelligent Automation
Balancing growth against credit losses is a difficult proposition for banks: lenders want to acquire new customers but must manage the risk of operational loss. Banks are searching for an intelligent system that learns from input data, uses best-in-class AI/ML (Artificial Intelligence and Machine Learning) to select the right model, or series of models, and data sources, and drives a credit review that manages risk and reward for the bank over time while improving the lifetime value of its creditors.
Apple and Goldman Sachs are driving innovation in credit application review and approval to use the right data from the right sources, lower their approval costs, drive better upfront approvals, limit the risk of default, and deliver a better lifetime value for each new creditor.
The lending decisioning process most likely uses several well-known underwriting approaches, whose key limitation is the lack of significant real-time data to drive decisioning. Across the credit card industry, there are significant efforts to transform present credit origination systems to incorporate a variety of external data sources. This has created a substantial data store that issuers will grow, leverage to design advanced analytics for insights, and use to drive growth while reducing losses in the growth portfolio. As a result, partnerships like Apple and Goldman Sachs’ make sense, as the data and analytics stored are expected to keep growing significantly.
As they designed a new approach to credit approval, Goldman Sachs knew that present systems rely on a largely business-rules-driven credit check with the bureaus, which is very costly. Apple and Goldman Sachs most likely collaborated on AI/ML models trained on bureau and non-traditional data, shifting the conventional informed review toward fully autonomous decisioning. Data comes from the three bureaus (Equifax, TransUnion and Experian), each with different strengths and weaknesses, and much of it overlapping. Non-traditional data comes from commercial sources such as LexisNexis and Dun & Bradstreet, as well as publicly available data sets like court records and voter registration lists that are aggregated by third-party providers.
Regulation, Interpretability & Bias
Goldman Sachs is now being investigated by New York State regulators for gender bias. The FDIC’s Risk Management Examination Manual for Credit Card Activities describes automated underwriting as follows:
Automated underwriting and loan approval processes are increasingly popular and vary greatly in complexity. In an automated system, credit is generally granted based on the cut-off score and the desired loss rate. These systems are often based on statistical models and apply automated decision-making where possible. Banks sometimes establish auto-decline or auto-approve ranges where the system either automatically approves or declines the applicant based on established criteria, such as scores. The automated systems may also incorporate criteria other than scores (such as rules or overlays) into the credit decision. For example, the presence of certain credit bureau attributes (such as bankruptcy) outside of the credit score itself could be a contributing factor in the decision-making process. Examiners should gauge management’s practices for validating the scoring system and other set parameters within automated systems as well as for verifying the accurateness of data entry for those systems.
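The auto-approve/auto-decline flow the manual describes can be sketched as a simple decision function. The cut-off scores and the bankruptcy overlay below are illustrative assumptions, not any bank's actual criteria:

```python
# Minimal sketch of an automated decisioning flow with score cut-offs
# and an overlay rule, as described in the FDIC excerpt above.
# All thresholds and field names are hypothetical.

def decide(application: dict) -> str:
    """Return 'approve', 'decline', or 'manual_review' for an application."""
    AUTO_APPROVE_SCORE = 720   # assumed auto-approve cut-off
    AUTO_DECLINE_SCORE = 600   # assumed auto-decline cut-off

    # Overlay: certain bureau attributes outside the score itself
    # (e.g. a bankruptcy on file) can drive the decision directly.
    if application.get("bankruptcy_on_file"):
        return "decline"

    score = application["credit_score"]
    if score >= AUTO_APPROVE_SCORE:
        return "approve"
    if score < AUTO_DECLINE_SCORE:
        return "decline"
    # Scores between the two cut-offs fall outside both ranges
    # and are routed to a human underwriter.
    return "manual_review"


print(decide({"credit_score": 740, "bankruptcy_on_file": False}))  # approve
print(decide({"credit_score": 650, "bankruptcy_on_file": False}))  # manual_review
print(decide({"credit_score": 740, "bankruptcy_on_file": True}))   # decline
```

Note that the overlay fires before the score is even consulted, which is exactly the kind of opaque interaction that makes individual decisions hard for a customer service rep to explain.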
Cost is not the only driver: building trust through a disciplined framework for model risk management that addresses issues such as bias and explainability is critical. Eliminating bias and ensuring explainability are essential to trusting machine intelligence while remaining compliant with regulation such as the FCRA (Fair Credit Reporting Act). Goldman Sachs knows that it is critical to get the approval process right; once a bad actor is approved, it is incredibly difficult to remedy that active relationship. At scale, Goldman Sachs needs to be able to develop, train, validate and deploy tens or hundreds of these AI/ML models into production. With data growing exponentially, managing both the model lifecycle and the model risk is key.
Trust in a model ultimately rests on two properties:
- Its ability to generalize to an unseen dataset (predictive power);
- Our ability to understand why it generalizes (interpretability).
Here, we focus on local interpretability for a new creditor. We have identified two distinct interpretations of interpretability in the scientific literature that can be posed as answers to two different questions for a particular prediction:
- Perturbation Interpretation: which bureau or non-traditional features change the prediction the most when changed the least?
- Holistic Interpretation: which bureau or non-traditional features caused the prediction?
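The perturbation interpretation above can be sketched with a finite-difference sensitivity check: nudge each feature by a small amount and see which one moves the prediction most. The linear scoring function and feature names below are illustrative assumptions standing in for a real lending model:

```python
# Toy sketch of perturbation-based local interpretation.
# The model and its weights are hypothetical placeholders.

def model(features: dict) -> float:
    """Hypothetical linear scoring function."""
    weights = {"utilization": -2.0, "income": 0.5, "delinquencies": -1.5}
    return sum(weights[k] * v for k, v in features.items())

def sensitivities(features: dict, eps: float = 0.01) -> dict:
    """Finite-difference sensitivity of the score to each feature."""
    base = model(features)
    out = {}
    for name in features:
        bumped = dict(features)
        bumped[name] += eps           # perturb one feature slightly
        out[name] = (model(bumped) - base) / eps
    return out

applicant = {"utilization": 0.8, "income": 1.2, "delinquencies": 0.0}
sens = sensitivities(applicant)

# The feature with the largest |sensitivity| changes the prediction
# the most when changed the least.
top = max(sens, key=lambda k: abs(sens[k]))
print(top)  # utilization
```

Production-grade versions of this idea (e.g. permutation importance or LIME-style local surrogates) apply the same principle to models far less transparent than a linear score.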