
When Artificial Intelligence Creates Adverse Impacts Based Upon Gender

What the Apple Card and Goldman Sachs Gender Discrimination Allegations Mean for the Future of AI

When Apple launched its new credit card, it partnered with Goldman Sachs to provide the underwriting for applicants.  Shortly after the launch, couples began alleging credit-limit discrimination based upon gender, including David Heinemeier Hansson and Apple co-founder Steve Wozniak, who both reported receiving much higher credit limits for their Apple Cards than their wives.  Hansson claimed that his credit limit was 20 times the limit offered to his wife, despite the fact that her credit score was better than his.  Wozniak claimed that his credit limit was 10 times his wife's, even though they file joint tax returns and share bank accounts and credit cards.  Following these two reports, many other couples came forward alleging disparate treatment of women versus men.

Goldman Sachs responded that it is careful not to discriminate on the basis of gender and that it does not even know the gender of the applicants it processes.  It further indicated that it worked with a third party to review the decision process to guard against unintended bias and outcomes.  In a statement to Forbes, Goldman Sachs spokesperson Patrick Lenihan said there is no "black box": "For credit decisions we make, we can identify which factors from an individual's credit bureau issued credit report or stated income contribute to the outcome. We welcome a discussion of this topic with policymakers and regulators."  The reference to a "black box" describes the use of Artificial Intelligence and Machine Learning that is not explainable to humans.  If such a system were used in connection with the Apple Card, it might be very difficult for Goldman to respond to the recently announced investigation by the New York State Department of Financial Services.

As a lawyer representing clients on matters involving AI, I have taken a keen interest in this case.  Under New York law, it is an unlawful discriminatory practice:

To discriminate in the granting, withholding, extending or renewing, or in the fixing of the rates, terms or conditions of, any form of credit, on the basis of race, creed, color, national origin, sexual orientation, gender identity or expression, military status, age, sex, marital status, disability, or familial status; McKinney’s Executive Law § 296-a.

The law defines discrimination as including separation and segregation.  So how could a system that does not know the gender of an applicant discriminate based upon gender?  Some have speculated that the Goldman Sachs underwriting process uses an AI algorithm that has "learned" to treat other data points as a proxy for gender.  For example, an AI solution could learn that applicants named Karen are more likely to be women than applicants named Robert, or that applicants with open credit accounts at Victoria's Secret are more likely to be women than applicants with open credit accounts at Men's Wearhouse.
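To make the proxy mechanism concrete, here is a minimal sketch using synthetic data.  Nothing here reflects Goldman's actual model; the feature names, data, and weights are invented solely to show how a model trained without any gender field can still reproduce a gender disparity through a correlated proxy.

```python
# Hypothetical illustration: a model trained WITHOUT a gender column can
# still pick gender up from a correlated proxy feature. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Latent attribute the modeler never sees (1 = female, 0 = male).
gender = rng.integers(0, 2, n)

# Proxy feature: e.g., an open account at a retailer with a gender-skewed
# customer base. Correlated with gender, not with creditworthiness.
proxy_account = (rng.random(n) < np.where(gender == 1, 0.7, 0.1)).astype(float)

# A legitimate credit feature, independent of gender (standardized units).
credit_score = rng.normal(0.0, 1.0, n)

# Biased historical labels: past decisions gave lower limits to women at
# the same score, so the training data itself carries the disparity.
high_limit = (credit_score - 0.8 * gender + rng.normal(0.0, 0.5, n)) > 0

# Train on features that exclude gender entirely.
X = np.column_stack([credit_score, proxy_account])
model = LogisticRegression().fit(X, high_limit)

print(f"coef for credit_score:  {model.coef_[0][0]:+.2f}")
print(f"coef for proxy_account: {model.coef_[0][1]:+.2f}")
# The proxy coefficient comes out strongly negative: the model has, in
# effect, reconstructed gender from the proxy and reproduced the old bias.
```

Because the historical labels already encode the disparity, the model learns to penalize the proxy even though no gender field exists anywhere in the training data.  That is precisely the scenario the speculation above describes.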

Goldman Sachs has rejected the notion that an AI solution learned gender bias.  It believes the likely cause of the credit limit differences is differences in credit history between husband and wife, which arose because the wives' previously issued credit cards were set up as supplemental accounts under their husbands' primary cards.

If true, Goldman should be able to establish compliance with the law, which makes clear that:

It shall not be considered discriminatory if credit differentiations or decisions are based upon factually supportable, objective differences in applicants’ overall credit worthiness, which may include reference to such factors as current income, assets and prior credit history of such applicants, as well as reference to any other relevant factually supportable data; McKinney’s Executive Law § 296-a(3).

Accordingly, if Goldman can objectively show that the credit limit differences between a husband and wife were based upon differences in credit history, it should be able to satisfy the regulator's inquiry. However, to the extent that Goldman's underwriting process relied upon "aggregate statistics or assumptions relating to … gender identity or expression, sex, or marital status," Goldman would have a problem under § 296-a(3).

This case demonstrates the regulatory and reputational risk associated with the use of artificial intelligence, and it confirms the need for explainability whenever AI involves regulated data or makes business decisions that are subject to regulatory oversight or create the potential for civil liability.  I am not surprised that the Goldman spokesperson was quick to dispel any rumors of the use of black-box AI.  Imagine how this case might go if Goldman could not clearly articulate exactly how its tools evaluated the credit applications.  Stated differently, if Goldman were not able to demonstrate the factually supportable, objective differences between a male applicant on the one hand and a female applicant on the other, it would be very difficult for it to defend itself in the investigation by the Department of Financial Services.
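What would that explainability look like in practice?  For a transparent linear scoring model, the contribution of each factor to an individual decision can be read directly off the model.  The following sketch is hypothetical; the feature names, weights, and baseline values are invented for illustration and are not drawn from any actual underwriting system.

```python
# A minimal, hypothetical sketch of the per-applicant explanation that a
# transparent linear scoring model permits. Feature names, weights, and
# baseline values are invented for illustration only.
import numpy as np

features = ["credit_score", "stated_income", "utilization", "account_age"]
coef     = np.array([0.9, 0.4, -0.7, 0.3])          # hypothetical weights
baseline = np.array([690.0, 65_000.0, 0.35, 8.0])   # e.g., portfolio means
scale    = np.array([50.0, 25_000.0, 0.15, 5.0])    # normalization units

applicant = np.array([740.0, 90_000.0, 0.10, 12.0])

# For a linear model, each factor's contribution to this applicant's score
# relative to the baseline is simply weight * (normalized difference).
contrib = coef * (applicant - baseline) / scale

# Rank the factors by how strongly they moved this particular decision.
for name, c in sorted(zip(features, contrib), key=lambda t: -abs(t[1])):
    print(f"{name:>14}: {c:+.2f}")
```

A record like this, kept for each decision, is arguably the kind of factually supportable, objective evidence that § 296-a(3) contemplates; a black-box model offers no equivalent.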

The lesson here is that the use of AI for decision making carries a great deal of legal risk, both in the form of regulatory compliance and civil liability.  As companies increase their reliance upon AI for decision making, disparate impacts that result from the use of AI tools will likely be questioned in court.  Companies should pay careful attention to the regulatory implications of AI adoption, and requirements for explainability and legal compliance should be drafted into AI services contracts.  While the allegations against Apple and Goldman Sachs may prove to be unfounded, this case raises important questions that will become more and more prevalent as the digital transformation proceeds.

About the author:  Mr. Scott is the managing partner of Scott & Scott, LLP, a technology law firm focusing on Artificial Intelligence.  He can be reached at rjscott@scottandscottllp.com.