The CFPB’s Warnings About AI Bias Could Frighten Lenders

As director of the Consumer Financial Protection Bureau, Rohit Chopra has used practically every public opportunity to warn corporations about the possible misuse of artificial intelligence in loan decisions.

According to Chopra, algorithms can never be “bias-free” and may result in credit decisions that are unfair to individuals. He contends that machine learning can be anti-competitive, leading to “digital redlining” and “robo discrimination”.


The implication for banks and fast-moving fintechs is clear: AI-related enforcement actions are on the way, as is prospective guidance on when alternative data, such as utility and rent payments, becomes hazardous when used in marketing, pricing, and underwriting products.

“The focus on artificial intelligence and machine learning is explicit. Because of the apparent usefulness of AI, corporations have sometimes employed this technology in ways that have gotten ahead of legal and compliance folks.”

Stephen Hayes, Partner at Relman Colfax PLLC & Former CFPB Senior Counsel

Chopra tweeted earlier this month, “The @CFPB will be taking a deeper look at how lenders use artificial intelligence or algorithmic decision tools.” He was alluding to a case in which a lender violated fair-lending standards by improperly questioning churches that sought small-business loans; the danger is that religious bias could be encoded into AI tools.

He also made a point of mentioning AI at a joint press conference with the Justice Department last year in Jackson, Mississippi, about the federal government’s redlining settlement with $17 billion-asset Trustmark National Bank.

“When consumers and regulators do not know how decisions are made by the algorithms, consumers are unable to participate in a fair and competitive market free from bias.”

Rohit Chopra, Director of CFPB

Other CFPB officials have also spoken out on AI.

Lorelei Salas, the CFPB’s associate director for supervisory policy, cautioned in a blog post earlier this month that the bureau intends “to more closely monitor the use of algorithmic decision tools, given that they are frequently ‘black boxes’ with little transparency.”

In addition, the CFPB’s new enforcement chief, Eric Halperin, suggested in a speech to the Consumer Federation of America in December that some uses of AI may represent “unfair, deceptive, and abusive acts and practices,” or UDAAP violations.

Nonetheless, the rhetoric from Chopra and his subordinates marks a sharp break from the CFPB’s earlier posture. Chopra, a Biden nominee, was sworn in in October 2021. During the Trump administration, the CFPB sought to stimulate innovation and reduce regulatory uncertainty in enforcing the Equal Credit Opportunity Act, which bars discrimination in credit and lending decisions.

CFPB officials advised lenders to “think imaginatively about how to take advantage of AI’s potential benefits” in a 2020 blog post. The CFPB also provided examples of how it would allow “flexibility” in how lenders may utilize AI models.

While there is some evidence that “alternative data” – information not typically found in credit files or submitted by customers on credit applications – can help lenders make sound underwriting and loan-pricing decisions, there is also evidence that it can introduce bias.

Even as banks and fintechs push ahead with AI and machine-learning projects that they believe will increase credit availability to more individuals, many anticipate a crackdown.

“The CFPB will be digging hard on documentation of AI models. There is an effort to warn the industry – and particularly the fintech side of the industry – not to get too creative with the data they ingest and use in making credit decisions.”

Christopher Willis, Partner at Ballard Spahr

Critics see hazards in lenders’ use of AI, both in the quality of the alternative data it relies on and in how it may be used to weed out certain potential applicants or screen applicants for specific products.

Proponents of AI and machine learning believe that widespread adoption is inevitable because computers can take over tasks from humans and learn from the programs and data supplied to them. Fintechs and other AI supporters believe that, with more data, computers will eventually make better, more accurate credit underwriting decisions.

“The bureau is right to focus on the risks from AI and machine learning, but they should also focus on the potential benefits.”

Eric Mogilnicki, Partner at Covington & Burling LLP

Chopra’s interest in AI coincides with a broader effort by federal regulators to determine whether the technology is being used safely and soundly, and whether financial institutions are complying with consumer protection rules. The Federal Reserve, Federal Deposit Insurance Corp., Office of the Comptroller of the Currency, National Credit Union Administration, and the CFPB collaborated on a joint request for information on the subject.

Many consumer advocates applaud the increased availability of credit to unbanked or underbanked individuals. However, there are fears that borrowers could be subjected to targeted marketing campaigns based on AI models.

Many anticipate that the CFPB will ask companies to provide a fair-lending analysis of their AI models to ensure that the models are transparent and explainable.

“Any use of newer algorithm technologies like machine learning needs to be thoroughly documented in terms of its business purpose, accuracy and transparency, and it needs to be subjected to a rigorous fair-lending analysis.”

Christopher Willis
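A fair-lending analysis of the kind Willis describes often begins with simple disparity metrics computed on model outcomes. As a rough illustration only – the metric choice, the sample numbers, and the 0.8 threshold below are assumptions drawn from common compliance practice, not anything the CFPB has prescribed – one widely used measure is the adverse impact ratio, which compares approval rates between a protected group and a comparison group:

```python
# Sketch of an adverse impact ratio (AIR) check, a common first-pass
# disparity metric in fair-lending reviews. The figures and the 0.8
# ("four-fifths") threshold are illustrative assumptions, not CFPB rules.

def adverse_impact_ratio(approvals_a, total_a, approvals_b, total_b):
    """Ratio of group A's approval rate to group B's (comparison group)."""
    rate_a = approvals_a / total_a
    rate_b = approvals_b / total_b
    return rate_a / rate_b

# Hypothetical outcomes from a credit model:
air = adverse_impact_ratio(approvals_a=300, total_a=500,   # 60% approved
                           approvals_b=400, total_b=500)   # 80% approved

print(f"AIR = {air:.2f}")  # 0.75
print("flag for further review" if air < 0.8 else "within guideline")
```

A ratio below the four-fifths guideline does not itself prove discrimination; in practice it triggers the deeper documentation and business-justification review that Willis describes.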

Some analysts believe that financial institutions should also examine how traditional data is used.

“Part of [Chopra’s] point is that even more traditional data can be used and manipulated in ways that perpetuate existing discrimination,” Hayes said. “This is a shot across the bow that the CFPB wants institutions to take these issues seriously, provide some transparency about how models are being used, and guard against discrimination risks.”


Information from American Banker
