Will the Worries of Regulators Put a Stop to Lenders’ Use of AI?

There are two ways to interpret this. Banks and fintechs might conclude that deploying more sophisticated decision-making models is not worth the regulatory risk. Alternatively, they might read the cautionary statements as evidence that regulators, recognizing that the use of AI in lending is inevitable, are establishing specific guidelines for what is and is not acceptable. Early signs point to the latter, according to industry observers.

Regulators' Concerns

“Companies are not absolved of their legal responsibilities when they let a black-box model make lending decisions. The law gives every applicant the right to a specific explanation if their application for credit was denied, and that right is not diminished simply because a company uses a complex algorithm that it doesn’t understand.”

CFPB Director Rohit Chopra, News Release on May 26th

The CFPB emphasized that lenders, regardless of the technology they use, must comply with all federal consumer financial protection laws, including the Equal Credit Opportunity Act, and that they “cannot justify non-compliance with ECOA based on the mere fact that the technology they use to evaluate credit applications is too complicated, too opaque in its decision-making, or too new.” ECOA requires creditors to give applicants notice, and that notice must include specific and accurate reasons for the adverse action.

Chi Chi Wu, a staff attorney at the National Consumer Law Center, pointed out that the Equal Credit Opportunity Act, Regulation B, which implements it, and the requirement for adverse action notices explaining why consumers are denied credit have all existed since the 1970s.

“What is new is that there’s this technology that makes it a lot harder to provide the reasons why an adverse action was taken. That’s artificial intelligence and machine learning. The legacy systems for credit are built so that they can produce these reason codes that could be translated into reasons given why a credit score is the way that it is, and then that can be used as part of the adverse action notice.”

Wu
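The reason-code mechanism Wu describes can be sketched in a few lines of Python. Everything below is hypothetical for illustration: the feature names, weights, baselines, and reason text are not drawn from any real scoring model.

```python
# Hypothetical scorecard-style model: each feature has a weight
# (positive helps approval) and a baseline value for comparison.
WEIGHTS = {
    "credit_utilization": -2.0,
    "years_of_history": 0.5,
    "recent_delinquencies": -3.0,
    "income_to_debt": 1.5,
}

BASELINES = {
    "credit_utilization": 0.3,
    "years_of_history": 8.0,
    "recent_delinquencies": 0.0,
    "income_to_debt": 2.5,
}

# Human-readable adverse-action reason text per feature (illustrative).
REASON_TEXT = {
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "years_of_history": "Length of credit history is too short",
    "recent_delinquencies": "Recent delinquency on one or more accounts",
    "income_to_debt": "Income is insufficient relative to obligations",
}


def decline_reasons(applicant: dict, top_n: int = 2) -> list[str]:
    """Rank features by how much they pulled the score below baseline,
    and translate the worst offenders into adverse-action reasons."""
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - BASELINES[f]) for f in WEIGHTS
    }
    # The most negative contributions become the principal reasons.
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return [REASON_TEXT[f] for f in worst if contributions[f] < 0]


applicant = {
    "credit_utilization": 0.9,
    "years_of_history": 2.0,
    "recent_delinquencies": 2.0,
    "income_to_debt": 2.5,
}
print(decline_reasons(applicant))
```

With a model that uses thousands of interacting variables rather than a handful of additive weights, per-feature contributions like these are much harder to compute and defend, which is the gap Wu goes on to describe.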

According to Wu, AI software is more difficult to explain.

“It’s a lot harder when you let the machine go and make these decisions and use thousands or tens of thousands of variables.”

Wu

Though it may be harder, online lenders say it is still possible.

Debtosh Banerjee says his company, Avant, a Chicago-based online lender that has used machine learning in lending since 2014, has always complied with ECOA and other consumer protection rules.

Banerjee, senior vice president and head of card and banking at Avant and a veteran of U.S. Bank and HSBC, says, “one of the biggest problems we had was, how do we explain why we declined someone, because we still had to comply with all the rules.”

The company developed algorithms that explain to applicants why credit was declined, and it had to justify those models to regulators.

“The fundamental rules are the same as they were 20 years back; nothing has changed. We are highly regulated. Customers come to us and we have to give them reasons why they are declined. That’s business as usual.” 

Banerjee

Other lenders that use AI, along with AI-based lending software vendors such as Zest and Upstart, say their models have explainability built in from the start. They assert that these programs produce reports that explain loan decisions more thoroughly than conventional models do, and that their software has safeguards and checks for fair lending and disparate impact built right in.
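The fair-lending checks these vendors describe are proprietary, but one common baseline test, an adverse impact ratio modeled on the four-fifths rule, is simple to sketch. The group labels and approval counts below are hypothetical:

```python
# Illustrative sketch of a basic disparate-impact screen: compare the
# approval rate a candidate model gives one group against a reference
# group. Ratios below 0.8 (the "four-fifths rule" threshold) are
# commonly treated as a flag for further fair-lending review.

def adverse_impact_ratio(approved_a, total_a, approved_b, total_b):
    """Ratio of group A's approval rate to reference group B's rate."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    return rate_a / rate_b


# Hypothetical outcomes from a candidate lending model.
ratio = adverse_impact_ratio(approved_a=120, total_a=400,   # 30% approved
                             approved_b=200, total_b=500)   # 40% approved
print(round(ratio, 2))   # 0.75
print(ratio >= 0.8)      # False: this model would be flagged for review
```

A real disparate-impact analysis is far more involved than a single ratio, which is one reason observers like Surkov remain skeptical of blanket claims.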

Alexey Surkov, the Deloitte partner who leads the firm's model risk management group, views such assertions with skepticism.

“Some of the larger institutions have teams of developers building and testing and deploying models of this kind all day long.”

Surkov

He said these institutions do use model risk management controls, as third-party vendors do, to cover documentation, explainability, monitoring, and other safeguards, but the controls aren't always implemented to catch every issue.

“I would stop short of saying that they are all perfect on those scores. Sometimes there is a little bit of a gap between the marketing and the reality that we see when we go and actually test models and open up the hood and see just how transparent the model is and how well monitored it is.”

Surkov

He declined to give concrete examples. A model might check off a number of the boxes from a documentation and control standpoint, he added, but still need more work to address issues like explainability.

“This is not a new concept. It’s not a new requirement. But it is certainly not a fully solved issue, either.”

Surkov

Are Regulators Becoming More Comfortable with Banks' Use of AI?

According to Surkov, the regulators' cautions on the use of AI in lending are an acknowledgment that regulated banks are already using these techniques.

“Historically the regulators have not been very supportive of the use of AI or machine learning or any sort of a black-box technology for anything. The positive thing here is that they are getting more and more comfortable and are basically saying to the banks, listen, we know that you’re going to be using these more advanced models. So let’s make sure that as we enter this new era, that we’re doing it thoughtfully from a governance perspective and a risk perspective and that we are thinking about all of the risks.”

Surkov

“They’re meant to enable the use of this technology, like the seat belts and airbags and antilock brakes that will enable us to go much faster on this new highway. Without those technologies, we’d be going 15 miles an hour, like we did a hundred years ago. So having regulations that clearly delineate what is okay, what is not okay and what institutions should have if they are to use these new technologies will enable the use of these technologies as opposed to reducing and shutting them down.” 

Surkov

“Banks have prudential regulators as well as the CFPB as their regulator. Also, that’s just their culture. They don’t move quickly when it comes down to these issues. We hear complaints that banks that are credit card lenders haven’t even moved to FICO 9, they’re still on FICO 8, so having them go from that to alternative data to AI algorithms, those are big leaps.”

Wu

According to Wu, banks continue to be more cautious about employing AI in lending than fintechs.

Banks are coming to Deloitte more often with questions about AI governance, especially explainability. To help companies with this, the firm has developed what it calls a trustworthy AI framework.

Fintechs are quick to say they have all the explainability they need and move on. Wu urged caution.

“The promise of AI is that it will be able to better judge whether people are good borrowers or not and be much more accurate than credit scores that are really blunt. That’s the promise, but we’re not going to get there without intentionality and strong focus on ensuring fairness and not what you’d call woke washing.”

Wu

Information from American Banker
