AI Introduces a New Risk Model for Banks

In the age of machine learning, organizations depend on algorithms that absorb, evaluate, and ultimately render a verdict on enormous volumes of data. That makes it hard to grasp, let alone describe, what is going on behind the scenes and why the result is what it is. 

According to Featurespace Product Manager Richard Graham, model explainability is crucial in financial services, as well as in any other vertical that relies on modern technology to make daily decisions that touch people’s lives. Model explainability is the ability to comprehend what is happening as inputs are transformed into outputs. 

Many artificial intelligence (AI) solutions make a choice, but they rarely explain why, according to Graham. 

Consider applying for a job. You submit an online CV with details that paint a picture of your career progression and objectives. Then comes an email from HR stating flatly that the application will not be considered. And that is all there is to it; there is no explanation of how the outcome was reached.  

“Model explainability takes inputs, processes the data, and then is able to give outputs about how it came to the conclusion. This is the useful and valuable information businesses can then use to justify the accuracy of the technology’s decision making.” 

Richard Graham, Featurespace Product Manager

This level of openness – understanding the “why” as well as the “what” – is crucial in financial services, where banks and other financial institutions (FIs) collect billions of data points from millions of consumers and connect them to other transactions. 
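As a rough illustration of what that looks like in practice, the sketch below attaches each input’s contribution to a toy risk score, so the output carries the “why” along with the “what.” The feature names, weights and threshold are hypothetical stand-ins, not Featurespace’s actual model.

```python
# Illustrative sketch only: a toy risk score whose per-feature contributions
# double as an explanation. Feature names, weights and the threshold are
# hypothetical, not taken from any real fraud model.
from dataclasses import dataclass

@dataclass
class Decision:
    score: float                # overall risk score
    flagged: bool               # the "what": was the transaction flagged?
    reasons: dict[str, float]   # the "why": contribution of each input

# Hypothetical weights a fraud model might assign to card-transaction features.
WEIGHTS = {
    "amount_vs_usual": 0.6,   # how far spend sits above the customer's norm
    "new_device": 1.2,        # first time this device is seen
    "new_location": 1.5,      # first time this location is seen
    "night_time": 0.3,        # transaction outside the customer's usual hours
}
THRESHOLD = 2.0

def score_transaction(features: dict[str, float]) -> Decision:
    # Record each feature's weighted contribution, so the output carries
    # not just a verdict but the evidence behind it.
    reasons = {name: WEIGHTS[name] * value for name, value in features.items()}
    total = sum(reasons.values())
    return Decision(score=total, flagged=total >= THRESHOLD, reasons=reasons)

decision = score_transaction(
    {"amount_vs_usual": 2.5, "new_device": 1.0, "new_location": 1.0, "night_time": 0.0}
)
print(decision.flagged)   # True
print(decision.reasons)   # new_location and amount_vs_usual dominate the score
```

On this made-up transaction, the unfamiliar location and the unusually large amount contribute most of the score, which is exactly the kind of reasoning an analyst or a customer can then be shown.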

A Reliable Safety Net 

With attempts at fraud and money laundering on the rise, this is more vital than ever. Graham believes that legacy technology is fine for spewing out explainable rules, but that in the midst of the great digital shift, every financial institution is trying to bring more machine learning and behavioral analytics to bear for its customers.  

“Some of the barriers that I’ve seen for FIs as they are adopting new technology are tied to trust. Can you trust these new models and algorithms to derive meaningful insights from the significant information coming in – and importantly, can you trust that they’re better than the existing rules that are already in the legacy technology?” 

Richard Graham

Explainability, he claims, can raise confidence levels and eliminate some of the false positives that plague traditional procedures.  

Model Explainability Objectives 

According to Graham, a well-designed model will show the user all of the data it used to reach a decision. For example, it will look at whether a bank app user has logged in several times from different places and, if so, whether their purchasing habits have changed. To dig into the avalanche of pandemic-era online payments, financial institutions must carefully analyze the data associated with those red flags.  

“[That] will give the investigator lower false positives, and they are going to better understand the different types of fraudulent activity that is coming through, and they are getting more complete information to base their decisions on.”

Richard Graham
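To make that concrete, here is a hypothetical sketch of the kind of evidence record an investigator could be shown alongside an alert: where recent app logins came from, and how the recent purchase mix compares with the customer’s usual habits. The field names and the red-flag rule are illustrative assumptions, not a description of any particular product.

```python
# Illustrative sketch: assembling the evidence an investigator might be shown
# alongside an alert. Field names and the red-flag rule are hypothetical.
from collections import Counter

def login_location_flags(recent_logins: list[dict], window: int = 10) -> dict:
    """Summarize where the last few app logins came from."""
    locations = [login["city"] for login in recent_logins[-window:]]
    counts = Counter(locations)
    return {
        "distinct_locations": len(counts),
        "locations": dict(counts),
        "multiple_locations": len(counts) > 2,   # hypothetical red-flag rule
    }

def spending_shift(usual_mix: dict[str, float], recent_mix: dict[str, float]) -> dict:
    """Compare the recent purchase-category mix with the customer's usual mix."""
    categories = set(usual_mix) | set(recent_mix)
    drift = {c: recent_mix.get(c, 0.0) - usual_mix.get(c, 0.0) for c in categories}
    biggest = max(drift, key=lambda c: abs(drift[c]))
    return {"category_drift": drift, "largest_shift": biggest}

# Everything the alert was based on, gathered in one place for the investigator.
evidence = {
    "logins": login_location_flags([
        {"city": "London"}, {"city": "London"}, {"city": "Lagos"}, {"city": "Manila"},
    ]),
    "spending": spending_shift(
        usual_mix={"groceries": 0.6, "transport": 0.3, "electronics": 0.1},
        recent_mix={"groceries": 0.1, "transport": 0.1, "electronics": 0.8},
    ),
}
print(evidence)
```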

This benefits end users as well. A bank, for example, can tell the end customer not only that account activity is “many standard deviations” above what would be considered usual spending, but also that someone posing as them has logged in from a new location they have never transacted from. 
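The “many standard deviations” idea is simple arithmetic, and a minimal sketch of it might look like the following, with the three-sigma cutoff and the message wording chosen purely for illustration.

```python
# Illustrative arithmetic for "many standard deviations above usual spend".
# The 3-sigma cutoff and the customer-facing wording are hypothetical examples.
from statistics import mean, stdev

def spend_deviation_message(past_spend: list[float], new_amount: float) -> str | None:
    mu, sigma = mean(past_spend), stdev(past_spend)
    z = (new_amount - mu) / sigma if sigma else float("inf")
    if z <= 3:
        return None  # within the range of normal spending, nothing to explain
    return (f"A payment of {new_amount:.2f} is {z:.1f} standard deviations above "
            f"your typical spend of about {mu:.2f}.")

# A customer who usually spends around 50 suddenly has a 950 payment.
print(spend_deviation_message([42.0, 55.0, 38.0, 61.0, 47.0], 950.0))
```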

Banks will be able to use risk and fraud management as a competitive and strategic advantage in 2022 and beyond, he said. 

“Fraud will force every single bank to fight for its reputation, and FIs are already starting to convey that they have the fraud controls in place to better protect customers. Fraud prevention technology is going to be a differentiator in 2022.” 

Richard Graham

Information from PYMNTS
