FinServ in the age of AI – Can the FCA keep the machines in check?


I landed in the UK about 14 years ago. I remember my initial months here, when I struggled to get a credit card. This was because the previous tenant at my address had unpaid loans, and as a result the credit agencies had linked my address to credit defaults.

It took me some time to understand why my applications for a post-paid mobile contract, a decent bank account and a credit card were all rejected. It took me longer to turn my credit score around and build a decent credit file.

I wrote a letter to Barclays every month explaining the situation, until one fine day they rang my desk phone at work to tell me that my credit card had been approved. It was ironic because I was a Barclays employee at the time. I had started on the lowest rungs of the credit ladder through no fault of my own. Times (should) have changed.

Artificial Intelligence, Machine Learning, Deep Learning, Neural Networks and a whole suite of methodologies to make clever use of customer data have been on the rise. Many of these techniques have been around for several decades. However, only in recent times have they become more mainstream.

The social media boom has created data at such an unforeseen scale and pace that algorithms have been able to identify patterns and get better at prediction. Without the vast amounts of data we create daily, machines would lack the intelligence to serve us. However, machines rely on high-quality data to produce accurate results. As they say: garbage in, garbage out.

Several Fintechs these days are exploring ways to use AI to provide more contextual, relevant and quick services to consumers. Gone are the days when AI was considered emerging/deep tech. A strong data intelligence capability is nowadays a default feature of every company that pitches to VCs.

As AI investments in Fintech hit record highs, it’s time regulators started thinking about the on-the-ground challenges of using AI for financial services. The UK’s FCA has partnered with the Alan Turing Institute to study explainability and transparency in the use of AI.

Three key scenarios come up when I think about what could go wrong in the marriage of humans and machines in financial services.

  • First, when a customer wants a service from a bank (say a loan) and a complex AI algorithm comes back with a “NO”, what happens?
    • Will the bank need to explain to the customer why their loan application was not approved?
    • Will the customer services person understand the algorithm enough to explain the rationale for the decision to the customer?
    • What should banks do to train their staff to work with machines?
    • If a machine’s decision in a critical scenario needs to be challenged, what is the exception process that the staff needs to use?
    • How will such an exception process be reported to the regulators to prevent malpractice by bank staff?
  • Second, as AI depends massively on data, what happens if the data used to train the machines is bad? By bad, I mean biased. Data used to train machines should be not only accurate but also representative of the population the model will serve. If a machine trained on bad data makes wrong decisions, who will be held accountable? (A toy version of this kind of bias check is sketched after this list.)
  • Third, checks and controls need to be in place to ensure that regulators understand the complex algorithms used by banks. This understanding is absolutely essential to ensure technology doesn’t create systemic risks.
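
To make the bias point concrete, here is a minimal sketch of the kind of pre-training check a bank could run: comparing approval rates across groups in the historical data used as labels. The data, the column names and the four-fifths threshold are illustrative assumptions, not a regulatory standard.

```python
# Hypothetical sketch: checking historical lending data for skew before
# it is used to train a model. All data and thresholds are invented.
import pandas as pd

history = pd.DataFrame({
    "postcode_area": ["A", "A", "B", "B", "B", "A"],
    "approved":      [1,   1,   0,   0,   1,   1],
})

# Approval rate per group in the training labels.
rates = history.groupby("postcode_area")["approved"].mean()
print(rates)

# Crude disparate-impact check, loosely modelled on the "four-fifths"
# rule: flag if any group's approval rate is below 80% of the highest.
if rates.min() / rates.max() < 0.8:
    print("Warning: training labels look skewed across postcode areas; "
          "a model trained on them may inherit this bias.")
```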

From a consumer’s perspective, the explainability of an algorithm deciding their creditworthiness is critical. For example, some banks are looking at simplifying the AI models used to make lending decisions. This would certainly help bank staff understand, and consumers appreciate, the decisions made by machines.
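
As a rough illustration of what “simplifying the model” can mean in practice, a shallow decision tree over a handful of features can be printed as human-readable rules that a customer-services agent could walk a customer through. The features and data below are hypothetical, not any bank’s actual model.

```python
# Hypothetical sketch of an interpretable lending model: a shallow
# decision tree whose entire policy prints as a few readable rules.
from sklearn.tree import DecisionTreeClassifier, export_text
import numpy as np

# Made-up features: annual income, debt-to-income ratio, missed payments.
X = np.array([
    [25_000, 0.4, 2],
    [55_000, 0.2, 0],
    [40_000, 0.6, 3],
    [80_000, 0.1, 0],
    [30_000, 0.5, 1],
    [60_000, 0.3, 0],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = approve, 0 = decline

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The whole decision policy, readable by staff and customers alike.
print(export_text(model, feature_names=["income", "dti", "missed_payments"]))
```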

Some banks are also looking at reverse-engineering explainability when the AI algorithm is complex. The FCA and the Bank of England have tried this approach too: a complex model using several decision trees to identify high-risk mortgages had to be explained, and the solution was to create a separate explainability algorithm to present the decisions of the black-box machine.
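
A sketch of that reverse-engineering idea, often called a global surrogate model: fit the complex “black box”, then train a small, readable tree to mimic its outputs. The synthetic data and model choices below are assumptions for illustration, not the FCA/Bank of England implementation.

```python
# Hypothetical surrogate-model sketch: approximate a black-box ensemble
# with a shallow, explainable tree trained on the black box's outputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Stand-in for the complex model (synthetic data, arbitrary settings).
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the surrogate to predict the black box's decisions, not the
# ground truth: the tree becomes a readable summary of the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the readable model agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate agrees with the black box on {fidelity:.0%} of cases")
print(export_text(surrogate))
```

The key design point is that the surrogate is scored on agreement with the black box (fidelity), not on predictive accuracy: its job is to explain, not to decide.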

The pace at which startups are creating new solutions makes this even harder for service providers. In recent times I have come across two firms that help banks with credit decisions. The first firm collected thousands of data points about each consumer requesting a loan.

One of those data points was the set of fonts installed on the borrower’s laptop. If fonts associated with gambling websites were present, the borrower’s creditworthiness took a hit: the fonts suggested gambling habits, which in turn could point to poor money management.
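
As a toy illustration only (the firm’s actual method is not public), such a signal might be encoded as a simple binary feature fed into a scoring model. The font names here are invented:

```python
# Toy sketch of turning installed fonts into a credit-risk feature.
# Font names and the weighting are hypothetical, for illustration only.
GAMBLING_ASSOCIATED_FONTS = {"CasinoFlat", "LuckySpin"}  # invented names

def font_risk_feature(installed_fonts):
    """Return 1 if any font associated with gambling sites is installed."""
    return int(bool(set(installed_fonts) & GAMBLING_ASSOCIATED_FONTS))

print(font_risk_feature({"Arial", "LuckySpin"}))   # -> 1
print(font_risk_feature({"Arial", "Helvetica"}))   # -> 0
```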

The second firm had a chatbot that held a conversation with the borrower and, using psychometric analysis, came up with a score indicating the customer’s “intention to repay”. This could be a big opportunity for banks in emerging markets.

Despite the opportunities at hand, the algorithms of both these firms are black boxes. Maybe it’s time regulators ruled that technology making critical financial decisions needs to follow some rules of simplicity or transparency. Having moved on from the business of creating complex financial products, banks could now be creating complex machines that make unexplainable decisions. Can we keep the machines in check?


Arunkumar Krishnakumar is a Venture Capital investor at Green Shores Capital, focusing on Inclusion, and a podcast host.

I have no positions or commercial relationships with the companies or people mentioned. I am not receiving compensation for this post.

Subscribe by email to join Fintech leaders who read our research daily to stay ahead of the curve. Check out our advisory services (how we pay for this free original research).
