Is it Artificially Intelligent or Naturally Stupid? Let’s ask Apple

Earlier this week, there was an allegation that the credit scoring engine behind the Apple Card was biased. It emerged from the Twitter account of David Heinemeier Hansson (@dhh), who raised the issue that his wife had been given a credit limit 20 times lower than his. David has about 360K followers on Twitter, and the […]


FCA pioneers digitising regulatory reporting using DLT and NLP

Too many TLAs (Three Letter Acronyms), I agree. Earlier this week the Financial Conduct Authority (FCA) published the results of a pilot programme called Digital Regulatory Reporting. It was an exploratory effort to understand the feasibility of using Distributed Ledger Technology (DLT) and Natural Language Processing (NLP) to automate regulatory reporting at scale.


Let me describe the regulatory reporting process that banks and regulators go through. That will help in understanding the challenges (and hence the opportunities) in regulatory reporting.

  1. Generally, on a pre-agreed date, the regulators release templates of the reports that banks need to provide them.
  2. Banks have an army of analysts going through these templates, documenting the data items required in the reports, and then mapping them to internal data systems.
  3. These analysts also work out how the bank’s internal data can be transformed to arrive at the report as the end result (a code sketch of this mapping follows the list).
  4. These reports are then developed by the technology teams, and then submitted to the regulators after stringent testing of the infrastructure and the numbers.
  5. Every time the regulators change the structure of the report or the data required on it, the analysis and the build process have to be repeated.
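To make steps 2 and 3 concrete, here is a minimal sketch with hypothetical field names and figures: a mapping from the regulator’s template items to internal data sources, plus the transformation that derives each reported figure.

```python
# Hypothetical sketch of steps 2 and 3: map the regulator's template items to the
# bank's internal data sources, then derive each reported figure from them.
# All field names and figures below are made up for illustration.

# Regulator's template item -> internal source field(s) identified by the analysts
TEMPLATE_MAPPING = {
    "total_retail_deposits": ["core_banking.deposits.retail_balance"],
    "tier1_capital":         ["finance.capital.cet1", "finance.capital.at1"],
}

def build_report(internal_data: dict) -> dict:
    """Derive each template item by summing the internal fields mapped to it."""
    return {item: sum(internal_data[field] for field in fields)
            for item, fields in TEMPLATE_MAPPING.items()}

# A toy snapshot of the bank's internal data
snapshot = {
    "core_banking.deposits.retail_balance": 120_000_000,
    "finance.capital.cet1": 9_000_000,
    "finance.capital.at1": 2_500_000,
}
print(build_report(snapshot))
# {'total_retail_deposits': 120000000, 'tier1_capital': 11500000}
```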

I have greatly simplified the process, but even this simplified view helps identify the areas where things can go wrong.

  1. Regulatory reporting requirements are often quite generic and high level. Interpreting them and breaking them down into terms that banks’ internal data experts and IT teams understand is quite a challenge, and often error prone.
  2. Even if the interpretation is right, data quality in banks is often so poor that analysts and data experts struggle to identify the right internal data.
  3. Banks’ systems and processes are so weighed down by legacy that even the smallest change to these reports, once developed, takes a long time.
  4. Regulatory projects invariably have time and budget constraints, which means they are built with a single purpose: getting the reports out of the door. Functional scalability of the regulatory reporting system is not a priority for decision makers in banks. So, when a new yet related reporting requirement comes in from the regulators, banks end up redoing the entire process.
  5. Manual involvement introduces errors, and firms often incur punitive regulatory fines if they get their reports wrong.
  6. From a regulator’s perspective, it is hard to make sure that the reports coming in from different banks contain the right data. There is no inter-bank verification of the data quality of the reports.

Now, to the exciting bits. The FCA conducted a pilot called “Digital Regulatory Reporting” with six banks: Barclays, Credit Suisse, Lloyds, Nationwide, NatWest and Santander. The pilot involved the following:

  1. Developing a prototype of a machine-executable reporting system, which would mitigate the risks of manual involvement.
  2. Agreeing a standardised set of financial data definitions across all banks, to ensure consistency and enable automation.
  3. Creating machine-executable regulation. A Domain Specific Language (DSL) was used to rewrite regulatory texts into stripped-down, structured, machine-readable formats, and a small subset of the regulatory text was converted into executable code based on this framework.
  4. Coding the logic of the regulation in JavaScript and executing it using DLT-based smart contracts.
  5. Using NLP to parse regulatory texts and automatically populate the databases that regulatory reports run on (a toy illustration follows this list).
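On the NLP point, here is a deliberately naive illustration (my own toy pattern and example sentence, not the pilot’s models) of pulling a reportable data item and its threshold out of a line of regulatory text. Real regulatory language needs far more than a pattern match, which is exactly why the pilot found heavy human oversight was still required.

```python
# Toy illustration only: extract a data item and threshold from a single,
# conveniently worded sentence of "regulatory text" using a simple pattern.

import re

text = "A firm must maintain a liquidity coverage ratio of at least 100%."

match = re.search(r"maintain an? ([\w\s]+?) of at least (\d+)%", text)
if match:
    item, threshold = match.group(1), int(match.group(2)) / 100
    print({"data_item": item.replace(" ", "_"), "min_value": threshold})
    # {'data_item': 'liquidity_coverage_ratio', 'min_value': 1.0}
```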

If these streams of work had been completely successful, we would have a world in which regulators create regulations using DSL standards. These would be automatically converted into machine-executable code and executed on a blockchain using smart contracts. NLP algorithms would populate the reporting database, so the data would be ready when the smart contracts execute. On execution, the reports would be sent from the banks to the regulators in a standardised format.
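To make that end state concrete, here is a minimal sketch of the structured-rule-to-report part of the pipeline. It is written in Python rather than the pilot’s JavaScript, with made-up rule names, data items and thresholds, and with the DLT and NLP pieces left out.

```python
# A made-up, minimal version of the pipeline described above: a structured,
# DSL-style rule set is turned into executable checks and run against a bank's
# standardised data to produce a report. The pilot expressed the logic in JavaScript
# and executed it in DLT-based smart contracts; none of that infrastructure is shown here.

RULES = [
    # Hypothetical rules: each names a standardised data item, an operator and a threshold.
    {"id": "LIQ-01", "item": "liquidity_coverage_ratio", "op": ">=", "threshold": 1.00},
    {"id": "CAP-01", "item": "tier1_capital_ratio",      "op": ">=", "threshold": 0.06},
]

OPS = {">=": lambda value, threshold: value >= threshold}

def run_report(standardised_data: dict) -> list:
    """Execute every rule against the standardised data and emit one report row per rule."""
    return [
        {
            "rule": rule["id"],
            "value": standardised_data[rule["item"]],
            "compliant": OPS[rule["op"]](standardised_data[rule["item"]], rule["threshold"]),
        }
        for rule in RULES
    ]

# A regulator could re-purpose the report simply by changing a threshold or adding a rule.
print(run_report({"liquidity_coverage_ratio": 1.2, "tier1_capital_ratio": 0.05}))
```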

This would have meant a few billion pounds in savings for UK banks, which on average spend £5 Billion per year on regulatory programmes. However, like most pilots, only part of the programme could be termed successful. The banks didn’t have the resources to complete all the above aspects of the pilot successfully, and they identified the following drawbacks.

  1. Creating regulatory text in a DSL, so that machines can automatically create and execute code, may not be scalable enough for the regulators. Also, if the generated code is defective, it is hard to hold anyone accountable for the resulting errors in the reports.
  2. NLP required a lot of human oversight to reach the desired level of accuracy in understanding regulatory texts, so human intervention is still needed to convert them into code.
  3. Standardising data elements specific to one regulator was not a viable option, and the costs involved in doing so are prohibitive.
  4. While the pilot had quite a few positive outcomes and learnings, moving from pilot to production would be expensive.

The pilot demonstrated that:

  1. A system where regulators could simply change some parameters at their end and re-purpose a report would enable automated regulatory reporting.
  2. Centralising processes that banks currently carry out locally creates significant efficiencies.
  3. The time and cost of regulatory reporting change could be reduced dramatically.
  4. Using DLT could reduce the amount of data being transferred between parties and create a more secure infrastructure.
  5. When data is standardised into machine-readable formats, it removes ambiguity and the need for human interpretation, improving the quality of both the data and the reports.

In a recent article on Robo-Regulators, I highlighted the possibilities of AI taking over the job of a regulator. That was perhaps more radical blue-sky thinking. However, using NLP and DLT to create automated regulatory reporting definitely sounds achievable. Will banks and the regulators be willing to take the next steps in moving to such a system? Watch this space.




Wolf Wolf!! Recession is coming – but can AI help?

These are interesting times with Brexit around the corner, an Indo-Pak (China) war looming, and a disastrous trade relationship between the two largest economies of the world.

This week I delivered a speech at Cass Business School on how, and whether, AI could help in dealing with recessions. There is so much noise about the next recession that I wonder if people would actually prefer a recession, just to cool the economy down a bit.


And expectations around Constantinople (the Ethereum upgrade) are pushing up crypto prices again, although I don’t believe for a second that the crypto market is yet big enough to trigger a recession.

Assume you are driving a Ford Fiesta: can the speed indicator on your dashboard keep you from having an accident? Upgrading to a more sophisticated, intelligent car would certainly help, but that doesn’t prevent an accident either. Even self-driving cars could be hacked, or could have a bug that causes an accident.

AI, machine learning, or any variation of data-driven intelligence as we know it today can provide us with suggestions, and clever ones at that.

But expecting rational machines to fix a market filled with irrational human exuberance is a tall ask.

The dot-com bubble burst and the subprime mortgage crash happened because too much liquidity in the market led to bad lending and spending decisions. It only took a trigger, like a policy change or the collapse of Lehman Brothers, to sap liquidity out of the market. So, what are the signs now?


How can we get intelligent with the data around us and spot recessions? An analysis of consumer data should give us a view of consumer behaviour and go some way towards predicting where inflation is heading. One of the firms I recently met uses open banking to collect consumers’ data, enrich it, and help them manage their finances. The intelligence it gathers from millions of transaction-level data points is also used by its institutional clients to understand customer sentiment towards a brand.

Those insights, combined with macroeconomic data, should give these institutions the intelligence to choose their investments. The applications of open banking have largely been focused on selling services to customers in a personalised fashion. However, open banking data should also help us understand where the economy is heading.
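As a toy illustration of the idea (the categories, amounts and workflow here are my own, not the firm’s), transaction-level open banking data can be rolled up into month-on-month spending growth by category, a signal that sits naturally alongside inflation data.

```python
# Toy sketch: aggregate made-up transaction-level data into a month-on-month
# spending growth signal per category.

import pandas as pd

transactions = pd.DataFrame({
    "month":    ["2019-01", "2019-01", "2019-02", "2019-02"],
    "category": ["groceries", "fuel", "groceries", "fuel"],
    "amount":   [250.0, 90.0, 265.0, 84.0],
})

# Total spend per month and category, then month-on-month percentage change
monthly = transactions.groupby(["month", "category"])["amount"].sum().unstack()
spend_growth = monthly.pct_change()
print(spend_growth)
```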

Risk management functions in banks and financial institutions have been beefed up since the last recession. About £5 Billion is spent in the UK alone on risk and regulatory projects every year. The ability to run scalable simulations in a quantum-computing-ready world will help banks provide near-real-time risk management solutions.

In capital markets, we model the risk of a position by applying several risk factors to it. Often these risk factors are correlated with each other. Modelling the effect of a dozen or more correlated risk factors on a firm’s position is hard for conventional computers, and as the number of correlated risk factors increases, the computational power required to calculate the risk grows exponentially. This is one of the key simulation problems (not just in financial services) that quantum computers are expected to be capable of solving.
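To give a feel for the computation on a classical machine, here is a minimal Monte Carlo sketch, with made-up volatilities, correlations and sensitivities, that draws correlated risk-factor shocks via a Cholesky decomposition and estimates a 99% VaR for a toy linear position.

```python
# Minimal sketch of a correlated-risk-factor Monte Carlo VaR calculation.
# All figures (correlations, volatilities, sensitivities) are invented for illustration.

import numpy as np

n_factors, n_scenarios = 12, 100_000
rng = np.random.default_rng(42)

# Toy correlation matrix: 0.3 correlation between every pair of factors
corr = np.full((n_factors, n_factors), 0.3) + 0.7 * np.eye(n_factors)
chol = np.linalg.cholesky(corr)

# Correlated standard-normal shocks, scaled by per-factor volatilities
vols = np.full(n_factors, 0.02)
shocks = rng.standard_normal((n_scenarios, n_factors)) @ chol.T * vols

# Linear sensitivities of the position to each factor (toy numbers)
sensitivities = np.full(n_factors, 1_000_000.0)
pnl = shocks @ sensitivities

# 99% VaR: the loss exceeded in only 1% of simulated scenarios
var_99 = -np.percentile(pnl, 1)
print(f"99% one-day VaR: {var_99:,.0f}")
```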

Eleven years ago, when the recession hit, regulators were ill-equipped to react because they lacked real-time insights. Today they receive regular reports from banks on transactions and have better ways to understand consumer behaviour. That, combined with macroeconomic data trends, should provide enough indicators for regulators to set policy. So, when a tax law that could trigger a collapse is being proposed, they should be able to come up with strategies to bring the law into effect with minimal damage to the economy.

In the machine learning world, there are two broad approaches: supervised and unsupervised models. If you understand the problem well, you typically go for a supervised model and look at how the dependent variable is affected by the independent variables.

However, I believe recessions have a habit of hitting us from a blind spot. We don’t know what we don’t know.

It’s important for regulators and central banks to run exploratory analysis using unsupervised models, and to assess the patterns and anomalies that the algorithms throw up.

Data from consumer behaviour, geopolitical events, macroeconomics and the markets should give these algorithms enough to identify the patterns that precede recessions. This may not necessarily help us avoid a recession, but it could reduce the impact of a sudden one, or help us engineer a controlled recession when we want the economy to cool down.
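As a hedged sketch of what that exploratory analysis might look like (the indicators and numbers below are entirely made up), an isolation forest, one common unsupervised technique, can flag periods whose combination of indicators looks unlike the rest.

```python
# Toy unsupervised anomaly detection on invented macro indicators.
# Columns: credit growth, household spending growth, market volatility.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_periods = rng.normal(loc=[0.02, 0.015, 0.10], scale=0.005, size=(120, 3))
stressed_period = np.array([[0.08, -0.02, 0.45]])   # a deliberately obvious outlier
indicators = np.vstack([normal_periods, stressed_period])

model = IsolationForest(contamination=0.02, random_state=0).fit(indicators)
flags = model.predict(indicators)   # -1 marks anomalies; the outlier at index 120 should be among them
print("Anomalous periods:", np.where(flags == -1)[0])
```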


Arunkumar Krishnakumar is a Venture Capital investor at Green Shores Capital focusing on Inclusion and a podcast host.

Get fresh daily insights from an amazing team of Fintech thought leaders around the world. Ride the Fintech wave by reading us daily in your email