Too many TLAs (Three Letter Acronyms), I agree. Earlier this week the Financial Conduct Authority (FCA) published the results of a pilot programme called Digital Regulatory Reporting. It was an exploratory effort to understand the feasibility of using Distributed Ledger Technology (DLT) and Natural Language Processing (NLP) to automate regulatory reporting at scale.
Let me describe the regulatory reporting process that banks and regulators go through. That will help us understand the challenges (and hence the opportunities) with regulatory reporting.
Generally, on a pre-agreed date, the regulators release templates of the reports that banks need to provide them.
Banks have an army of analysts going through these templates, documenting the data items required in the reports, and then mapping them to internal data systems.
These analysts also work out how the bank’s internal data can be transformed to arrive at the report as the end result.
These reports are then developed by the technology teams, and then submitted to the regulators after stringent testing of the infrastructure and the numbers.
Every time the regulators change the structure of, or the data required on, a report, the analysis and the build process have to be repeated.
I have greatly simplified the process, but even this version helps identify the areas where things could go wrong.
Regulatory reporting requirements are often quite generic and high level. Interpreting and breaking them down into terms that banks' internal data experts and IT teams understand is quite a challenge, and often error prone.
Even if the interpretation is right, data quality in banks is so poor that analysts and data experts struggle to identify the right internal data.
Banks' systems and processes carry so much legacy that even the smallest change to these reports, once developed, takes a long time.
Regulatory projects invariably have time and budget constraints, which means they are built with one purpose – getting the reports out of the door. Functional scalability of the regulatory reporting system is not a priority for the decision makers in banks. So, when a new yet related reporting requirement comes in from the regulators, banks end up redoing the entire process.
Manual involvement introduces errors, and firms often incur punitive regulatory fines if they get their reports wrong.
From a regulator's perspective, it is hard to make sure that the reports coming in from different banks contain the right data. There is no inter-bank verification of the data quality of these reports.
Now, to the exciting bits. The FCA conducted a pilot called "Digital Regulatory Reporting" with six banks: Barclays, Credit Suisse, Lloyds, Nationwide, NatWest and Santander. The pilot involved the following:
Developing a prototype of a machine-executable reporting system – this would mitigate the risks of manual involvement.
A standardised set of financial data definitions across all banks, to ensure consistency and enable automation.
Creating machine-executable regulation – a Domain Specific Language (DSL), a special set of semantics, was tried to achieve this. The aim was to rewrite regulatory texts into stripped-down, structured, machine-readable formats. A small subset of the regulatory text was also converted to executable code based on this framework (a minimal sketch of the idea appears after this list).
Coding the logic of the regulation in JavaScript and executing it as DLT-based smart contracts.
Using NLP to parse through regulatory texts and automatically populate databases that regulatory reports run on.
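To make the machine-executable regulation idea concrete, here is a minimal sketch (in Python here, rather than the JavaScript used in the pilot) of what it could look like: a reporting rule captured as structured data, and a generic interpreter that runs it against a bank's data. The rule, field names, thresholds and data are all invented for illustration; this is not the pilot's actual DSL or semantics.

```python
# A minimal sketch of "machine executable regulation", NOT the pilot's DSL:
# a reporting rule is expressed as structured data, and generic code
# executes it against a bank's (hypothetical) internal dataset.

# Toy rule: "report every loan with an outstanding balance over 1m GBP".
RULE = {
    "report_name": "large_loan_exposures",
    "source": "loans",
    "filter": {"field": "outstanding_balance", "op": "gt", "value": 1_000_000},
    "fields": ["loan_id", "counterparty", "outstanding_balance"],
}

OPS = {"gt": lambda a, b: a > b, "lt": lambda a, b: a < b, "eq": lambda a, b: a == b}

def execute_rule(rule, datasets):
    """Interpret a structured rule and produce the report rows."""
    rows = datasets[rule["source"]]
    flt = rule["filter"]
    matched = [r for r in rows if OPS[flt["op"]](r[flt["field"]], flt["value"])]
    return [{f: r[f] for f in rule["fields"]} for r in matched]

# Hypothetical internal data, standing in for a bank's loan book.
datasets = {
    "loans": [
        {"loan_id": "L1", "counterparty": "Acme Ltd", "outstanding_balance": 2_500_000},
        {"loan_id": "L2", "counterparty": "Widget Co", "outstanding_balance": 400_000},
    ]
}

print(execute_rule(RULE, datasets))
# [{'loan_id': 'L1', 'counterparty': 'Acme Ltd', 'outstanding_balance': 2500000}]
```

The point of the sketch is that the rule itself is data, not code: if the regulator changes the rule, no bank-side development is needed to regenerate the report.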
If all of the above streams of effort had been completely successful, we would have a world where regulators write regulations in a standard DSL. These would be automatically converted to machine-executable code and executed as smart contracts on a blockchain. NLP algorithms would feed data into the reporting database, so that the data is ready when the smart contracts execute. On execution, the reports would be sent from the banks to the regulators in a standardised format.
This would have meant a few billion pounds in savings for UK banks, which on average spend £5 billion per year on regulatory programmes. However, like most pilots, only part of the programme could be termed successful. Banks didn't have the resources to complete all of the above aspects of the pilot, and they identified the following drawbacks.
Creating regulatory text in a DSL, so that machines can automatically create and execute code, may not be scalable enough for the regulators. Also, if the generated code is defective, it would be hard to hold someone accountable for erroneous reports.
NLP required a lot of human oversight to reach the desired level of accuracy in understanding regulatory texts, so human intervention was still needed to convert the text to code.
Standardising data elements specific to a regulator was not a viable option, and the costs involved in doing so are prohibitive.
While the pilot had quite a few positive outcomes and learnings, moving from pilot to production would be expensive.
The pilot demonstrated that:
A system where regulators could just change some parameters at their end and re-purpose a report would enable automated regulatory reporting (see the sketch after this list).
Centralising processes that banks currently carry out locally creates significant efficiencies.
The time and cost of regulatory reporting changes could be dramatically reduced.
Using DLT could reduce the amount of data being transferred across parties and create a secure infrastructure.
When data is standardised into machine-readable formats, it removes ambiguity and the need for human interpretation, improving the quality of both the data and the reports.
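As a rough illustration of the "change some parameters and re-purpose a report" idea from the list above, the sketch below defines a report entirely by a parameter set, so the regulator could alter a threshold or reporting frequency without the bank rebuilding anything. All names, thresholds and data are hypothetical; this is not how the pilot actually implemented it.

```python
# A hedged sketch of a parameter-driven report definition: the regulator
# changes parameters, not code, to re-purpose the report.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ReportSpec:
    name: str
    threshold: float   # minimum exposure to include (invented parameter)
    currency: str
    frequency: str     # "daily", "weekly", "monthly", ...

def run_report(spec, exposures):
    """Filter the bank's exposures according to the regulator-supplied spec."""
    return [e for e in exposures
            if e["currency"] == spec.currency and e["amount"] >= spec.threshold]

exposures = [
    {"counterparty": "Acme Ltd", "currency": "GBP", "amount": 5_000_000},
    {"counterparty": "Widget Co", "currency": "GBP", "amount": 750_000},
]

base = ReportSpec("large_exposures", threshold=1_000_000, currency="GBP", frequency="monthly")
print(run_report(base, exposures))          # original report

# The regulator "re-purposes" the report by changing parameters only.
weekly_lower = replace(base, threshold=500_000, frequency="weekly")
print(run_report(weekly_lower, exposures))  # same code, new report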
In a recent article on Robo-Regulators, I highlighted the possibilities of AI taking over the job of a regulator. That was perhaps more radical blue-sky thinking. However, using NLP and DLT to create automated regulatory reporting definitely sounds achievable. Will banks and the regulators be willing to take the next steps in moving to such a system? Watch this space.
These are interesting times with Brexit around the corner, an Indo-Pak (China) war looming, and a disastrous trade relationship between the two largest economies of the world.
This week I delivered a speech at Cass Business School on how, and whether, AI could help in dealing with recessions. There is so much noise about the next recession that I wonder if people would actually prefer a recession to cool down the economy a bit.
And expectations around Constantinople are pushing up crypto prices again – although I don't believe for a second that the crypto market is big enough yet to trigger a recession.
Assume you are driving a Ford Fiesta: can the speed indicator on your dashboard keep you from having an accident? Upgrading your car to a more sophisticated, intelligent one would certainly help, but that doesn't prevent you from having an accident either. Even self-driving cars could be hacked, or could have a bug that causes accidents.
AI, machine learning, or any variation of data-driven intelligence as we know it today can provide us with suggestions – and clever ones.
But expecting rational machines to fix a market filled with irrational human exuberance is a tall ask.
The dot-com bubble burst and the subprime mortgage crash happened because too much liquidity in the market led to bad lending and spending decisions. It only took a trigger like a policy change or the collapse of Lehman Brothers to sap liquidity out of the market. So, what are the signs now?
How can we get intelligent with the data around us and spot recessions? An analysis of consumer data should provide us with a view of consumer behaviour, and come close to predicting where inflation is heading. One of the firms I recently met used open banking to collect consumers' data, enrich it, and help them manage their finances. But the intelligence they gather from millions of transaction-level data points is used by their institutional clients to understand customer sentiment towards a brand.
Those insights, combined with macroeconomic data, should give these institutions the intelligence to choose their investments. The applications of open banking have largely been focused on selling services to customers in a personalised fashion. However, open banking data should help us understand where the economy is heading too.
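As a rough sketch of how transaction-level open banking data might be rolled up into an economy-level signal, the example below aggregates invented, categorised transactions into month-on-month spend changes per category – a crude proxy for the consumer-behaviour and inflation trends mentioned above. The data, categories and figures are purely illustrative, not taken from any real provider.

```python
# A minimal sketch, with invented data, of turning transaction-level open
# banking data into a crude consumer-spend trend per category.
from collections import defaultdict

transactions = [
    # (month, category, amount in GBP) -- hypothetical, anonymised records
    ("2019-01", "groceries", 212.40), ("2019-01", "fuel", 95.10),
    ("2019-02", "groceries", 228.90), ("2019-02", "fuel", 101.30),
]

# Sum spend per (month, category).
spend = defaultdict(float)
for month, category, amount in transactions:
    spend[(month, category)] += amount

# Month-on-month change per category: a rough, aggregate behaviour signal.
for category in ("groceries", "fuel"):
    jan, feb = spend[("2019-01", category)], spend[("2019-02", category)]
    change = (feb - jan) / jan * 100
    print(f"{category}: {change:+.1f}% month-on-month")
```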
Risk management functions in banks and financial institutions have been beefed up since the recession. About £5 billion is spent in the UK alone on risk and regulatory projects every year. The ability to perform scalable simulations in a quantum-computing-ready world will help banks provide near real-time risk management solutions.
In capital markets, we model the risk of a position by applying several risk factors to it. Often these risk factors are correlated with each other. Modelling the effect of a dozen or more correlated risk factors on a firm's position is hard for conventional computers, and as the number of correlated risk factors increases, the computational power required to calculate risk grows exponentially. This is one of the key issues with simulations (not just in financial services) that quantum computers are capable of solving.
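To make the scaling problem concrete, here is a classical (non-quantum) Monte Carlo sketch of simulating a position against a few correlated risk factors, using a Cholesky decomposition of an assumed correlation matrix. The correlations, volatilities, sensitivities and position values are all invented; the point is only to show the kind of simulation whose cost balloons as factors and scenarios are added.

```python
# A classical Monte Carlo sketch (NOT a quantum algorithm): simulate a
# position's P&L under several correlated risk factors. Correlations,
# volatilities and sensitivities below are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Assumed correlation matrix for three risk factors (e.g. rates, FX, credit spread).
corr = np.array([
    [1.0, 0.5, 0.2],
    [0.5, 1.0, 0.3],
    [0.2, 0.3, 1.0],
])
vols = np.array([0.01, 0.02, 0.015])        # daily volatilities per factor
sensitivities = np.array([1e6, -5e5, 8e5])  # position P&L per unit factor move

# Cholesky factor turns independent normals into correlated factor shocks.
L = np.linalg.cholesky(corr)
n_scenarios = 100_000
z = rng.standard_normal((n_scenarios, 3))
shocks = z @ L.T * vols                     # correlated daily factor moves

pnl = shocks @ sensitivities                # P&L per scenario
var_99 = -np.percentile(pnl, 1)             # 1-day 99% Value at Risk estimate
print(f"99% 1-day VaR: ~{var_99:,.0f} GBP")
```

With three factors this runs in a blink; with dozens of correlated factors, path-dependent products and millions of scenarios, the same approach is what strains conventional compute.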
Eleven years ago, when the recession happened, regulators were ill-equipped to react due to the lack of real-time insights. Today they have regular reports from banks on transactions, and better ways to understand consumer behaviour. That, combined with macroeconomic data trends, should provide enough indicators for regulators to set policies. So when a tax law that could trigger a collapse is being proposed, they should be able to come up with strategies to bring the law into effect with minimal damage to the economy.
In the machine learning world, there are two different approaches – supervised and unsupervised models. If you understand the problem well, you typically go for the supervised model and see how the dependent variable is affected by the independent variables.
However, I believe, recessions often have the habit of hitting us from a blind spot. We don’t know what we don’t know.
It's important for regulators and central banks to run exploratory analysis – unsupervised models – and assess the patterns and anomalies that the algorithms throw up.
Data from consumer behaviour, geopolitical events, macroeconomics and the markets should give these algorithms enough to identify the patterns that bring about recessions. This may not necessarily help us avoid a recession, but it could reduce the impact of a sudden one, or help us engineer a controlled slowdown when we want to cool the economy down.
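As a sketch of the exploratory, unsupervised analysis suggested above (in contrast to a supervised model with a known target variable), the example below fits scikit-learn's Isolation Forest to a small set of invented monthly indicators and flags the months that look anomalous. The features, values and "stress" rows are purely illustrative, not real regulatory data.

```python
# A hedged sketch of unsupervised anomaly detection on macro-style indicators.
# All numbers are invented; in practice these would be consumer-spend,
# market and macroeconomic features assembled by the regulator.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# 36 "months" of three indicators: spend growth, credit growth, volatility index.
normal_months = rng.normal(loc=[2.0, 4.0, 15.0], scale=[0.5, 1.0, 3.0], size=(34, 3))
stress_months = np.array([[-3.0, 12.0, 45.0],   # invented "pre-crisis"-looking rows
                          [-4.5, 14.0, 55.0]])
X = np.vstack([normal_months, stress_months])

model = IsolationForest(contamination=0.06, random_state=0)
labels = model.fit_predict(X)               # -1 marks a month the model finds anomalous

for i, label in enumerate(labels):
    if label == -1:
        print(f"month {i}: flagged as anomalous -> {X[i].round(1)}")
```

No labelled "recession" outcomes are needed here; the model simply surfaces months that look unlike the rest, which analysts would then investigate.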