FinServ in the age of AI – Can the FCA keep the machines under check?

Image Source

I landed in the UK about 14 years ago, and I remember struggling to get a credit card in my initial months. This was because the previous tenant at my address had unpaid loans, and as a result the credit agencies had somehow linked my address to credit defaults.

It took me some time to understand why my applications for a pay-monthly mobile contract, a decent bank account and a credit card were all rejected. It took me longer still to turn my credit score around and build a decent credit file.

I wrote a letter to Barclays every month explaining the situation, until one fine day they rang my desk phone at work to tell me that my credit card had been approved. It was ironic, because I was a Barclays employee at the time. I had started on the lowest rungs of the credit ladder through no fault of my own. Times (should) have changed.

Artificial Intelligence, Machine Learning, Deep Learning, Neural Networks and a whole suite of methodologies to make clever use of customer data have been on the rise. Many of these techniques have been around for several decades. However, only in recent times have they become more mainstream.

The social media boom has created data at an unprecedented scale and pace, which has allowed algorithms to identify patterns and get better at prediction. Without the vast amount of data we create every day, machines lack the intelligence to serve us. Equally, machines rely on high-quality data to produce accurate results. As they say: garbage in, garbage out.

Several Fintechs these days are exploring ways to use AI to provide more contextual, relevant and quick services to consumers. Gone are the days when AI was considered emerging/deep tech. A strong data intelligence capability is nowadays a default feature of every company that pitches to VCs.

As AI investments in Fintech hit record highs, it's time regulators started thinking about the on-the-ground challenges of using AI for financial services. The UK's FCA has partnered with the Alan Turing Institute to study explainability and transparency in the use of AI.

Three key scenarios come up when I think about what could go wrong in the marriage of humans and machines in financial services.

  • First, when a customer wants a service from a bank (say a loan) and a complex AI algorithm comes back with a "NO", what happens?
    • Will the bank need to explain to the customer why their loan application was not approved?
    • Will the customer services person understand the algorithm well enough to explain the rationale for the decision to the customer?
    • What should banks do to train their staff to work with machines?
    • If a machine's decision in a critical scenario needs to be challenged, what is the exception process the staff need to use?
    • How will such an exception process be reported to the regulators to avoid malpractice by banks' staff?
  • Second, as AI depends massively on data, what happens if the data used to train the machines is bad? By bad, I mean biased. Data used to train machines should be not only accurate but also representative of the real world. If a machine trained on bad data makes wrong decisions, who will be held accountable?
  • Third, checks and controls need to be in place to ensure that regulators understand the complex algorithms used by banks. This understanding is absolutely essential to ensure technology doesn't create systemic risks.

From a consumer's perspective, the explainability of an algorithm deciding their creditworthiness is critical. For example, some banks are looking at simplifying the AI models used to make lending decisions. This would certainly help bank staff understand, and help consumers appreciate, decisions made by machines.

Some banks are also looking at reverse-engineering explainability when the AI algorithm is complex. The FCA and the Bank of England have tried this approach too: a complex model using several decision trees to identify high-risk mortgages had to be explained, and the solution was to build a separate explainability algorithm to present the decisions of the black-box machine.
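To make the idea concrete, here is a minimal sketch of that surrogate-model approach: a black-box gradient boosting model is trained on synthetic lending data, and a shallow decision tree is then fitted to mimic its decisions so that staff have something human-readable to point to. The data, feature names and thresholds are invented for illustration; this is not the FCA's, the Bank of England's or any bank's actual model.

```python
# Minimal surrogate-model sketch: explain a black-box lender with a small tree.
# All data and feature names here are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.normal(35000, 12000, n),   # income
    rng.uniform(0, 1, n),          # credit utilisation
    rng.integers(0, 5, n),         # missed payments
])
# Hypothetical "true" approval rule, with 5% label noise added
y = ((X[:, 0] > 25000) & (X[:, 1] < 0.7) & (X[:, 2] < 2)).astype(int)
flip = rng.random(n) < 0.05
y = np.where(flip, 1 - y, y)

black_box = GradientBoostingClassifier().fit(X, y)

# Surrogate: a shallow tree trained to imitate the black box's outputs
surrogate = DecisionTreeClassifier(max_depth=3)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate agrees with the black box on {fidelity:.1%} of cases")
print(export_text(surrogate,
                  feature_names=["income", "utilisation", "missed_payments"]))
```

The printed tree gives branch staff an approximation they can quote to a customer, while the fidelity figure tells the bank how faithful that approximation actually is.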

The pace at which startups are creating new solutions makes this even harder for service providers. In recent times I have come across two firms that help banks with credit decisions. The first firm collected thousands of data points about the consumer applying for a loan.

One of those data points was the set of fonts installed on the borrower's laptop. If fonts associated with gambling websites were present, the borrower's creditworthiness took a hit: the fonts indicated gambling habits and, by extension, behaviour that could lead to poor money management.

The second firm had a chatbot that held a conversation with the borrower and, using psychometric analysis, came up with a score indicating the customer's "intention to repay". This could be a big opportunity for banks in emerging markets.

Despite the opportunities at hand, the algorithms of both these firms are black boxes. Maybe it's time regulators ruled that technology making critical financial decisions needs to follow some rules of simplicity or transparency. From the business of creating complex financial products, banks could now be moving to the business of creating complex machines that make unexplainable decisions. Can we keep the machines under check?


Arunkumar Krishnakumar is a Venture Capital investor at Green Shores Capital focusing on Inclusion and a podcast host.

I have no positions or commercial relationships with the companies or people mentioned. I am not receiving compensation for this post.

Subscribe by email to join Fintech leaders who read our research daily to stay ahead of the curve. Check out our advisory services (how we pay for this free original research).


 

 

 


Not so fast, InsurTech- long-tailed and unique claims are the Kryptonite to your innovation super power

Nothing to fear, InsurTech Man! It’s just a busy claim!

Artificial intelligence, machine learning, data analysis, ecosystem insurance purchases, online claim handling, application-based insurance policies, claim handling in seconds, and so on. There's even instant parametric travel cover that reimburses costs immediately when one's planned flight is delayed. There are clever new risk assessment tools derived from black-box algorithms, but you know what? Those risk data are better than the industry has ever had! Super insurance, InsurTech heroes! But ask many insureds or claim handlers, and they'll tell you all about InsurTech's weakness, the kryptonite for insurance innovation's superheroes (I don't mean Insurance Nerd Tony Cañas): long-tailed or unique claims.

If insurance were easy you wouldn't be reading this. That sentence is simple; much of insurance is not. Determining risk profiles for thefts of bicycles in a metro area: easy. Same for auto/motor collision frequency and severity, water leaks, loss-of-use amounts, the cost of chest x-rays, roof replacement costs, and burial costs in most jurisdictions. Really great fodder for clever adherents of InsurTech: high-frequency, low-cost cover and claims. Even more complex risks are becoming easier to assess, underwrite and price thanks to the huge volume of available data points and the burgeoning range of analysis tools. I just read today that a clever group of UK-based InsurTech folks, Cytora, has found success providing comprehensive risk analysis profiles to some large insurance companies and continues to build its presence. A firm that didn't exist until 2014 is now seen as a market leader in risk data analysis, and its products are helping firms that have been around for a lot longer than five years (XL Catlin, QBE, and Starr Companies). Seemingly a perfect fit of innovation and incumbency, leveraging data for efficient operations. InsurTech.
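For those high-frequency, low-severity covers the pricing arithmetic really is that tractable. The sketch below works out a simple burn cost (expected frequency times expected severity, plus a loading) from a made-up claims history; the figures and loading are invented for illustration and are not any carrier's actual rating model.

```python
# Hypothetical burn-cost sketch for a high-frequency, low-severity cover
# (e.g. metro bicycle theft). All figures below are made up for illustration.
exposure_years = 12000          # insured bike-years observed
claim_count = 480               # theft claims in that period
total_incurred = 312000.0       # total claim payments, in GBP

frequency = claim_count / exposure_years      # claims per bike-year
severity = total_incurred / claim_count       # average cost per claim
burn_cost = frequency * severity              # pure premium per bike-year

expense_and_margin_loading = 1.35             # assumed loading, not a market figure
indicated_premium = burn_cost * expense_and_margin_loading

print(f"Frequency: {frequency:.3f} claims per insured year")
print(f"Severity:  GBP {severity:,.0f} per claim")
print(f"Burn cost: GBP {burn_cost:,.2f} -> indicated premium GBP {indicated_premium:,.2f}")
```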

But ask those who work behind the scenes at these firms, those who manage the claims, serve the customers, and address the many claim-servicing challenges at the carriers: is it possible that a risk analyzed and underwritten within a few minutes can become a five-or-more-year undertaking when a claim occurs? Yes, of course it is. The lion's share of auto/motor claim severity is not found in the settlement of vehicle damage; it's in the bodily injury/casualty part of the claim. Direct auto damage assessment is the province of AI; personal injury protection and liability decisions belong for the most part to human interaction. Sure, the systems within which those actions are taken can be made efficient, but the decisions and negotiations remain outside of game theory and machine learning (at least for now). There have been (and continue to be) systems utilized by auto carriers in an attempt to make the more complex casualty portions of claims uniform (see for example Mitchell), but lingering 'burnt fingers' from class action suits in the 1980s and 1990s leave these arm's-length tools trusted but, again, in need of verification.

Property insurance is not immune from the effects of innovation expectations; plenty of tools have come to the market in the past few years: drones, risk data aggregators and scorers, and predictive algorithms that help assess and price risk and recovery. That's all good until the huge network of repair participants becomes involved, and John and Mary Doe GC price a rebuild using their experience-based, lump-sum pricing tool that does not match the carrier's measure-to-the-inch, 19%-supporting-events, component-based pricing tool. At that intersection of ideas, the customer is left as the primary and often frustrated arbiter of the claim resolution. Prudent carriers then revert to analog, human-interaction resolution. Is it possible that a $100K water loss can explode into a $500K-plus mishandled asbestos abatement nightmare? Yes, it's very possible. Will a homeowner's policy customer in Kent be disappointed because an emergency services provider that should be available per a system list is not, and the homeowner is left to fend for himself? The industry must consider these not as outlier cases, but as reminders that not everything can be predicted, not all data are being considered, and, as intellectual capital exits the insurance world, not all claim staff will have the requisite experience to ensure that what was predicted is what happens.

The best data point analysis cannot fully anticipate how businesses operate, nor how unpredictable human actions can lead to claims with long tails and large expense. Consider the recent tragedy in Paris, the fire at the Cathedral of Notre Dame. Certainly any carriers involved with contractor coverage share everyone's grief at the terrible loss, but they must also be concerned that not only are potential liability coverage limits at risk; unlike the cover limits, the legal expenses associated with the claim investigation and defense will most probably make those limits look small in comparison. How can data analysis predict that exposure disparity, when every claim case can be wildly unique?

It seems that as underwriting and pricing continue to adapt to AI and improved data analysis, it is even more incumbent on companies (and analysis 'subcontractors') to be cognizant of the effects of unique claims' cycle times and ongoing costs. In addition, carriers must continue to work with service providers to recognize the need for uniform innovation, or at least an agreed common-denominator tech level.

The industry surely will continue to innovate and encourage those InsurTech superheroes who are flying high, analyzing, calculating and selling faster than a speeding bullet.  New methods are critical to the long-term growth needed in the industry and the expectation that previously underserved markets will benefit from the efforts of InsurTech firms.  The innovators cannot forget that there is situational kryptonite in the market that must be anticipated and planned for, including the continuing need for analog methods and analog skills. 

image source

Patrick Kelahan is a CX, engineering & insurance professional, working with Insurers, Attorneys & Owners. He also serves the insurance and Fintech world as the ‘Insurance Elephant’.

I have no positions or commercial relationships with the companies or people mentioned. I am not receiving compensation for this post.

Subscribe by email to join the 25,000 other Fintech leaders who read our research daily to stay ahead of the curve. Check out our advisory services (how we pay for this free original research).

How does One Consume an Ocean of Data? A Meaningful Sip at a Time

So many data, so many ways to use it, ignore it, misapply it, co-opt it, brag about it, and lament it. It's the new oil, as suggested not long ago by data scientist Clive Humby, and as written about recently by authorities such as Bernard Marr in Forbes, where he discusses the apt and not-so-apt comparison of data and oil. Data are, or data is? We can't even fully agree on that application of the plural (I'm in the 'are' camp). There's an ongoing and serious debate on who 'owns' data: is possession nine-tenths of the law? Not if one considers the regs of GDPR. And since few industries possess, use, leverage and monetize data more than insurance, forward-thinking industry players need a well-considered plan for working with data, for at the end of the day it's not having the oil that matters, but having the refined byproduct of it, correct?

Tim Stack of technology solutions company Cisco has blogged that 5 quintillion bytes of data are produced daily by IoT devices. That's 5,000,000,000,000,000,000 bytes of data; if each byte were a gallon of oil, the volume would more than fill the Atlantic Ocean. Just IoT-generated bits and bytes. Yes, we have data; we are flush with it. One can't drink the ocean, but one must deal with it, yes?

I was fortunate to be able to broach the topic of data availability with two smart technologists who are also involved with the insurance industry: Lakshan De Silva, CTO of Intellect SEEC, and Christopher Frankland, Head of Strategic Partnerships at ReSource Pro and Founder of InsurTech 360. It turns out there is so much to discuss that the volume of information would more than fill this column, not by an IoT quintillions factor, but by a lot.

With so much data to consider, the two agree that understanding the intended use of the data guides the pursuit. Machine Learning (ML) is a popular and meaningful application of data, and "can bring with it incredible opportunity around innovation and automation. It is however, indeed a Brave New World," comments Mr. Frankland. Continuing, "Unless you have a deep grasp or working knowledge of the industry you are targeting and a thorough understanding of the end-to-end process, the risk and potential for hidden technical debt is real."

What? Too much data, ML methods to help, but now there are 'hidden technical debt' issues? Oil is not that complicated: extract, refine, use. (Of course, as Bernard Marr reminds us, there are many other concerns with the use of natural resources.) Data: plug it into algorithms, get refined ML results. But as noted in Hidden Technical Debt in Machine Learning Systems, ML brings challenges of which data users and analyzers must be aware, with complex issues that compound. ML can't be allowed to play without adult supervision, else it will stray from the yard.

From a different perspective, Mr. De Silva notes that the explosion of data (and the availability of those data) is "another example of disruption within the insurance industry." Traditional methods of data use (actuarial practices) are one form of analysis to solve risk problems, but there is now a trade-off between "what risk you understand upfront" and "what you will understand through the life of a policy." Those IoT (or IoE, Internet of Everything, per Mr. De Silva) data that accumulate in such volume can, if managed and assessed efficiently, open up 'pay as you go' insurance products and fraud-tool opportunities.

Another caution from Mr. De Silva: assume all data are wrong unless you can prove otherwise. This isn't as threatening a challenge as it sounds. With the vast quantity and sourcing of data, triangulation methods can be applied to provide tighter reliability, and (somewhat counterintuitively) analyzing unstructured data alongside structured data across multiple providers and data connectors can help achieve 'cleaner' (more reliable) data. Intellect SEEC's US data set alone has 10,000 connectors (most of which don't even agree with each other on material risk factors) with thousands of elements per connector; multiply that by up to 30-35 million companies, then by the locations per company, and then by the directors and officers of each company. And that's just the start, before one considers the effects of IoE.
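To make the triangulation point concrete, here is a minimal sketch of reconciling a single attribute (say, a company's employee count) reported by several connectors: each source carries a reliability weight, values that roughly agree are clustered, and the best-supported cluster wins. The sources, weights and tolerance are invented for illustration and are not Intellect SEEC's actual method.

```python
# Hypothetical triangulation of a single attribute across data connectors.
# Sources, weights and tolerance below are illustrative assumptions only.
from collections import defaultdict

reports = {                 # connector -> reported employee count
    "companies_house": 120,
    "web_scrape": 118,
    "credit_bureau": 240,   # outlier, perhaps counting a parent entity
    "survey_panel": 122,
}
reliability = {"companies_house": 0.9, "web_scrape": 0.5,
               "credit_bureau": 0.7, "survey_panel": 0.4}

def triangulate(reports, reliability, tolerance=0.10):
    """Group values that agree within `tolerance`, pick the group with the
    highest total reliability, and return its reliability-weighted mean."""
    clusters = defaultdict(list)          # representative value -> [sources]
    for src, value in reports.items():
        for rep in clusters:
            if abs(value - rep) <= tolerance * rep:
                clusters[rep].append(src)
                break
        else:
            clusters[value].append(src)

    best = max(clusters.values(), key=lambda srcs: sum(reliability[s] for s in srcs))
    weight = sum(reliability[s] for s in best)
    return sum(reports[s] * reliability[s] for s in best) / weight, best

value, supporting = triangulate(reports, reliability)
print(f"Triangulated value: {value:.0f} (supported by {', '.join(supporting)})")
```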

In other words- existing linear modeling remains meaningful, but with the instant volume of data now available through less traditional sources carriers will remain competitive only with purposeful approaches to that volume of data.  Again, understand the challenge, and use it or your competition will.

So many data, so many applications for it. How is a company to know where to step next? If not an ocean of data, it sure is a delivery from a fire hose. The discussion with Messrs. De Silva and Frankland provided some insight.

Avoiding hidden debt and leveraging clean data is the path to a "Digital Transformation Journey", per Mr. Frankland. He recommends a careful alignment of "People, Process, and Technology." A carrier will be challenged to create an ML-based renewal process absent the involvement of human capital as a buffer to unexpected outcomes generated by AI tools. And 'innovating from the customer backwards' (the Insurance Elephant's favorite directive) will help the carrier focus its tech efforts and data analysis on what end customers say they need from the carrier's products. (Additional depth on this topic can be found in Mr. Frankland's upcoming LinkedIn article, which will take a closer look at the challenges around ML, risk and technical debt.)

In similar thinking, Mr. De Silva suggests a collaboration of business facets to unlearn, relearn, and deep-learn (from the data up instead of from the user domain down), to fuel ML techniques with not just data but proven data, and to employ 'Speed of Thought' techniques in response to the need for carriers to build the products and services their customers need. Per Mr. De Silva:

“Any company not explicitly moving to Cloud-first ML in the next 12 months and  Cloud Only ML strategy in the next two years will simply not be able to compete.”

Those are pointed but supported words: all those data, and companies need to be able to take the crude and produce refined, actionable data for their operations and customer products.

In terms of tackling hidden debt and 'black box' outcomes, Mr. Frankland advises that measures such as training for a digital workforce, customer journey mapping, organization-wide definition of data strategies, and careful application and integration of governance and process-risk mitigation will collectively act as an antidote to those two unwelcome potential outcomes.

Data wrangling: doable, or not? Some examples in the market (and there are a lot more) suggest yes.

HazardHub

Consider the volume of hazard data available for a jurisdiction or a property: flood exposure, wildfire risk, distance to fire response authorities, the chance of sinkholes, blizzards, tornadoes, hurricanes or earthquakes. Huge pools of data from a wide variety of sources. Can those disparate sources and data points be managed, scored and provided to property owners, carriers, or municipalities? Yes, they can, per Bob Frady of HazardHub, provider of comprehensive risk data for property owners. And as for the volume of new data engulfing the industry? Bob suggests not overlooking 'old' data; it's there for the analyzing.
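As a toy illustration of how disparate hazard feeds might be folded into one property score, the sketch below normalizes a handful of per-peril inputs and blends them with weights. The perils, scales and weights are assumptions made up for this example; HazardHub's actual scoring model is not described here.

```python
# Toy composite property-hazard score from disparate per-peril inputs.
# Perils, scales and weights are illustrative assumptions, not HazardHub's model.
def scale(value, worst, best=0.0):
    """Map a raw measurement onto 0 (best) .. 1 (worst), clipping at the ends."""
    score = (value - best) / (worst - best)
    return max(0.0, min(1.0, score))

property_data = {                 # raw inputs for one property (made up)
    "flood_depth_100yr_ft": 1.5,  # modeled 100-year flood depth
    "wildfire_index": 20,         # 0..100 vendor index
    "miles_to_fire_station": 4.2,
    "sinkhole_density_per_km2": 0.01,
}

peril_scores = {
    "flood":         scale(property_data["flood_depth_100yr_ft"], worst=6.0),
    "wildfire":      scale(property_data["wildfire_index"], worst=100),
    "fire_response": scale(property_data["miles_to_fire_station"], worst=10),
    "sinkhole":      scale(property_data["sinkhole_density_per_km2"], worst=0.5),
}
weights = {"flood": 0.4, "wildfire": 0.3, "fire_response": 0.2, "sinkhole": 0.1}

composite = sum(peril_scores[p] * weights[p] for p in weights)
grade = "ABCDF"[min(int(composite * 5), 4)]       # crude letter grade
print(peril_scores, f"composite={composite:.2f}", f"grade={grade}")
```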

Lucep

How about the challenge sales organizations have in dealing with customer requests coming from the myriad of access points, including voice, smart phone, computer, referral, online, walk-in, whatever?  Can those many options be dealt with on an equal basis, promptly, predictably from omnichannel data sources?  Seems a data inundation challenge, but one that can be overcome effectively per Lucep, a global technology firm founded on the premise that data sources can be leveraged equally to serve a company’s sales needs, and respond to customers’ desires to have instant service.

Shepherd Network

As for the 5 quintillion daily IoT data points: can that volume become meaningful if a focused approach is taken by the tech provider, a perspective that can serve a previously underserved customer? Consider unique and/or older building structures or other assets that traditionally have been sources of unexpected structural, mechanical or equipment issues. Integrate IoT sensors within those assets, and build a risk analytics and property management system that business property owners can use to reduce maintenance and downtime costs for assets of almost any type. UK-based Shepherd Network has found a clever way to 'close the valve' on IoT data, applying monitoring, ML, and communication techniques to provide a dynamic scorecard for a firm's assets.
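A stripped-down version of that 'close the valve' idea might look like the following: rather than shipping every raw reading, the sensor stream is reduced on a rolling window to a handful of health indicators per asset. The sensor, window and thresholds are invented; this is a generic condition-monitoring sketch, not Shepherd Network's implementation.

```python
# Generic condition-monitoring sketch: reduce a raw IoT stream to a rolling
# asset-health score. Sensor names and thresholds are illustrative only.
from collections import deque
from statistics import mean, pstdev

class AssetHealthMonitor:
    def __init__(self, window=288, z_alert=3.0):
        self.readings = deque(maxlen=window)   # e.g. 24h of 5-minute samples
        self.z_alert = z_alert

    def ingest(self, vibration_mm_s: float) -> dict:
        if len(self.readings) >= 5:            # need some history for a baseline
            mu = mean(self.readings)
            sigma = pstdev(self.readings) or 1e-9
            z = abs(vibration_mm_s - mu) / sigma
        else:
            mu, z = vibration_mm_s, 0.0
        self.readings.append(vibration_mm_s)
        health = max(0.0, 1.0 - z / self.z_alert)   # 1.0 healthy, 0.0 alert
        return {"latest": vibration_mm_s, "baseline": round(mu, 2),
                "z_score": round(z, 2), "health": round(health, 2),
                "alert": z >= self.z_alert}

monitor = AssetHealthMonitor()
for reading in [2.1, 2.0, 2.2, 2.1, 2.3, 6.8]:    # last value is anomalous
    status = monitor.ingest(reading)
print(status)   # the spike pushes health toward 0 and raises the alert
```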

In each case the subject firms see the ocean of data, understand the customers' needs, and apply high-level analysis methods so that the data become useful and/or actionable for the firms' customers. They aren't dealing with all the crude, just the refined parts that make sense.

In discussion I learned of petabytes, exabytes, zettabytes, and yottabytes of data. Unfathomable volumes of data, a universe full, all useful but inaccessible without a purpose for the data. Data use is the disruptor, as is the application of data analysis tools, and awareness of what one's customer needs. As Bernard Marr notes, oil is not an infinite resource, but data seemingly are. Data volume will continue to expand, but prudent firms and carriers will focus on those data that serve their customers and the respective firm's business plans.

Image source

Patrick Kelahan is a CX, engineering & insurance professional, working with Insurers, Attorneys & Owners. He also serves the insurance and Fintech world as the ‘Insurance Elephant’.

I have no positions or commercial relationships with the companies or people mentioned. I am not receiving compensation for this post.

Subscribe by email to join the 25,000 other Fintech leaders who read our research daily to stay ahead of the curve. Check out our advisory services (how we pay for this free original research).

Open banking – Keep calm and saddle up for a five year run

Image Source

A year on – and that’s a big milestone for many. But in the legacy banking world, nothing gets done in a year. And it’s not surprising that open banking has been more of an introvert than we expected. Eventful or not, open banking is one of the best things that could have happened to consumers, and will eventually turn out to be a case study for other global economies to learn from.

Open banking is not just a movement to get banks to relinquish their ownership of consumer data. It is more of a data revolution to identify consumer behaviour and use data analytics to provide personalised services – not just banking services.

There are multiple stakeholders involved in the process of making the most of this data revolution. Getting a consolidated view of a customer’s financial products is perhaps a low hanging fruit.

For a consumer-focused, data-driven use case that is more integrated into their lifestyle, more work needs to be done on open banking data.

  • Downstream apps need to build their interfaces with the banks that have opened up their APIs (a minimal sketch of such an integration follows this list).
  • That will be followed by the proprietary intelligence these downstream apps add on top.
  • Proprietary intelligence using machine learning, predictive analytics and the like needs a critical mass of data, which only builds over time. For this, these firms will also need to onboard customers.
  • Customer onboarding is easier said than done: it comes with a serious cost of acquisition for a small firm, which is only sustainable when such initiatives are backed by venture capital.
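As a rough sketch of the first step, here is what a downstream app pulling transactions over an open-banking-style account information API might look like. The endpoints and field names follow the general shape of the UK Open Banking AIS specification, but the base URL, token, IDs and headers here are placeholders; any real integration needs the individual bank's documentation and a customer consent flow first.

```python
# Hedged sketch of a downstream app reading transactions via an open-banking
# style AIS API. Base URL, token and account ID are placeholders; exact paths
# and headers vary by bank and API version, so treat this as illustrative.
import requests

BASE_URL = "https://api.examplebank.co.uk/open-banking/v3.1/aisp"  # placeholder
ACCESS_TOKEN = "<oauth2-access-token-obtained-with-customer-consent>"

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "x-fapi-financial-id": "<bank-assigned-id>",   # commonly required header
    "Accept": "application/json",
}

def list_transactions(account_id: str):
    """Fetch transactions for one consented account and return simplified rows."""
    url = f"{BASE_URL}/accounts/{account_id}/transactions"
    resp = requests.get(url, headers=headers, timeout=10)
    resp.raise_for_status()
    transactions = resp.json().get("Data", {}).get("Transaction", [])
    return [
        {
            "booked": t.get("BookingDateTime"),
            "amount": t.get("Amount", {}).get("Amount"),
            "currency": t.get("Amount", {}).get("Currency"),
            "description": t.get("TransactionInformation"),
        }
        for t in transactions
    ]

if __name__ == "__main__":
    for row in list_transactions("<account-id>")[:5]:
        print(row)
```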

Every step above takes time. It would be a few years before a real data driven use case can reach the customer and for us to start seeing some success stories. But where are banks largely, and where are the startups in the journey?

A year ago the Competition and Markets Authority (CMA) set the pace for a group of banks (nine of them) to open up customer data through APIs. Twelve months on, there is more noise about the lack of noise in this space. I don't believe any action is missing, and here is why.

Banks had to open up customer transaction data through APIs – but CMA only came up with this idea in 2016. For banks to get it, plan it, and execute the APIs within even 24 months was always an aggressive timeline. HSBC’s Connected Money app was perhaps an exception to the usual pace of banks. Barclays seems to have a similar capability as well.

However, the integrations that legacy banks have provided to downstream systems are not the most intuitive. APIs exposed by banks are consumed by aggregators like Yodlee (who create the plumbing for the data), which then integrate with downstream customer-facing apps such as Money Dashboard.

One quick look at the apps shows that the experience legacy banks offer for integrating into a customer-facing app is outdated, especially for a customer segment used to a frictionless, Monzo-like experience. That is an area where banks can definitely do better. However, most Millennial and Generation Z customers bank directly with neo-banks, so this will be less of an issue with that segment.

Startups are still building the intelligence to make the most of the data revolution. However, most firms that I know of that are looking to provide PFM services, lending (underwriting, brokering or credit scoring), SME loyalty, or simply cleverer product switching, are all focused on growing their customer base in search of more data volumes.

Most of the clever applications need machine learning algorithms fed with a lot of high-quality customer data; their results only become accurate as the machine learns from continuous feedback. Releasing half-trained machine learning apps to consumers can actually result in poor customer experience and churn.

Most firms I speak to are focused on identifying product-market fit for their data-driven use case this year.

Customer acquisition has to be cleverly managed to ensure growth in data volumes while keeping the predictive analytics accurate enough to cut down churn. It's a hard game to play.

In a recent interview Tom Blomfield, CEO of Monzo, mentioned that he wasn't afraid of legacy banks or even the other neo-banks, but that he was wary of new open-banking-powered apps bringing clever capabilities and acquiring customers fast enough to dwarf the likes of Monzo. Open banking will be a slow burner, but it will have failed if we don't see some success stories in the next five years.


Arunkumar Krishnakumar is a Venture Capital investor at Green Shores Capital focusing on Inclusion and a podcast host.

Get fresh daily insights from an amazing team of Fintech thought leaders around the world. Ride the Fintech wave by reading us daily in your email


Machine Learning for RIA loyalty and customer engagement; by Morgan Stanley

 


Wealth management and AI are a natural combination. Standalone Fintechs, the innovation labs of incumbents, and financial services IT providers are all working on this in some way (three types of player). There is another war for talent going on in this area too: all three types of financial services provider are looking for data scientists and competing with every other industry (commerce, life sciences, and manufacturing). The market is tagging experienced conventional quants as AI experts, and public companies (mainly banks) are competing for tech branding.

I realized that I have not written about Morgan Stanley as much as Goldman or JP Morgan. Of course, this is not deliberate. I am well aware of the head-on competition between them, which of course is accentuated by the business media. Look at the headlines during this reporting season and you will undoubtedly get a sense of the short-term pressure that public markets and the quarterly cycle inflict.

What caught my attention this time about Morgan Stanley was the release of a new version of its so-called "Next Best Action" system to the firm's 16,000 RIAs. The system has been around for several years, but as a rule-based system suggesting investment options for advisors and their clients – the kind of system every single bank with a wealth management offering has, and that we all as clients wonder about: which one is "best"? (As if that were the right question in the first place, since none of these rule-based systems could be customized.)

Morgan Stanley's "Next Best Action" uses machine learning to support advisors in increasing engagement. The success of this tool will be measured by its effectiveness in enhancing the dialogue with the client, whether through in-person meetings, phone calls or purely digital channels.

Like me, most of us are sick and tired of receiving emails with PDF attachments from the several analysts covering, say, Alibaba (a stock I care about accumulating) and not knowing how to make sense of them. All of us realize that it is only because of stringent KYC requirements that advisors look to incorporate our life events and goals into an investment proposal. Morgan Stanley's "Next Best Action" system uses ML to advise clients on what to consider based on life events. For example, if a client has a child with a certain illness, the system could recommend the best local hospitals, schools, and financial strategies for dealing with the illness. The system monitors and learns from the client's reaction to the "Recommendations" and, based on those responses, improves the quality of its ideas each day.

In a way, the system thinks for the advisor on a daily basis, presenting relevant information and continuously improving recommendations. The advisor retains the choice and can send customized emails and texts to clients. Within a few seconds the system surfaces the client's asset allocation, tax situation, preferences, and values.
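The learning loop described above can be thought of as a recommendation problem: surface an idea, observe whether the client engages, and shift future suggestions toward what worked. The sketch below is a deliberately simple epsilon-greedy version of that loop with made-up recommendation categories; Morgan Stanley has not published the internals of "Next Best Action", so this only illustrates the pattern, not their system.

```python
# Simple epsilon-greedy feedback loop over recommendation categories.
# Categories and engagement rates are invented; this illustrates the
# "suggest -> observe reaction -> improve" pattern, not MS's actual system.
import random

random.seed(7)
categories = ["tax_loss_harvesting", "education_savings", "estate_planning",
              "healthcare_resources"]
clicks = {c: 0 for c in categories}     # times the client engaged
shows = {c: 0 for c in categories}      # times the idea was surfaced

# Hidden, per-client engagement rates the system does not know up front
true_engagement = {"tax_loss_harvesting": 0.10, "education_savings": 0.35,
                   "estate_planning": 0.05, "healthcare_resources": 0.50}

def pick(epsilon=0.1):
    """Mostly exploit the best-performing category, sometimes explore."""
    if random.random() < epsilon or all(v == 0 for v in shows.values()):
        return random.choice(categories)
    return max(categories, key=lambda c: clicks[c] / max(shows[c], 1))

for day in range(365):                  # one suggestion per day
    idea = pick()
    shows[idea] += 1
    if random.random() < true_engagement[idea]:   # did the client engage?
        clicks[idea] += 1

for c in categories:
    rate = clicks[c] / max(shows[c], 1)
    print(f"{c:22s} shown {shows[c]:3d} times, engagement {rate:.0%}")
```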

The system empowers the advisor, and this is where the potential for widespread adoption lies. Never forget that tech adoption is always more of a cultural issue than a technical one. In machine learning, the more the system is used, the better the next best actions become.

If the community of 16,000 Morgan Stanley advisors makes "Next Best Action" their ally, then MS will have an edge and a loyal army taking care of its clients.

This is not about disintermediation. ML can build loyalty for the intermediaries servicing clients and at the same time offer continuously better advice to end clients.

This is not some version of robo-advisory focused on smooth onboarding and low-fee execution. It enhances a hybrid wealth management offering in a way that provides a cutting edge (value) both to those using Morgan Stanley as a platform provider (i.e. the advisors) and to the end clients.

Morgan Stanley has established its tech center in Montreal, the Montreal Technology Centre. It has grown to 1,200 tech employees focused on innovation in low-latency and electronic trading, cloud engineering, cybersecurity, AI/machine learning, and end-user technologies.

Barron's reports that it took MS about six years to develop "Next Best Action". The main KPI is customer engagement. The other five variables monitored are cash flow, brokerage business volume, new advice clients, the level of banking business, and account attrition.

Morgan Stanley draws from a million conversations to build its AI

Efi Pylarinou is the founder of Efi Pylarinou Advisory and a Fintech/Blockchain influencer.

Get fresh daily insights from an amazing team of Fintech thought leaders around the world. Ride the Fintech wave by reading us daily in your email.