Artificial intelligence and financial services – building trust

Data has always been important in financial transactions. The earliest forms of writing involved keeping records of grain, sheep and cattle entering and leaving farms and warehouses in ancient Sumer. More recently, the financial services sector has built up a long history of using statistical methods to make decisions about individuals – credit-scoring and data-based insurance pricing, for example.
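
To illustrate what this kind of statistical decision-making can look like in practice, here is a minimal sketch of a logistic-regression credit score. Everything in it – the feature names, the data and the coefficients – is synthetic and invented for the example; no real scoring model works exactly this way.

```python
# A toy credit-scoring model: logistic regression on synthetic applicants.
# Features, data and thresholds are all invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: [annual_income_k, debt_to_income, years_of_credit_history]
X = rng.normal(loc=[50.0, 0.3, 8.0], scale=[15.0, 0.1, 4.0], size=(1000, 3))

# Synthetic outcomes (1 = repaid, 0 = defaulted), from a made-up relationship
logits = 0.04 * X[:, 0] - 8.0 * X[:, 1] + 0.15 * X[:, 2] - 0.5
y = (rng.random(1000) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new applicant: income 42k, debt-to-income 0.45, 3 years of history
applicant = np.array([[42.0, 0.45, 3.0]])
print("Estimated probability of repayment:", model.predict_proba(applicant)[0, 1])
```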

Now the financial services sector is at the forefront of the adoption of machine learning and AI. The Bank of England’s Future of Finance report identifies automated decision-making based on machine learning as “one of the most important trends in technology today”. The report recognises that AI has the potential to “transform how customers experience finance and the agility, efficiency and resilience of financial firms”, offering benefits across a range of activities:

  • Retail payments services: improved checkout experiences and better understanding of shopping habits, bringing in data not only from transactions, but also from social media, wearables and location devices.
  • More accurate tailoring of lending decisions.
  • Better information about financial products and app-based access to services.
  • Fraud reduction.
  • Use of data standards to enable faster and more efficient cross-border payments, and improve risk monitoring.

However, financial services is a sector where the need to build and maintain consumer trust is particularly acute. The financial crisis of 2007-2008 eroded public confidence and remains fresh in people’s minds. Building trust in AI is not always easy. Our experience indicates that consumers are often willing to accept data collection and algorithm-based decision-making in fields such as healthcare and medical research. In contrast, individuals can feel exploited when their personal data is put to more commercial uses through AI.

“Black-box” decision-making – where the technology makes accurate decisions, but in a way that is opaque and not fully explainable – adds to this mistrust. Individuals whose lives and finances are affected by AI want to understand the reasons for the decisions that affect them. Indeed, the much-discussed GDPR introduced legal restrictions on automated decision-making concerning individuals, including profiling.
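
One practical response to the black-box problem is to use interpretable models (or pair opaque models with them) whose individual decisions can be decomposed into per-feature contributions. The sketch below, with invented feature names and synthetic data, shows how a linear model’s output can be read back as candidate “reasons” for a refusal; it illustrates the idea rather than prescribing a compliance approach.

```python
# Sketch: turning an interpretable model's decision into "reason codes".
# Feature names and data are synthetic, invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_to_income", "missed_payments"]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] - 2.0 * X[:, 2] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a single applicant, each feature's contribution to the log-odds is
# simply coefficient * value, so the decision can be itemised and explained.
applicant = np.array([0.2, 1.5, 1.0])
contributions = model.coef_[0] * applicant

# The most negative contributions are the strongest candidate reasons for refusal.
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name}: {c:+.2f}")
```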

The Future of Finance report acknowledges this tension. It proposes a set of principles and practical measures to underpin the use of AI and machine learning in financial services in a way that aims to promote trust and confidence.

What is AI?

Definitions of AI vary and evolve over time. The EU’s High-Level Expert Group on AI currently uses the following:

“Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.

“As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).”

Responsible AI principles

The Future of Finance report highlights the need to start from responsible AI principles, including fairness, accountability, transparency, security and responsible usage.

Of course, the Bank of England is not alone in exploring these issues. The use of AI is not, as yet, governed by AI-specific legal structures. Existing legal obligations in areas such as data protection, liability for harm, consumer protection and contract apply as they would to any other business activity. However, policy-makers around the world are focusing on the risks presented by AI and machine learning and on how to address them. Most initiatives currently lead with a principles-based approach, holding specific regulatory measures in reserve should they prove necessary.

At EU level, the High-Level Expert Group on AI issued detailed Ethics Guidelines for Trustworthy AI in April 2019. The group identified seven key requirements for the use of AI across all sectors:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental well-being
  • Accountability

The guidelines call on developers to embed trustworthy AI into their products and services.

The UK’s Centre for Data Ethics and Innovation was set up in 2018 to investigate and advise on the use of data-driven technology. The Centre recently published an interim report as part of its review of bias in algorithmic decision-making. The report highlights the leading role of the financial services industry in developing new approaches to identifying and mitigating bias:

“The financial services sector relies on being able to make decisions at scale about individuals’ financial futures based on predictions of likely behaviour, for example, in relation to repaying debts. Broadly speaking, the success of their business models are often based on how accurately they can make these predictions.”
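
Identifying bias in such models typically starts with simple measurements, for instance comparing outcomes across demographic groups. Below is a minimal sketch using entirely synthetic decisions; the “four-fifths” threshold in the comment is a common rule of thumb, not a legal standard.

```python
# Sketch: a basic disparate-impact check on approval decisions.
# Group labels and decisions are synthetic, invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
group = rng.choice(["A", "B"], size=2000)  # synthetic protected attribute
# Synthetic decisions with a deliberately biased approval probability per group
approved = rng.random(2000) < np.where(group == "A", 0.70, 0.55)

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()

print(f"Approval rate, group A: {rate_a:.1%}")
print(f"Approval rate, group B: {rate_b:.1%}")
# A ratio below ~0.8 is a common rule-of-thumb warning sign ("four-fifths rule")
print(f"Disparate impact ratio (B/A): {rate_b / rate_a:.2f}")
```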

Next steps

The Future of Finance report proposes that the Bank of England and the Financial Conduct Authority jointly establish a public-private financial sector working group. This group would:

  • Monitor developments in the use of machine learning
  • Develop principles and share best practice for responsible, explainable and accountable use of the technology
  • Explore the intersection with current rules
  • Feed into the work of the Centre for Data Ethics and Innovation on maximising benefits and managing risks through use of AI and machine learning

We welcome the prospect of active oversight and supervision of AI and machine learning in financial services. A focus on ethical principles may seem like a “nice-to-have” rather than a core development goal. But ignoring the question of trust at this stage is likely to lead to consumer backlash and tougher regulation in the medium term.
