Embedding AI in banking business models

In this article, our member KPMG reports on the challenge European banks face in balancing sound digital transformation with the expectations of their regulators.

European banks increasingly view technology as central to their future success. Supervisors agree: the ECB’s latest supervisory priorities identify an effective digitalisation strategy as an important response to industry challenges, ranging from changing customer preferences and rising competition from technology firms to overcapacity and cost inefficiency. Banks should therefore expect supervisory scrutiny of their digitalisation strategies to increase during 2022, including benchmarking and targeted on-site inspections of governance, resources, skills and risk management.

Within these strategies, one area on which supervisors could focus is Artificial Intelligence (AI). Key variants of AI include Machine Learning — self-improving algorithms; Natural Language Processing — text and speech interpretation; Computer Vision — devices understanding their surroundings by analysing imagery (photos, videos); and Robotics — machines that process sensor data to act in the physical world.
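
To make the first of these variants concrete, the sketch below shows what “self-improving” means in practice: the same learning algorithm scores better on held-out data as it is trained on more examples. It is a minimal illustration in Python using the open-source scikit-learn library and purely synthetic data; nothing in it is drawn from a real banking system.

```python
# Minimal sketch of Machine Learning as a "self-improving algorithm":
# the same model, trained on progressively more (synthetic) data, improves.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for tabular banking data (e.g. customer or transaction features).
X, y = make_classification(n_samples=5000, n_features=10, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (100, 1000, len(X_train)):  # progressively larger training sets
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>4} examples -> accuracy {accuracy_score(y_test, model.predict(X_test)):.3f}")
```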

AI has real transformative potential in banking. Development and adoption are growing fast and are likely to accelerate further as firms intensify their post-COVID investments. Banks can look forward to the following potential benefits:

  • Enhancing efficiency: Reducing costs and enhancing productivity by using AI to perform core processes in areas like finance, compliance, risk management and administrative tasks;
  • Generating revenue: Using AI to improve segmentation, anticipate customer needs or to create new products and services;
  • Reducing risk: Applying AI to risk analysis in areas such as credit decisions, market risk or insurance underwriting — enhancing institutional strength and systemic stability.

Examples of current banking use cases for AI can be grouped into five main areas, each of which is already generating specific applications for individual institutions:

  • New value propositions: Leveraging analytics to generate new insights and innovations, for example in lending applications, financial advice or investment research;
  • Risk management: Enhancing fraud detection, trading surveillance and evaluations of liquidity or counterparty risk (a minimal fraud-detection sketch follows this list);
  • Operations: Process re-engineering of administrative tasks, reporting or compliance activities;
  • Customer acquisition and management: Accelerating on-boarding or improving customer understanding and personalisation;
  • Customer experiences: Automating interactions via chatbots or virtual assistants, and enhancing existing channels by allowing human advisors to focus on value-added tasks.
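
As a concrete illustration of the fraud-detection use case, the following sketch applies unsupervised anomaly detection to purely synthetic card transactions, using scikit-learn’s IsolationForest. The feature choice, numbers and contamination rate are illustrative assumptions, not details taken from any bank’s implementation.

```python
# Illustrative only: flagging unusual transactions with an unsupervised anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)
# Synthetic features: [amount_eur, hour_of_day] for everyday card payments...
normal = np.column_stack([rng.normal(60, 20, 1000), rng.integers(8, 22, 1000)])
# ...plus a few unusually large night-time transactions (and one ordinary one) to score.
suspect = np.array([[4500, 3], [9800, 2], [7200, 4], [55, 14]])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(suspect))  # -1 flags a transaction as anomalous, 1 as normal
```

In practice such scores would only feed a wider investigation workflow; the point here is simply that the model learns what “normal” looks like from historical data rather than from hand-written rules.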

Of course, like any new technology, AI can pose risks if it is implemented without suitable controls over its actions and the data it uses. Major areas of risk include poor accountability and transparency; the potential for bias or discrimination; data misuse or privacy breaches; and the threat to financial stability posed by growing reliance on third-party providers.

The need to balance the benefits of AI against its potential risks represents a new challenge for banking supervisors. In its written opinion on the proposed regulation laying down harmonised rules on artificial intelligence (CON/2021/40), the ECB confirmed that, as a supervisor, it is committed to a technology-neutral approach to the prudential supervision of credit institutions, and that its role is to ensure the safety and soundness of credit institutions “irrespective of the application of any particular technological solution”.

Banks should therefore be alert to a range of possible supervisory attitudes to AI. Open and pro-active discussions with Joint Supervisory Teams (JSTs) could help banks and supervisors avoid unpredictable or inconsistent interpretations of any AI use. Familiarity with the European Commission’s proposed regulation can also help banks understand the specific definitions of high-risk AI, together with the related obligations in areas such as data quality, risk management, transparency, documentation and traceability. In a banking context, high risks could arise when AI is used in:

  • Management and operation of critical infrastructure: cash supply, card-based and conventional payment transactions, clearance and settlement of securities;
  • Employment and workers management: recruitment, hiring decisions or performance appraisals; and
  • Essential private or public services: credit scoring and creditworthiness evaluation (see the sketch after this list).
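
To give a flavour of what the documentation and traceability obligations could mean in practice for a high-risk use such as credit scoring, here is a minimal, hypothetical sketch that records every automated decision in an append-only log. The field names, the hashing step and the JSON-lines format are assumptions made for illustration; they are not prescriptions drawn from the proposed regulation.

```python
# Hypothetical sketch: an append-only audit trail for automated credit decisions,
# so that each outcome can later be traced to a model version and its inputs.
import json, hashlib, datetime

def log_credit_decision(applicant_features: dict, score: float, approved: bool,
                        model_version: str, log_path: str = "decisions.jsonl") -> None:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # Hashing the inputs supports later verification without copying personal data into the log.
        "input_hash": hashlib.sha256(
            json.dumps(applicant_features, sort_keys=True).encode()).hexdigest(),
        "score": score,
        "approved": approved,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage for a single scoring decision.
log_credit_decision({"income": 42000, "existing_debt": 5000},
                    score=0.81, approved=True, model_version="credit-model-1.4.2")
```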

The use of AI in banking continues to grow, and banks and supervisors alike have much to learn. This is an exceptionally fast-moving field, and the coming years are expected to bring rapid advances in computing power, data availability and AI capabilities. Banks should be alert not only to the potential risks of AI implementation, but also to how it may shape the digitalisation strategies underpinning their future business models.

Source: KPMG Article as of 15 March 2023
Image: Unsplash
