DWN Op-Ed: How Can the Transforming Financial Sector Be Fairly Regulated?


By Animesh Jain

INTRODUCTION

Artificial intelligence (AI), and machine learning (ML) in particular, can significantly improve both the delivery of financial services and the operational and risk management procedures behind them. Technology is already embedded in financial services, and this trend will continue to bring significant changes for consumers and financial institutions alike. The efforts of financial authorities to promote innovation and the adoption of new technology within the sector have shaped this trajectory. To maximize the advantages and minimize the risks of these new technologies, solid regulatory frameworks are crucial.

Over time, complex ML models and neural networks have proven more accurate than human decision-making; however, the opacity of ML-based decision processes, which reveal little about how outcomes are reached, does not give executives responsible for automated judgments adequate assurance. The challenge is compounded by bankers’ need to explain these processes and decisions to regulators. A major issue with the use of big data and AI in banking and finance is the strain they place on established privacy norms: because AI systems draw on data about all customers, they may produce decisions with hidden biases, unintentionally exposing banks and the wider financial industry to risk. With this in mind, a comprehensible AI decision support system is required to automate financial services.
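
To make the idea of a “comprehensible” automated decision concrete, the sketch below is purely illustrative and not drawn from any particular institution’s practice: it shows one simple way a lender might attach per-feature explanations to an automated credit decision, using a linear model whose score decomposes exactly into contribution terms that can be reported alongside the outcome. The model, feature names, and data are hypothetical.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: [income, debt_ratio, years_of_history]
    X = np.array([[55, 0.30, 6], [32, 0.55, 2], [78, 0.20, 10], [40, 0.45, 3],
                  [65, 0.25, 8], [28, 0.60, 1], [50, 0.35, 5], [90, 0.15, 12]])
    y = np.array([1, 0, 1, 0, 1, 0, 1, 1])  # 1 = loan approved in the past

    feature_names = ["income", "debt_ratio", "years_of_history"]
    model = LogisticRegression().fit(X, y)

    def explain(applicant):
        """Return the decision plus each feature's contribution to the score.

        For a linear model the log-odds decompose exactly into per-feature
        terms (coefficient * value) plus an intercept, so the explanation
        is faithful to how the decision was actually produced.
        """
        contributions = model.coef_[0] * applicant
        score = contributions.sum() + model.intercept_[0]
        decision = "approve" if score > 0 else "decline"
        return decision, dict(zip(feature_names, contributions.round(2)))

    decision, reasons = explain(np.array([45, 0.50, 4]))
    print(decision, reasons)

More complex models would need dedicated explainability tooling, but the principle is the same: the system reports not only the outcome but the factors that drove it, which is what executives and regulators are asking for.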

FAIRNESS

When financial institutions use AI, most of the issues are similar to those they face with conventional models, but the way they approach them may differ. From a fairness perspective, there are several fundamental guidelines for AI models. To avoid discrimination, for instance, it is crucial to ensure that AI models are trustworthy and reliable. Establishing accountability and transparency also means giving the individuals whose data is used both knowledge of data-driven decisions and the ability to contest them. The growing emphasis on fairness in AI has led to calls for more human engagement with the technology. Most of what regulatory bodies have to say about AI concerns adverse outcomes, such as unintentional biases that produce discriminatory effects and a lack of transparency in AI models. Both AI and human-based models have shortcomings like bias and error, but a significant distinction between the two is that humans can be held accountable.

AI, by contrast, raises questions about where responsibility lies between humans and machines. Unfortunately, several factors make this issue worse, including:

  1. The extent to which financial institutions are utilizing AI.
  2. The way AI algorithms are constructed.
  3. The difficulty of adequately explaining the models.

From a regulatory perspective, the key to making AI regulatory frameworks work is to assign responsibility explicitly and firmly to the appropriate individuals inside an organization. But there will need to be trade-offs between the advantages of widespread automation and the demand for human input and oversight.

To support effective AI governance, fairness could be defined in greater detail. The goal of fairness is to prevent unfair outcomes, yet non-discrimination is not always explicitly covered by consumer protection rules in every jurisdiction. Clearly articulated non-discrimination aims would provide a solid foundation for defining fairness in the context of AI. Giving financial authorities a legal framework for AI-related guidance would further ensure that decisions in the financial services industry, whether shaped by AI, conventional models, or human judgement, are all held to the same standard.
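
As a simple illustration of how a fairness aim could be made measurable, the snippet below computes two widely used statistics, the approval-rate gap and the disparate impact ratio, across a hypothetical protected attribute. The data, groups, and threshold are assumptions for the sketch, not requirements drawn from any regulation.

    import numpy as np

    # Hypothetical model outputs: 1 = approved, 0 = declined,
    # and a protected attribute with two groups, "A" and "B".
    decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
    group     = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

    rate_a = decisions[group == "A"].mean()   # approval rate for group A
    rate_b = decisions[group == "B"].mean()   # approval rate for group B

    parity_gap = abs(rate_a - rate_b)                         # demographic parity difference
    impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)  # disparate impact ratio

    print(f"approval rates: A={rate_a:.2f}, B={rate_b:.2f}")
    print(f"parity gap={parity_gap:.2f}, impact ratio={impact_ratio:.2f}")
    # A common (though not universal) rule of thumb flags ratios below 0.8 for review.

Metrics of this kind do not settle what fairness means, but once non-discrimination aims are stated clearly, they give institutions and supervisors something concrete to monitor against.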

REGULATIONS & STANDARDS

Because AI is complex and difficult to oversee, it requires fair and coordinated regulation and supervision. AI models should therefore be governed and monitored differently depending on their conduct and prudential risks. For instance, AI models whose outcomes have a major impact on conduct and economic risk would require stronger regulation and oversight than those that do not. AI will affect financial institutions’ earnings, the market, consumer safety, and reputation, which means that prudential and conduct authorities must collaborate more closely to monitor how AI is used in financial services.
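
Purely as a sketch of how such risk-tiered oversight might be expressed in practice, a model inventory could tag each AI model with an oversight level based on its conduct and prudential footprint. The tiers, thresholds, and field names below are hypothetical and not taken from any regulatory text.

    from dataclasses import dataclass

    @dataclass
    class ModelProfile:
        name: str
        customer_facing: bool      # does the model drive decisions about customers?
        balance_sheet_impact: str  # "low", "medium", or "high"

    def oversight_tier(m: ModelProfile) -> str:
        """Map a model's conduct and prudential footprint to an oversight tier.

        Hypothetical tiering: models that both face customers and carry high
        balance-sheet impact get the strictest review; purely internal,
        low-impact models get the lightest.
        """
        if m.customer_facing and m.balance_sheet_impact == "high":
            return "tier 1: model-risk review, explainability and fairness testing"
        if m.customer_facing or m.balance_sheet_impact != "low":
            return "tier 2: periodic validation and monitoring"
        return "tier 3: standard model inventory controls"

    print(oversight_tier(ModelProfile("credit scoring", True, "high")))
    print(oversight_tier(ModelProfile("branch staffing forecast", False, "low")))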

Given certain shared beliefs about how AI should be managed in the financial industry, there appears to be an opportunity for standard-setting bodies to create international norms or guidelines in this area. Authorities’ views on how best to apply these recurring themes are still evolving, and in the long run, continued sharing of ideas and experience across jurisdictions may lead to global standards. Such standards could be beneficial, particularly in regions where digital transformation is just getting underway, and could serve as a minimum requirement to support the financial sector’s orderly adoption of AI technologies. Standard-setting bodies will also be able to identify common “best practices” that jurisdictions can draw on as more specialized regulatory methods or supervisory expectations emerge for particular AI use cases. In addition, principles-based guidance can be employed alongside a best-practices approach, because technology trends fluctuate and are not static.


Animesh Jain is a Senior Manager, Government Relations & Policy at MKAI.org. He completed his master’s in international security, with concentrations in China & East Asia and Diplomacy, at Sciences Po, Paris. He has previously worked with organizations including Kubernein Initiative, Tianjin Intertech Corporation, AI Policy Labs, National Skill Development Corporation, and Observer Research Foundation in different capacities.