FSB report on artificial intelligence in the financial sector
The Financial Stability Board (FSB) was established in 2009 by members of the G20 to coordinate the work of financial authorities at the international level and to develop and promote financial stability policies. It comprises representatives of national authorities responsible for finance from 24 countries, financial institutions, and supervisory and regulatory authorities.
On 1 November 2017, the institution, which is headquartered in Basel, published on its website a report entitled Artificial intelligence and machine learning in financial services. Its authors analyse the impact of the increasing use of AI and machine learning on financial services.
According to the report, AI and machine learning are used increasingly often in the financial sector, for example to assess credit quality, automate contact with customers, or price and sell insurance policies.
Financial institutions use AI and machine learning to optimise capital management, test models, and analyse the market impact of their transactions. Hedge funds and brokers use these tools to optimise trading. Both public and private institutions apply them to regulatory compliance as legislation evolves, as well as to surveillance, data quality assessment, and fraud detection.
The report lists the potential benefits and risks that widespread use of these technologies poses for the preservation of financial stability.
The benefits include more efficient processing of information, for example in credit decisions, consumer contracts, and customer interaction, which can make the financial system as a whole more efficient. Moreover, AI and machine learning can forge new, as yet unforeseen, links between financial markets and institutions, due among other things to the availability of new data sources for institutions.
The authors also draw attention to the negative aspects of the development of AI and machine learning. Network effects and the scalability of new technologies may in the future give rise to third-party dependencies, as AI and machine learning services are likely to be provided by a small number of large technology companies. This could lead to the emergence of important new players falling outside the regulatory perimeter, with the potential for monopolies and oligopolies. Dominance of the market by a single technology company could endanger financial stability, because a crisis at that company, or its insolvency, would affect the entire financial system.
Problems with the “interpretability” and “auditability” of AI and machine learning methods could become a macro-level risk if these tools are not properly overseen by microprudential supervisors. Many models produced using AI or machine learning techniques are difficult to interpret, making their classification decisions unintelligible to the end user. This problem is easy to underestimate, especially when such a model outperforms more interpretable alternatives. It could lead to incorrect solutions being proposed in times of economic crisis: because the models are “trained” during periods of low volatility, they cannot indicate what measures to take during a downturn. It is by no means certain that AI-based models will suggest appropriate management of long-term risks.
In light of the numerous risks posed by these two innovative tools, it is vital that systemic analysis be carried out, especially in the context of legal compliance. The most important areas are data protection law and cybersecurity. Development of these technologies must also be monitored continuously; otherwise they could spin out of control, producing results opposite to those intended.
Aleksandra Lisicka