Artificial Intelligence (AI) is already widely utilised by the finance sector, and the market size of AI in fintech is projected to grow from $44.08 billion in 2024 to $50.87 billion by 2029. But, and it’s a big but, AI has also garnered the attention and scrutiny of regulators, many of whom are concerned about how the technology can be used within a compliant framework. Here’s a deep dive into what regulators have said so far and what we can expect going forward.
Areas of Concern for Regulators
One of the major concerns for regulators is what is commonly known as the “black box” problem: the lack of transparency in the way AI algorithms make decisions. This can result in unpredictable outcomes, many of which could be detrimental to the stability of the financial markets. It could also undermine accountability, one of the cornerstones of financial regulation.
At the City and Financial Global AI Regulation Summit 2023, Jessica Rusu, the UK FCA’s Chief Data, Information and Intelligence Officer, stated,
AI has the potential to transform the way we manage our finances, and is becoming pivotal in shaping the global economy. On one side of the coin, we have the shiny prospects of AI-powered innovation, promising greater operational efficiencies and accessibility in financial services, increasing revenues and driving innovation. On the other side of the coin, we have a whole host of potential risks. We are at a key moment now – we have options around deciding where to take AI.
And the US Financial Stability Oversight Council (FSOC) believes that the
lack of ‘explainability’ can make it difficult to assess the system’s conceptual soundness, increasing uncertainty about their suitability and reliability.
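To make the “black box” problem more concrete: techniques such as permutation importance give auditors a model-agnostic view of which inputs actually drive a model’s decisions. Below is a minimal sketch, assuming scikit-learn and a synthetic, hypothetical credit-scoring dataset; the feature names and model are illustrative only, not drawn from any regulator’s guidance.

```python
# Illustrative only: surfacing feature attributions for a hypothetical
# credit-scoring model, assuming scikit-learn is available.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "age", "postcode_band"]  # hypothetical inputs
X = rng.normal(size=(1000, len(features)))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the
# drop in accuracy -- an auditable, model-agnostic view into the "black box".
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:>14}: {score:.3f}")
```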
“Automation bias” has also been flagged by regulators, with concerns that overdependence on AI, without examining the validity of its outputs, could lead to misinterpretations and errors. Plus, AI systems are notorious for bias, introduced through the data used to train models. Such bias could lead to discriminatory financial decisions that harm both businesses and consumers. Regulators have expressed concern that AI systems could “produce, and possibly mask, biased or inaccurate results that could, in turn, implicate consumer protection issues.”
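Here is one way such a bias check might look in practice: a minimal demographic-parity sketch using only NumPy and synthetic approval decisions. The groups, scores and threshold are illustrative assumptions; the 80% cut-off is the “four-fifths” rule of thumb commonly used to flag disparate impact, not a regulatory formula.

```python
# Illustrative bias check: compare approval rates across two hypothetical
# customer groups (demographic parity). Synthetic data, NumPy only.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=5000)             # 0 = group A, 1 = group B
score = rng.normal(loc=0.5 + 0.05 * group, scale=0.15, size=5000)  # skewed scores
approved = score > 0.5                            # hypothetical approval threshold

rate_a = approved[group == 0].mean()
rate_b = approved[group == 1].mean()
print(f"approval rate, group A: {rate_a:.2%}")
print(f"approval rate, group B: {rate_b:.2%}")

# The "four-fifths" rule of thumb flags disparate impact when one group's
# approval rate falls below 80% of the other's.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"disparate impact ratio: {ratio:.2f}" + ("  <- review model" if ratio < 0.8 else ""))
```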
Concerns regarding cybersecurity and data privacy have also been extensively addressed by regulators, especially around the way data is collected, analysed and stored, with the EU’s GDPR being a prominent example. There are also concerns that AI might increase vulnerability to cyber threats.
According to a 2023 study by McAfee, 53% of adults share their voice online at least once a week and it takes AI just 3 seconds of audio to clone a voice. Unfortunately, cybercriminals are among the first adopters of such technology, with 77% of AI voice scam victims losing money. Regulators fear that deepfakes could be used to defraud investors and also manipulate the markets.
Another major area of concern is the ethical implementation of AI in financial services. Building on the four pillars of accountability, transparency, fairness and ethics, the G20 has adopted human-centred AI principles, the OECD has published its own AI guidelines, the EU has its Ethics Guidelines for Trustworthy AI, and Singapore has the Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) and the Personal Data Protection Commission (PDPC) framework for AI. Singapore’s central bank, the Monetary Authority of Singapore (MAS), has already translated these principles into practice, creating the Veritas AI ethics framework in collaboration with 16 financial services providers.
A Global Move to Regulate AI in Finance
We’re looking at AI and the capabilities of AI. The challenge for every regulator across the globe is to stay proactive and to be ahead of financial market developments and technological expansion no matter how fast-paced it is. So yes, adopt the technologies, but be careful and do it in steps.
commented Dr George Theocharides, Chairman of CySEC, in an interview with Finance Magnates.
Here’s a snapshot of the latest developments in the regulatory framework for AI applications across the world.
The European Union
The EU leads the world in AI governance, having reached political agreement on the AI Act in December 2023.
The AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union and offers strong safeguards for our citizens and our democracies against any abuses of technology by public authorities.
the European Parliament announced. The European Commission has also agreed with G7 leaders on a voluntary Code of Conduct for AI developers and a set of international guiding principles for AI. ESMA also has a Risk Standing Committee that monitors risks to retail investors and to the financial stability of Europe.
The UK
The Financial Conduct Authority (FCA) pegs itself as technology-agnostic and pro-innovation, and believes that digital infrastructure, consumer and data safety, and resilience are critical to AI integration. The dependence of AI deployments on large datasets and the cloud creates interdependencies that could expose financial stability to third-party risks. To address these, the FCA is already working on a Critical Third Parties regime targeting the systemic risks that arise from financial services providers’ reliance on third parties. The aim is to build resilient digital infrastructure and bolster cybersecurity.
Safety and resilience were also the key focus areas of the AI Safety Summit held by the UK government in November 2023. The key message of the summit was for financial firms to take responsibility for their own operational resilience, including for services outsourced to third parties. The FCA is also working on responsible AI use based on maintaining data quality, governance, management, accountability, ownership and protection.
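In engineering terms, that resilience message often translates into patterns like timeouts and graceful fallbacks around outsourced dependencies. The sketch below is illustrative only: the third-party scoring service and the fallback rule are hypothetical stand-ins, not anything prescribed by the FCA.

```python
# Illustrative resilience pattern: call a hypothetical third-party AI
# service, and fall back to an in-house rule if the dependency fails.
import random


def third_party_credit_score(applicant_id: str) -> float:
    """Stand-in for an outsourced AI scoring API; may fail or hang."""
    if random.random() < 0.3:                     # simulate an outage
        raise TimeoutError("third-party service unavailable")
    return random.uniform(0.0, 1.0)


def in_house_fallback_score(applicant_id: str) -> float:
    """Conservative rule-based fallback kept under the firm's own control."""
    return 0.5  # e.g. route to manual review instead of auto-deciding


def score_applicant(applicant_id: str) -> tuple[float, str]:
    try:
        return third_party_credit_score(applicant_id), "third_party"
    except TimeoutError:
        # Degrade gracefully: the critical service stays up even when
        # the outsourced dependency does not.
        return in_house_fallback_score(applicant_id), "fallback"


score, source = score_applicant("A-1024")
print(f"score={score:.2f} via {source}")
```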
The United States
The federal government has proposed the American Data Privacy and Protection Act, which details rules for AI applications, including obligations related to risk assessment. The National Institute of Standards and Technology (NIST) has also issued guidelines on behalf of the US government, such as the Secure Software Development Framework and the AI Risk Management Framework.
The FSOC has highlighted data security, consumer privacy, and the risks of generative AI as its top considerations. According to the council, because humans have little direct involvement in predictive AI-driven processes, a large gap can open up between perceived and actual bias in model design. This could threaten the overall security and integrity of banking and financial services, which, in turn, calls for increased vigilance over AI models.
Monitoring the rapid developments in AI, including generative AI, will be essential to helping ensure that oversight structures keep up with or stay ahead of risks posed by AI adoption, while facilitating efficiency and responsible innovation that promotes benefits and minimizes risks.
according to the FSOC.
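In practice, that vigilance usually includes ongoing drift monitoring. The sketch below computes the Population Stability Index (PSI), a metric long used in credit-risk model monitoring to flag when live data has drifted away from the distribution a model was trained on. The data is synthetic, and the thresholds in the comment are common industry rules of thumb, not an FSOC requirement.

```python
# Illustrative model monitoring: Population Stability Index (PSI) compares
# the distribution of a model input (or score) at training time vs. today.
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI = sum((actual% - expected%) * ln(actual% / expected%)) over bins."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid division by zero / log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


rng = np.random.default_rng(2)
train_scores = rng.normal(0.0, 1.0, size=10_000)   # distribution at training time
live_scores = rng.normal(0.3, 1.1, size=10_000)    # live data has shifted

value = psi(train_scores, live_scores)
# Common rules of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
print(f"PSI = {value:.3f}")
```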
Australia
The Australian Securities and Investments Commission (ASIC) is also monitoring developments in AI to gain insights into the opportunities and threats the technology brings. To safeguard market integrity and consumers, the regulatory watchdog believes that the financial ecosystem has a duty to balance innovation with the ethical and responsible use of technology. For now, ASIC is exploring potential applications of AI and how the ethical use of AI and customer data can be promoted.
International Efforts
The AI Safety Summit, held in the UK in November 2023, brought together regulators from across the world, including the EU, UK, US, China, India, Japan and Brazil, to address concerns regarding:
- Reliability and potential biases in data sources.
- Risks of financial models.
- Governance of AI use.
- Consumer protection.
The declaration issued by the attendees of the summit establishes a commitment to the design, development, deployment and use of AI in a way that is human-centric, safe, trustworthy and responsible.
While establishing standards is the responsibility of regulators, maintaining ethical and clear communication with customers is the duty of financial services providers. Yet 45% of financial services providers across Europe lack an AI ethics framework, and 38% have no clear demarcation of accountability.
Our wide experience of partnering with banks, brokers and fintechs gives us deep insights into the challenges and opportunities the industry faces, including those brought about by the evolution of AI. Let the human experts at Contentworks create and implement compliant multichannel content strategies for your business. Book a free Zoom call with us to get started.