Table of contents

  • This is the seventh edition of the OECD Business and Finance Outlook, an annual publication that presents unique data and analysis on the trends, both positive and negative, that are shaping tomorrow’s world of business, finance and investment.

  • Artificial intelligence (AI) is transforming many aspects of our lives, including the way we provide and use financial services. AI-powered applications are now a familiar feature of the fast-evolving landscape of technological innovations in financial services (FinTech). Yet we have reached a critical juncture for the deployment of AI-powered FinTech. Policy makers and market participants must redouble their engagement on the rules needed to ensure trustworthy AI for trustworthy financial markets.

  • Deployment of AI applications across the full spectrum of finance and business sectors has progressed rapidly in recent years, such that these applications have become, or are on their way to becoming, mainstream. AI, i.e. machine-based systems able to make predictions, recommendations or decisions based on machine or human input for a given set of objectives, is being applied in digital platforms and in sectors ranging from health care to agriculture. It is also transforming financial services. In 2020 alone, financial markets witnessed global spending of over USD 50 billion on AI and total AI venture capital investment of over USD 4 billion worldwide, accompanied by a boom in the number of AI research publications and in the supply of AI job skills.

  • Compared to many other sectors, AI is being diffused rapidly in the financial sector. This creates opportunities but also raises distinctive policy issues, particularly with respect to the use of personal data and to security, fairness and explainability considerations. This chapter introduces AI and its applications in finance and proposes three complementary frameworks to structure the AI policy debate in this sector, helping policy makers assess the opportunities and challenges of AI diffusion. One approach assesses how each of the ten OECD AI Principles applies to this sector. A second approach considers the policy implications and stakeholders involved in each phase of the AI system lifecycle, from planning and design to operation and monitoring. A third approach looks at different types of AI systems using the OECD framework for the classification of AI systems to identify different policy issues, depending on the context, data, input and models used to perform different tasks. The chapter concludes with a stocktaking of recent AI policies and regulations in the financial sector, highlighting policy efforts to design regulatory frameworks that promote innovation while mitigating risks.

  • Artificial intelligence (AI) is increasingly deployed by financial services providers across industries within the financial sector. It has the potential to transform business models and markets for trading, credit and blockchain-based finance, generate efficiencies, reduce friction and enhance product offerings. With this potential comes the concern that AI could also amplify risks already present in financial markets, or give rise to new challenges and risks. This is becoming more of a preoccupation amidst the high growth of AI applications in finance. This chapter examines how policy makers can support responsible AI innovation in the financial sector, while ensuring that investors and financial consumers are duly protected and that the markets around such products and services remain fair, orderly and transparent. The chapter reviews the benefits and challenges associated with data management, explainability, and the robustness and resilience of machine learning models, as well as their governance. It suggests policy recommendations to mitigate such risks and promote the safe development of AI use-cases in finance.

  • While the benefits and opportunities of AI seem boundless, certain applications of AI risk causing intentional or unintentional harms. It is critical to ground conversations on AI development in international standards on responsible business conduct, a foundation of sustainable economic development. International standards set out recommendations to help companies identify and address the negative impacts their operations and products may have on people and the environment. This chapter focuses on potential human rights impacts of AI and how companies developing and using AI can apply OECD guidance on human rights due diligence. It also examines how existing legislation, both on human rights and on AI, deals with this issue.

  • This chapter explores some of the potential competition risks stemming from the use of AI, namely collusion and abuses of dominance, and highlights the challenges they pose for competition policy. It is still too early to tell whether many of these risks will materialise, or what their overall impact on markets will be. However, it is clear that competition policy will have a role to play in ensuring that AI reaches its procompetitive potential. Beyond the need for sufficient technical capacity to assess AI technologies and their effects in markets, competition authorities may find that certain market outcomes caused by AI are difficult to address under current enforcement frameworks. Nonetheless, competition authorities still have tools at their disposal to address at least some concerns regarding AI. These tools include merger control, market studies, competition advocacy and co-operation with other regulators. Further, some jurisdictions are considering additional legislative measures that may help.

  • Digital technologies and data, including artificial intelligence (AI), hold the potential to automate and thus improve the efficiency and effectiveness of regulatory, supervisory and enforcement activities. These functions have become increasingly complex, given the substantial increase in the volume of regulatory-relevant data to be processed in recent years, along with the growth of digital market forces posing new challenges. Market regulators and public enforcement authorities have turned to supervisory technology (SupTech) tools and solutions as a means to improve their surveillance, analytical and enforcement capabilities, which can in turn have important benefits for financial stability, market integrity and consumer welfare. This chapter takes stock of the most common uses of SupTech by regulatory, supervisory and enforcement authorities to date, identifies the associated benefits, risks and challenges, and outlines considerations for devising adequate SupTech strategies.

  • Artificial intelligence (AI) has the potential to resolve many challenges that our societies face, but as with all innovations, foreign acquisitions of some AI applications may raise security concerns. International investment in established companies is an important vector for the diffusion of AI-related technologies across borders. Concerns about implications for essential security interests have led to tighter government control over such acquisitions, with AI-related technologies often explicitly included in the scope of investment screening mechanisms. Financing of research abroad is a parallel legal avenue to acquire know-how that is unavailable domestically, and can substitute for acquisitions of established companies. Governments have now begun to set out policies to control such transfers, specifically for AI-related areas. As for foreign investment in equity, such policies need to be carefully devised to avoid forgoing the benefits of international research co-operation. Policy principles agreed at the OECD to strike this balance could offer inspiration.