Executive summary

Accelerating the productivity of research could be the most economically and socially valuable of all the uses of artificial intelligence (AI). While AI is penetrating all domains and stages of science, its full potential is far from realised. Policy makers and actors across research systems can do much to accelerate and deepen the uptake of AI in science, magnifying its positive contributions to research. This will support the ability of OECD countries to grow, innovate and address global challenges, from climate change to new contagions.

Broad multidisciplinary programmes are needed that bring together computer and other scientists with engineers, statisticians, mathematicians and others to solve challenges using AI. Among other measures, dedicated government funding is required. It needs to be allocated using processes that encourage broad collaboration, rather than siloed funding for individual disciplines. One priority is to foster interaction between roboticists and domain experts. Laboratory robots could revolutionise some domains of science, lowering the cost and hugely increasing the pace of experimentation.

Governments can encourage and support visionary initiatives with long-term impact. Initiatives such as the Nobel Turing Challenge – to build autonomous systems capable of world-class research – can inspire collaboration and co-ordination in science, to help focus efforts on global challenges, drive agreement on standards and attract young scientists to such ambitious endeavours.

It is important to increase access to high-performance computing (HPC) and software for advances in AI and science. The provision of computing resources by large tech companies is helpful, but it leaves important gaps, and less well-funded research groups could fall behind. Renting state-of-the-art HPC/AI computing resources from commercial cloud providers is, in most cases, unrealistically expensive for academics seeking to remain competitive. National laboratories and their computing infrastructures, in collaboration with industry and academia, could address the gaps and help to develop training materials for institutions of tertiary education. Countries at the forefront of the field, including the United States and leaders in the European Union, could also collaborate on policy frameworks to make resources available from a shared pool.

Updating curricula could assist. For example, using already proven AI-enabled techniques, students could be taught how to search for new hypotheses in existing scientific literature. The standard biomedical curriculum provides no such training. New integrative PhD programmes and/or industry research programmes based on knowledge synthesis – aided by AI – could also help.

Governments can take steps to increase the availability of open research data and to harness the power of data across various fields, from health to climate. Examples include Europe’s Health Data Space, and GAIA-X, which aims to build a federated data infrastructure for Europe. Research centres can be helped to adopt systems such as federated learning that can apply AI to sensitive data held by multiple parties without compromising privacy. Another challenge is to make laboratory instruments more interoperable via standardised interfaces. Governments could bring laboratory users, instrument suppliers and technology developers together and incentivise them to achieve this goal.
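The federated-learning approach mentioned above can be illustrated with a minimal sketch of federated averaging (FedAvg), in which each party trains a model on its own records and shares only model parameters with a central co-ordinator; raw data never leave the party. The data, number of parties and learning rates below are illustrative assumptions, not details drawn from this report.

```python
# Minimal sketch of federated averaging (FedAvg): each party trains
# locally on its own data and shares only model parameters, never records.
import random

random.seed(0)

def local_data(n, w_true=2.0, b_true=1.0):
    # Each party holds its own (x, y) records; these never leave the party.
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [w_true * x + b_true for x in xs]
    return xs, ys

def local_update(w, b, xs, ys, lr=0.1, epochs=20):
    # One party's local gradient-descent training of a linear model y = w*x + b.
    for _ in range(epochs):
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        w, b = w - lr * gw, b - lr * gb
    return w, b

parties = [local_data(50) for _ in range(3)]
w, b = 0.0, 0.0
for _ in range(10):  # communication rounds
    updates = [local_update(w, b, xs, ys) for xs, ys in parties]
    # The co-ordinator averages parameters only; data stay with each party.
    w = sum(u[0] for u in updates) / len(updates)
    b = sum(u[1] for u in updates) / len(updates)

print(f"global model: w={w:.2f}, b={b:.2f}")  # approaches w=2.0, b=1.0
```

In practice, privacy-preserving deployments combine this parameter-averaging pattern with further safeguards (e.g. secure aggregation), which the sketch omits.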

Public research and development (R&D) can target areas of research where breakthroughs are needed to deepen AI’s uses in science and engineering. Research goals include going beyond current models based on large datasets and high-performance computing, and finding ways to automate the large-scale creation of findable, accessible, interoperable and reusable (FAIR) data. Another target could be to advance AutoML – automating the design of machine-learning models – to help address the scarcity and high cost of AI expertise. Research challenges could be organised around AutoML for science, and research could be funded that applies AutoML in AI-driven science.
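The core AutoML idea – automating model design and tuning rather than relying on scarce human expertise – can be sketched as a search over candidate training configurations, keeping whichever performs best on held-out data. The task, search space and parameter ranges below are invented for illustration.

```python
# Illustrative AutoML sketch: random search over training hyperparameters,
# selecting the configuration that minimises validation error.
import random

random.seed(1)

# Synthetic 1-D regression task: y = 3x - 2 plus a little noise.
xs = [random.uniform(-1, 1) for _ in range(120)]
data = [(x, 3 * x - 2 + random.gauss(0, 0.1)) for x in xs]
train, valid = data[:80], data[80:]

def fit(lr, epochs):
    # Plain stochastic gradient descent on a linear model y = w*x + b.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in train:
            err = w * x + b - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def val_loss(w, b):
    # Mean squared error on the held-out validation split.
    return sum((w * x + b - y) ** 2 for x, y in valid) / len(valid)

# The "AutoML" loop: sample candidate configurations, keep the best.
best_loss, best_cfg = None, None
for _ in range(20):
    cfg = (10 ** random.uniform(-3, -0.5), random.randint(5, 50))
    loss = val_loss(*fit(*cfg))
    if best_loss is None or loss < best_loss:
        best_loss, best_cfg = loss, cfg

print(f"selected lr={best_cfg[0]:.4f}, epochs={best_cfg[1]}, "
      f"validation MSE={best_loss:.4f}")
```

Production AutoML systems search far larger spaces (model architectures, feature pipelines) with smarter strategies than random search, but the select-by-validation-error principle is the same.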

Support should also be given for the development of open platforms (such as OpenML and DynaBench) that track which AI models work best for a wide range of problems. Public support is needed to make such platforms easier to use across many scientific fields.

Public R&D could help foster new, interdisciplinary, blue-sky thinking. For instance, natural language processing (NLP) can help researchers cope with the enormous growth of the scientific literature. However, current performance claims are overstated. Today’s research in NLP also offers limited incentives for the sort of high-risk, speculative ideation that breakthroughs may need. Research centres, funding streams and/or publication processes could be set up to reward novel methods – even if these are at a nascent stage.

Knowledge bases organise the world’s knowledge by mapping the connections between different concepts, drawing on information from many sources. Governments should support an extensive programme to build knowledge bases essential to AI in science, a need that will not be met by the private sector. Research could work towards creating an open knowledge network to serve as a resource for the whole AI research community. Relatively small amounts of public funding could help bring together AI scientists, scientists from multiple domains and professional societies – along with volunteers – to build the foundations for AI to utilise and communicate professional and commonsense knowledge.

The thematic diversity of research on AI appears to be narrowing and is increasingly driven by the compute- and data-intensive approaches that dominate in large tech companies. Bolstering public R&D might make the field more diverse and help to grow the talent pool. Funders could pay special attention to projects that explore new techniques and methods separate from the dominant deep-learning paradigm. Meanwhile, policy makers could support research to examine and quantify losses of technological resilience, creativity and inclusiveness brought about by a narrowing of AI research and the possible implications of the increasing dominance of industry in AI research.

Much of AI in science already involves teaming with people, and funders could help develop specialised tools to enhance collaborative human-AI teams and to integrate these tools into mainstream science. Combining the collective intelligence of humans and AI is important, not least because science is now carried out by ever-larger teams and international consortia. Investment in this field of research has lagged behind other topics in AI.

Among other fields, progress is needed in applying machine learning to medical imaging, where failures during the COVID-19 pandemic were considerable. As in other uses of machine learning in science, incentives are needed to encourage research on methods with greater validation. Funding should involve more rigorous evaluation practices.

Policy bodies should systematically evaluate the impacts of AI on everyday scientific practice, including on human-AI teaming, work, career trajectories and training – where important changes could occur. Funding calls could require such assessments, and funders and policy makers should establish response mechanisms to act on the insights gathered. Among other measures, funders and policy makers could establish and support new independent fora for ongoing dialogue about the changing nature of scientific work and its impacts on research productivity and culture.

The deployment of large language models (LLMs), such as ChatGPT, demands attention from policy makers, as their consequences are currently uncertain. LLMs could encourage shallower work by making it easier to produce, blur concepts of authorship and ownership, and possibly create inequalities between speakers of high- and low-resource languages. However, LLMs and other forms of AI could also aid governance processes, for instance by supporting peer review – a possibility that requires more study and testing.

Policy should address the potential dangers entailed in dual use of AI-powered drug discovery. Little attention has been paid to the imminent dangers of automating the design, testing and production of extremely lethal molecules (and there will be other dual-use research to consider, too). Policy makers and other actors in the research system need to assess which of the possible governance arrangements will best protect the public good.

Existing social networks and platforms could be used to help spread emerging practices. Social platforms such as Academia.edu and the Loop community could be used as testbeds for experimenting with combined human-AI knowledge discovery, idea generation and synthesis, and for propagating and evolving such approaches as literature-based discovery.

Steps are likewise needed to improve the reproducibility of AI research. Among other actions, public funding agencies can require code, data and metadata to be shared freely with third parties, allowing them to run experiments on their own hardware.

There is a strong case for sub-Saharan Africa, and possibly other developing regions, to receive much greater funding for AI in science. Development co-operation can help countries to advance open science, frame data protection legislation, improve digital infrastructures, strengthen overall AI readiness and support Africa’s own emerging initiatives, including indigenous development of data, software and technology. Projects with developing countries for AI in science can be mutually beneficial, and low-cost models of support have been proven. Development co-operation can also help create and support centres of research excellence.

Disclaimers

The Executive Summary and Chapter entitled “Artificial intelligence in science: Overview and policy proposals” were approved by the Committee on Scientific and Technological Policy at its 122nd Session on 22-24 March 2023 and prepared for publication by the OECD Secretariat.

The essays set out in Parts I to IV of this document are under the responsibility of the authors named and the opinions expressed and arguments employed therein are their own. The essays benefited from input and comments from the OECD Secretariat and CSTP delegates. The essays should not be reported as representing the views of the OECD or of its member countries.

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area.

Photo credits: Cover © PopTika/Shutterstock.com.

Corrigenda to OECD publications may be found on line at: www.oecd.org/about/publishing/corrigenda.htm.

© OECD 2023

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at https://www.oecd.org/termsandconditions.