Global regulators set out principles for safe AI across the medicines lifecycle

From discovery to manufacturing and post-marketing surveillance, new joint principles from the European Medicines Agency and US Food and Drug Administration aim to ensure that AI in medicines development is human-centred, risk-based and transparent.

The pharmaceutical industry has used mathematical models to identify and design new drugs for decades, but the development of generative AI is enabling faster routes to new targets, speeding up early research and the monitoring of clinical trials, manufacturing and safety.

With the AI train showing no signs of slowing down, the European Medicines Agency (EMA) and US Food and Drug Administration (FDA) have jointly identified ten principles for the development and use of AI technologies across the medicines lifecycle (see Box). 

Box: The European Medicines Agency and US Food and Drug Administration’s ten principles for the development and use of AI technologies across the medicines lifecycle

The ten principles are:

1. Human-centric by design to align with ethical and human-centric values;

2. Risk-based approach with proportionate validation, risk mitigation and oversight based on the context of use and determined model risk;

3. Adherence to legal, ethical, technical, scientific, cybersecurity and regulatory standards;

4. Clear context of use for role and scope;

5. Multidisciplinary expertise covering the AI technology and its context of use integrated throughout the technology’s lifecycle;

6. Data governance and documentation in a detailed, traceable and verifiable manner, maintained throughout the technology’s lifecycle;

7. Model design and development following best practices that are fit for use, considering interpretability, explainability and predictive performance, and promoting transparency, reliability, generalisability and robustness for AI technologies contributing to patient safety;

8. Risk-based performance assessments to evaluate the complete system including human–AI interactions;

9. Lifecycle management with risk-based quality management systems implemented throughout the AI technologies’ lifecycles;

10. Clear, essential information presented in plain language relevant to the intended audience regarding the AI technology’s context of use, performance, limitations, underlying data, updates and interpretability or explainability.

Government missions

This initiative between the EMA and FDA builds on collaborative work that followed a bilateral meeting of the two organisations in April 2024. It aligns with the EMA’s mission to promote the safe and responsible use of AI, as outlined in the ‘European medicines agencies network strategy’ (EMANS), which highlights leveraging data, digitalisation and AI, and with the EMA network data steering group’s data and AI workplan, which aims to optimise the use of data and AI.

New pharmaceutical legislation, agreed by the European Parliament and the Council of the EU in December 2025, also accommodates broader use of AI across the medicines lifecycle and in regulatory decision-making, and creates additional possibilities for testing innovative, AI-driven methods for medicines in a controlled environment.

In the UK, the government’s ‘AI for science strategy’, published in November 2025, set as its first mission a national goal of using AI to “accelerate drug discovery to develop trial-ready drugs within 100 days by 2030 and contribute to deploying new treatments faster”.

Principles in practice

The UK’s strategy positions AI-enabled drug discovery as a core part of scientific infrastructure rather than a future aspiration, paving the way for principles such as the ten identified by the EMA and the FDA as good AI practice in the medicines lifecycle to be put into practice in UK drug development.

While many of the expectations underpinning these principles — including data integrity, validation documentation and accountability — are “already embedded within existing regulatory and good practice frameworks” in the UK, Amira Guirguis, chief scientist at the Royal Pharmaceutical Society, says “formal alignment with internationally agreed AI principles would provide greater clarity and consistency, particularly for medicines developed and intended for approval in more than one country”.

There is an important difference between AI being used to support research and relying on it as part of the formal evidence submitted for regulatory approval

Amira Guirguis, chief scientist at the Royal Pharmaceutical Society

In addition to high-level governance principles, Guirguis also believes that it is important to be clear about what an AI system is designed to do and what it is not. 

“There is an important difference between AI being used to support research and relying on it as part of the formal evidence submitted for regulatory approval,” she says.

“And in those cases, systems must be rigorously tested, clearly documented and subject to appropriate oversight.”

Medicines and Healthcare products Regulatory Agency AI initiatives

The Medicines and Healthcare products Regulatory Agency (MHRA) is progressing its own portfolio of regulatory science initiatives that explore how AI and advanced computational methods can safely strengthen evidence generation and decision-making, while “maintaining robust assurance, documentation and accountability”.

For example, to ensure regulation keeps pace with AI-enabled discovery and manufacturing, the MHRA is engaged with the Centre for Excellence in Regulatory Science and Innovation (CERSI), led by CMAC at the University of Strathclyde, which brings together real-world industry evidence on how AI and digital tools are being used in pharmaceutical manufacturing and quality systems.

The Medicines and Healthcare products Regulatory Agency’s consistent approach is that AI should be used to augment expert judgement, not replace it

Spokesperson for the MHRA

It is also connected to the UK’s Centre of Excellence on In-Silico Regulatory Science and Innovation (UK CEiRSI), which aims to develop tools, standards and best practice for integrating computational models and digital simulations (including AI/machine learning approaches) into regulatory decision-making.

“Across all of these initiatives, MHRA’s consistent approach is that AI should be used to augment expert judgement, not replace it; that data governance, documentation and validation are essential; and that models must be assessed against their intended context of use, with clear accountability and lifecycle management,” a spokesperson for the MHRA told The Pharmaceutical Journal.

“These principles are closely aligned with the direction set out by EMA and FDA and are intended to support responsible innovation while maintaining public trust,” they added.

Benefits of AI

Pharmaceutical companies are already testing the benefits of adopting good AI practice in a medicine’s lifecycle.

In November 2025, GSK announced £45m in funding for six research programmes starting in 2026, in partnership with the Fleming Initiative, aimed at slowing the progress of antimicrobial resistance through the discovery of new drugs; improving understanding of how the immune system responds to drug-resistant bacteria; and creating AI models that predict how drug-resistant pathogens emerge and spread.

As these technologies reduce development time and cost, it is reasonable to consider whether current patent periods remain appropriate

Mark Samuels, chief executive of Medicines UK

Another project will focus on developing an AI model to design antibiotics for multi-drug-resistant Gram-negative bacteria, such as Escherichia coli, which can cause urinary tract and bloodstream infections; and Klebsiella pneumoniae, which can cause pneumonia, sepsis and meningitis.

There could be other positive outcomes for patients too. 

“As these technologies reduce development time and cost, it is reasonable to consider whether current patent periods remain appropriate and whether earlier competition could benefit patients and the NHS,” says Mark Samuels, chief executive of Medicines UK.

Nonetheless, drug development remains high-risk and resource intensive, which is why principles for safe AI practice are vital and should be implemented in every jurisdiction. 

“Investment in validation, documentation and specialist expertise will increase upfront costs,” adds Guirguis.

“So these safeguards reduce the likelihood of regulatory challenge, unreliable findings or safety issues emerging later. 

“In that sense, good AI practice protects both patients and long-term value.”

Citation
The Pharmaceutical Journal, March 2026, Vol 317, No 8007. DOI: 10.1211/PJ.2026.1.403215
