Model Behavior: FDA and EMA’s Guide to Good AI in Drug Development
The US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) just issued a joint statement outlining 10 “Guiding Principles” for using artificial intelligence (AI) in drug development.
The principles apply to sponsors, contract research and manufacturing organizations, software and data vendors, and other partners that design, validate, deploy, or rely on AI in regulated work. According to the statement, these 10 guiding principles identify areas “where the international regulators, international standards organizations, and other collaborative bodies could work to advance good practice in drug development.” This alert explains what was issued, who is affected, what the principles mean in plain language, and why they matter for interactions with the FDA and EMA.
What Was Released
The statement lays out 10 principles for how AI should be designed, used, and managed when it generates or analyzes evidence in drug development. According to an FDA release on the statement, the principles will allow the potential of AI to be fully realized while ensuring the information it produces is reliable enough to protect patient safety and support regulatory excellence. The FDA says that guiding principles are essential to address “the unique challenges and considerations of AI applications in drug development.”
The joint FDA-EMA statement adopts a practical and broad definition of AI, covering system-level technologies that support non-clinical research, clinical studies, manufacturing, and post-marketing safety and effectiveness work. It also makes clear that drugs must still meet core requirements for quality, efficacy, and safety, and that AI should support, not weaken, protections for patients. The term “drug” is used broadly to include drugs and biological products in the United States and medicinal products in the European Union.
Why This Matters for FDA and EMA Interactions
The FDA and EMA focus on whether the evidence behind decisions is reliable and complete. While the new principles do not change existing law, the Agencies say they align with what regulators already use in practice. The statement makes clear that sponsors and their partners should expect questions from regulators about how and where the data was generated, how it was processed, and how the model was tested to show it works for its intended purpose. Regulators will also look at human oversight of AI-assisted decisions, cybersecurity protections for data and systems, and how performance is evaluated over time so that issues like data drift are found and fixed. Aligning early with these principles can reduce rework in submissions, help with inspection readiness, and build confidence that AI-supported processes produce reliable, auditable results across the drug lifecycle.
The 10 Principles in Plain Language
The 10 principles fall into five general “themes.”
The first theme is human-centric design. AI should be built and used in ways that reflect ethical and human values. Sponsors and their partners should think ahead about how the AI technology may affect patients and users and build in protections from the start. The goal is to advance patient interests and public health while avoiding foreseeable harm. (Principle #1: Human-centric by design).
The second theme is risk-based control. Every AI use should have a clear context of use that explains what the AI does and how its outputs will be used. Based on that context, sponsors and their partners should decide how much risk the model poses and then match testing, safeguards, and oversight to that level of risk. Lower-risk tools can have lighter controls. Higher-risk tools that inform important decisions need deeper testing and stronger safeguards. (Principle #2: Risk-based approach).
The third theme is alignment with standards and the right expertise. AI work should follow applicable legal, ethical, technical, scientific, cybersecurity, and regulatory standards, including Good Clinical Practice and Good Manufacturing Practice. Drawing on a mix of skills across the AI lifecycle is strongly advised, including domain experts, data scientists, software engineers, cybersecurity specialists, clinical and manufacturing quality leads, and patient safety personnel. The skill mix should match the intended use. (Principles #3: Adherence to standards and #5: Multidisciplinary expertise).
The fourth theme is sound data and model practice. Sponsors and their partners need to track and document data sources, processing steps, and analytical choices in a way that is detailed, traceable, and verifiable, with appropriate privacy protections. They should build models using sound software and systems engineering, use datasets that fit the intended use, and weigh tradeoffs among interpretability, explainability, and predictive performance. A clear context of use, along with transparency, reliability, generalizability, and robustness, matters because patient safety depends on it. (Principles #4: Clear context of use, #6: Data governance and documentation, and #7: Model design and development practices).
The fifth theme is rigorous evaluation and lifecycle control. Performance assessment by sponsors and their partners should look at the full system, including how people interact with the AI in real-world workflows. Validation should use data and measures that match the stated context of use, and testing should be well designed. Quality management systems should govern the AI lifecycle, and sponsors and their partners should plan for monitoring and periodic re-evaluation to find issues like data drift and to make sure performance stays acceptable. Clear, plain-language communication to users, patients, and other stakeholders is essential to convey key information about the AI: its purpose, how well it performs, its limits, the data behind it, how it is updated, and how its outputs should be interpreted. (Principles #8: Risk-based performance assessment, #9: Life cycle management, and #10: Clear, essential information).
Fit With Existing FDA and EMA Requirements
These principles fit within frameworks the FDA and EMA already use to assess data integrity, product quality, and patient safety. The focus on good documentation mirrors long-standing expectations for traceability and auditability. The emphasis on validation and fit-for-use measures aligns with how regulators review clinical and manufacturing evidence today. The call for quality management across the AI lifecycle is consistent with Good Clinical Practice for trials and Good Manufacturing Practice for production, including change control, handling deviations, and periodic review. The attention to cybersecurity and privacy matches current requirements to protect systems and sensitive data. Because the principles sit within these familiar frameworks, the FDA and EMA can apply them without new rules; they can be implemented through existing processes and controls.
Last year, our team broke down two FDA guidances on the use of AI in drugs, biologics, and medical devices. You can read that alert here.
If you have any questions on the FDA and EMA’s 10 new guiding principles, please contact Abha Kundi, Emily Cowley Leongini, or a member of our Food, Drug, Medical Device & Cosmetic team.