FDA Proposes Framework to Advance Credibility of AI Models Used for Drug and Biological Product Submissions
The FDA’s new draft guidance introduces a framework to strengthen the credibility of AI models in drug and biologic submissions, aiming to ensure transparency, reliability, and ethical use of artificial intelligence in pharmaceutical development.
1/6/2025 · 2 min read


FDA’s Draft Framework on AI Credibility in Drug and Biologic Submissions: Building Trust in Digital Innovation
In January 2025, the U.S. Food and Drug Administration (FDA) released a new draft guidance that introduces a framework to make artificial intelligence (AI) models more credible and transparent in drug and biologic submissions. This is the first official guidance from the agency that focuses on how AI can support decisions about a product’s safety, effectiveness, and quality.
According to FDA Commissioner Dr. Robert M. Califf, the goal is clear: to support innovation while maintaining strong scientific and regulatory standards. He explained that with the right safeguards, AI can transform clinical research and speed up medical product development.
Why This Matters for Drug Development
Since 2016, the FDA has received more than 500 submissions that included AI-based tools. These technologies are already being used to predict patient outcomes, analyze disease patterns, and process large datasets from real-world sources.
However, using AI in such sensitive areas requires confidence that a model's outputs are accurate and fit for purpose. The FDA refers to this as model credibility: trust in how well an AI model performs for a specific context of use.
What the Draft Guidance Emphasizes
The new document proposes a risk-based framework under which companies should:
- define the context of use for each AI model,
- identify and perform the right credibility assessment activities to show that the model's outputs are reliable, and
- engage early with the FDA to discuss plans for using AI in drug or biologic development.
This approach ensures that AI-generated data meets regulatory expectations and remains transparent, traceable, and reproducible. These values are central to regulatory science and help maintain public trust in product evaluation.
A Step Toward Responsible AI Regulation
The FDA developed this guidance with input from multiple centers, including CDER, CBER, and the Oncology Center of Excellence, along with experts from academia, industry, and technology. The draft reflects feedback from more than 800 public comments and expert discussions hosted by the Duke-Margolis Institute for Health Policy.
By adopting a single, clear framework, the FDA aims to create consistency and accountability in how AI is used for medical product evaluation. This shows the agency’s ongoing commitment to ethical innovation and patient safety.
What It Means for Regulatory Affairs Professionals
For professionals in pharmaceutical regulatory affairs, this guidance is an important milestone. It highlights the need to understand AI-based evidence, data validation, and risk management in regulatory submissions.
The message is simple: AI will continue to shape how we design trials, review data, and communicate with health authorities. Regulatory specialists must now learn to interpret not only clinical and quality data but also the AI-generated evidence that supports those findings.
The FDA’s framework represents more than a policy update. It is a shift toward smarter, evidence-based regulation, where credibility and transparency guide every stage of drug development.