AI Regulatory and Policy Landscape
Where AI for life sciences sits today in Ireland and across the EU, UK, US and beyond.
Learning objectives
- Describe the EU AI Act's risk-based classification system (unacceptable, high, limited, minimal) and identify where typical biopharma AI use cases fall.
- Identify Ireland's National Competent Authorities relevant to AI in life sciences (HPRA, DPC, others as applicable) and describe their respective remits.
- Compare the regulatory postures of the EU, the UK's MHRA, the US FDA, Health Canada and Australia's TGA with respect to AI in biopharma.
- Describe the FDA's evolving approach to AI/ML in drug development and biologics – the 2023 discussion paper on AI/ML in drug and biological product development, Model-Informed Drug Development, Good Machine Learning Practice principles and Computer Software Assurance – and what it implies for AI use inside biopharma organisations.
- Position the GAMP 5 framework for computerised system validation – alongside FDA Computer Software Assurance and EU GMP Annex 11 (computerised systems) – against the AI tools and harnesses participants are likely to encounter in GxP contexts.
- Articulate the "risk-reward" posture regulators are taking – encouraging adoption while requiring proportionate controls – and explain why blanket positions (either prohibition or unrestricted use) sit outside that posture.
- Apply the data-protection landscape (GDPR, HIPAA where relevant, ePrivacy) as a governing constraint on every AI use case touching personal or patient data.
- Navigate the broader regulatory lattice the curriculum operates inside:
  - ICH Q2 (analytical method validation), Q3 (impurities), Q7 (API GMP) and Q11 (drug substance development) for CMC and manufacturing-adjacent AI use;
  - ICH E6(R3) GCP for clinical trial work;
  - ICH E2E pharmacovigilance planning for safety work;
  - MHRA software guidance for UK-regulated tools; and
  - the EMA reflection paper on AI/ML in the medicinal-product lifecycle for European-tier submissions.

  Each is referenced where relevant in AP1 and the data and analysis sub-modules; the F1 expectation is that participants know which framework applies to which AI use case before they start work.