What it actually looks like, week to week.
The substance of the programme is in the Curriculum. The methodology is in Our Approach. This section covers the practical layer in between - what a sprint feels like for a participant, how they are supported across the arc, how the curriculum stays current and how outcomes are measured.
A sprint runs over six to fourteen weeks, depending on the cadence agreed with the client.
Each features live training sessions, homework tied to the participant's real work, small-group coaching and a capstone project.
Pre-work is provided before each sprint, a parking lot runs during sessions, and a digital resource pack and community of practice follow after.
Content is reviewed quarterly against tool changes, regulatory changes and trainer feedback.
Outcomes are measured against four agreed dimensions - Activity, Quality, Value, Risk - from the Plan phase onwards.
A sprint is the unit of delivery. Each one compounds on the last.
Cadence. Most sprints run over six to fourteen weeks. The exact rhythm is agreed with the client - some organisations prefer compressed delivery (two sessions a week), others an extended cadence (one session a fortnight) that lets capability bed in alongside live work. We have run both, and we find one session per week optimal - though operational realities often dictate a different cadence.
Live, facilitated sessions.
Live, facilitated sessions delivered virtually - or onsite if required. Each session covers a specific capability area inside the sprint's scope, follows the three-phase loop - teach the concept, apply it to the participant's own work, reflect and challenge across the cohort - and produces something the participant takes back into their work that week. Sessions are typically two to three hours, with a break.
Applied to real tasks.
Between sessions, participants apply their new skills to real tasks from their daily work - pharmacovigilance case summaries, deviation responses, batch-record reviews, regulatory drafting, study report sections, supplier correspondence. Homework is reviewed by the facilitator and used as material for the next session.
Small-group, interleaved.
Small-group coaching sessions interleaved with the main sessions. Smaller groups, less formal, more depth on individual challenges and shared practice where appropriate. This is where most of the personalised capability development happens.
Three-plus skills chained.
Each sprint closes with a capstone project - a piece of work that requires participants to chain three or more skills from the sprint together, applied to a real problem they own. Capstones are evaluated, and each participant receives an individual report.
Compounds into the next.
This shape is consistent across sprints. The content varies - Sprint 1 is horizontal core skills, Sprint 2 is vertical, role-specific work within workflows, Sprint 3 is strategic redesign, Sprint 4 is full operating-model integration.
Five stages. Consistent shape. The content shifts sprint by sprint; the rhythm doesn't.
Support runs across the programme's full arc - not just when each session is live.
Rather than ending when each session does, support runs in four strands. Pre-work sets the table. A parking lot runs during sessions. Resource packs and a community of practice hold capability in place afterwards. Between-sprint coaching threads the next sprint to the last.
Pre-work that earns its keep.
Participants receive pre-work tailored to their organisation's AI tooling - including the biopharma-specific tooling reality (Databricks, Snowflake and Power Automate as primary platforms; ChatGPT Enterprise, Claude or Copilot depending on what has Global Tech approval) - and the specific use cases being covered.
The pre-work is not theatre. It establishes the baseline mental model and surfaces the specific tools, workflows and constraints each participant brings into the sprint. By session one, every participant arrives with relevant context and a personal task to work on from the start.
The parking lot.
Facilitators operate a standing list - the parking lot - for regulatory or organisation-specific questions that fall outside the session's scope. These are documented and returned to participants with guidance on where to seek authoritative answers within their compliance structures.
Resource pack · community.
Participants receive a digital resource pack containing the worked examples, prompt templates, validation framework references and skills used during the sprint.
They also gain access to the Brightbeam online community of practice - a moderated space where they can share experiences, ask questions and access updated materials.
Follow-up coaching.
Follow-up coaching sessions can be arranged to discuss progress, troubleshoot challenges encountered during internal cascading and tune the next sprint's content to emerging needs. The between-sprint conversation is where the next sprint's worked examples often get refined to reflect what the organisation has actually been doing with the skills from the previous one.
The curriculum is a live document.
AI tools and regulatory frameworks evolve rapidly; a curriculum developed one year and delivered the next risks subtracting value rather than adding it. Brightbeam's maintenance approach has three components and runs on a quarterly cycle.
Tool and feature updates.
Each quarter the programme team reviews updates across the major AI platforms - Claude, Copilot, ChatGPT, Gemini and the harnesses each provides - and adjusts worked examples, demonstrations and skills accordingly.
New capabilities relevant to biopharma workflows are incorporated. Deprecated features are removed. The platform-tier picture (which models are at Global Tech approval level in which jurisdictions) is refreshed as it changes.
Regulatory monitoring.
The EU AI Act implementation timeline, ICH Q-series updates, GAMP and Annex 11 revisions, FDA AI/ML guidance, FDA enforcement direction, MHRA software guidance and EMA reflection papers are monitored continuously.
The regulatory context embedded in the curriculum is refreshed within thirty days of any material change. Version-controlled materials ensure participants and facilitators always work from current content.
Champion communication.
When material updates are made, all internal champions and participants who have completed cascading training receive a briefing note summarising what changed and why - so nobody is teaching outdated content inside the client organisation.
The maintenance discipline is the practical reason the curriculum has stayed defensible across a fast-moving two years. It is built into the cost of running the programme, not added on as a premium.
Four dimensions, equal weight. One number flatters; four tell the truth.
Brightbeam measures programme outcomes across four dimensions, agreed with the client at the leadership workshop and tracked throughout delivery. The four-dimension structure is deliberate - measurement that tracks one flattering number is theatre, not measurement.
Activity.
Who is using AI, how often and with which tools.
Captured through usage logs and participant journals.
Activity on its own does not prove value, but the absence of activity is the earliest signal that adoption is stalling.
Quality.
The measurable impact on the work itself. Rework cycles reduced. Turnaround times improved. Error rates tracked against the baselines established in the Plan phase.
This is where the work itself starts to look different.
Value.
Hours freed. Volume increased. Cost avoided. Tied directly to the KPIs agreed at the leadership workshop.
This is the dimension that gets quoted in board papers and underpins the case for continuing into subsequent sprints.
Risk.
Incidents, near-misses and compliance findings related to AI use. Tracked to ensure adoption is safe as well as productive.
This is the dimension that protects the organisation from the failure modes that emerge when AI is used in regulated work without adequate review.
A comparative impact report is produced at the close of each sprint and at programme completion.
A Brightbeam Certificate of Completion. Documented evidence, not accreditation.
Participants who complete the programme receive a Brightbeam Certificate of Completion documenting the sprints completed, skills covered and the biopharma-specific context in which they were delivered.
These certificates document the competencies acquired - including the regulatory awareness component - and can be used by the participant's organisation as evidence of AI literacy training. This is increasingly relevant given the AI literacy obligations now in force under Article 4 of the EU AI Act, which requires organisations deploying AI systems to ensure staff have sufficient AI literacy appropriate to their role.
The certificate is not a regulatory qualification and is not a substitute for any formal accreditation the client's quality system requires. It is documented evidence of a known curriculum delivered to a known standard.
The substance of the programme is in the Curriculum. The methodology is in Our Approach. The practical detail of how it gets delivered is here. The outcomes detail - what previous cohorts have actually achieved - is in Outcomes.
If you have a specific delivery question that is not answered above, the FAQ has a How We Deliver section that goes deeper, organised by audience.
Talk to James Harte →