Every question we get asked. Organised by who is asking.

The FAQ is the workhorse of this site. It exists so that procurement, IT, quality, regulatory, L&D and the executive sponsor can each find their answers without reading everything else - and so decision-makers and participants can use it as an internal reference when they are explaining the programme to others.

01

Pick the section that matches your role.

02

Each answer is self-contained.

03

If a question you need answered is missing, the contact in the footer will get you to a real person.

§ 1 · Executive sponsors

The leadership-level questions. What this programme is, what it produces, what it costs in time and capacity, what it returns.

What does this programme actually produce?

Three things, in order. Trained individuals who can use AI confidently and defensibly inside regulated biopharma work. Embedded organisational practice - policy, governance, measurement and a community that keeps the work going after we leave. Measurable change in the work itself, tracked across four dimensions (Activity, Quality, Value, Risk) agreed before the engagement starts.

How long does it take to see results?

Activity changes show up in weeks. Quality changes - work being done faster, with fewer rework cycles - typically show up by the end of Sprint 1, six to ten weeks in. Value changes that hold up in a board paper take longer, usually by the close of Sprint 2. Embedded organisational change is a multi-quarter outcome. The 175-cohort case study referenced on the Outcomes page produced what client leadership described as 'three years of change in a quarter' within a single Embed engagement.

What's the investment?

The commercial framing is modular. A single-sprint engagement is the typical entry point, with subsequent sprints commissioned at the close of each one based on outcomes evidence. Specific investment depends on cohort size, sprint scope and the level of bespoke worked-example development required. Detailed pricing comes through a proposal - the contact in the footer is the path.

What's the ROI?

Across every Embed engagement Brightbeam has delivered, year-one return on investment has been at least 3x. The average sits above 25x. One programme reached 88x. The 175-cohort case study returned €11.15M against a €250,000 programme cost in year one alone. Detail on the Outcomes page.
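As a sanity check on the headline figures, the case study multiple can be derived with a simple return-over-cost calculation. This is an illustrative sketch only - the simple formula is an assumption about how the multiple is computed, not Brightbeam's stated methodology:

```python
# Hypothetical check of the ROI multiples quoted above.
# Figures come from the 175-cohort case study on this page;
# the return/cost formula is an assumed simplification.

def roi_multiple(year_one_return_eur: float, programme_cost_eur: float) -> float:
    """Year-one ROI expressed as a multiple of programme cost."""
    return year_one_return_eur / programme_cost_eur

# EUR 11.15M returned against a EUR 250K programme cost.
multiple = roi_multiple(11_150_000, 250_000)
print(round(multiple, 1))  # 44.6 - between the 25x average and the 88x maximum
```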

Who else has done this?

Brightbeam has worked with organisations across biopharma, medtech, advanced manufacturing and financial services. Named clients are listed on the About Brightbeam page; named-client biopharma conversations and references are arranged through the contact in the footer.

What's distinctive about Brightbeam vs alternatives?

Three things. Sector specificity - we build worked examples from your own biopharma context (pharmacovigilance cases, deviation responses, batch records, CSR sections, regulatory submission drafts), not generic cases. Methodology heritage - the curriculum is built on Cognitive Task Analysis (CTA) and operationalised into a delivery model (Plan / Educate / Facilitate) refined across hundreds of cohorts. Embedded posture - we are not building a course you consume and forget. We are building organisational capability designed to compound after we leave.

What are the risks of doing this?

Two main ones. The first is delivery risk - that the programme runs but does not produce measurable change. We mitigate this with the four-dimension measurement framework agreed at the leadership workshop, a baseline survey at the start and comparative impact reporting at the close of each sprint. The second is opportunity cost - your people invest time in this rather than other work. We mitigate this by using their actual work as the worked examples, so the time spent in the programme produces real outputs they would have had to produce anyway.

What are the risks of not doing this?

Three main ones. Regulatory exposure - the EU AI Act Article 4 literacy obligation is in force and most organisations cannot yet defend their position on it. Productivity decay - competitors moving faster on AI accumulate compounding advantage. Shadow IT - without sanctioned AI use, employees use personal accounts on personal devices to do work that organisational policy forbids, producing a worse compliance posture than no AI use at all.

How do you measure success?

Activity, Quality, Value, Risk. Agreed at the leadership workshop. Tracked through delivery. Reported in a comparative impact report at the close of each sprint and at programme completion. The four-dimension framework is described in detail on the How We Deliver page.

How does this fit with our existing AI initiatives?

Embed is designed to complement build work, not duplicate it. If you are building AI use cases internally or with another partner, Embed teaches your people the language, mental models and craft they need to commission, evaluate and operate those use cases. The two work in concert - Embed makes build projects faster and cheaper because staff already understand what AI can and cannot do. The build-over-buy economics that biopharma faces directly (most third-party regulated AI tools are immature, expensive and tooled around indications you do not have) make trained internal capability the higher-leverage investment.

What's the time commitment for our people?

A typical sprint involves around four to six hours per participant per week across the sprint duration - two to three hours of live session, plus homework time tied to their real work. The capstone project at the close of each sprint is typically a half-day to a full day of effort spread across two to four weeks. We design around participant workload deliberately and adjust the cadence to fit operational reality.

What level of leadership engagement is needed?

Real but bounded. The leadership workshop at the start (half a day to a day). Active sponsorship throughout, with brief check-ins between sprints. Visible attendance at capstone presentations. Endorsement of any policy or governance changes the programme produces. The programme does not run itself, but it does not require constant leadership attention either.

Can we start small?

Yes. Most engagements begin with Sprint 1 and a single cohort. The decision to commit to subsequent sprints is taken based on Sprint 1 outcomes. The Plan phase produces an honest assessment of fit before the first session runs.

What happens after the formal programme ends?

The internal community of practice continues. Updated curriculum materials remain accessible. Coaching calls extend by arrangement. Most importantly, the cascading discipline taught in Module 3 means trained champions continue developing capability inside the organisation after our formal involvement ends. The flywheel effect typically takes hold in months 6-12 - the trained cohort builds new initiatives, those initiatives release further capacity and that capacity gets reinvested.

What should we plan for as the flywheel takes hold?

When the flywheel takes hold, your highest-AI-leveraged people become disproportionately valuable in role. We work with HR and leadership during Sprint 4 to make sure reward and promotion structures recognise that - so AI-accelerated performers are advanced rather than trapped in role as 'too valuable to promote'. The trap is real and the solution is structural.

Will this satisfy our regulators?

The curriculum is designed to help your organisation meet its regulatory obligations - particularly the EU AI Act Article 4 literacy obligation, the ICH Q10 expectation that staff are trained appropriate to their role, and the AI-specific disciplines now appearing in FDA guidance and enforcement. We do not claim the programme constitutes regulatory compliance in itself. Meeting your regulatory obligations remains your organisation's responsibility. The Brightbeam Certificate of Completion documents the competencies acquired and is widely accepted as evidence of AI literacy training. See About Brightbeam for our regulatory posture in detail.

How do we know it's working?

The comparative impact report at the close of each sprint is the answer. Baseline established at the start of the programme, change tracked across the four dimensions during delivery, results reported transparently at the end. If something is not working, the report says so - we treat measurement as a discipline, not as marketing.

§ 2 · Champions & transformation leads

The internal-selling questions. How to build the case, run a pilot, keep momentum, make this stick.

How do I build the internal case for this?

Start with a real workflow problem your organisation already cares about - pharmacovigilance backlog, deviation cycle time, regulatory drafting load, audit-finding rework, CSR or PSUR turnaround. Frame the case around that workflow and the four-dimension measurement framework (Activity, Quality, Value, Risk) the programme will produce evidence against. Brightbeam can supply a leadership briefing pack to support this - request through the contact in the footer.

What proof points can I show my exec team?

The full curriculum on this site is one. The 175-cohort Outcomes case study (€11.15M return on €250K) is another. Brightbeam's track record across regulated industries (About Brightbeam) is another. Anonymised case studies in the Resources section as they land. Named-client references arranged through the contact. The Why Embed, Why Now page is designed specifically to be forwarded to a sceptical exec.

How do I run a pilot?

Most engagements start with a single Sprint 1 cohort - that is the pilot. The Plan phase before Sprint 1 includes a leadership workshop, baseline survey and function-by-function deep-dives that produce an honest assessment of fit before the first session runs. The decision to commit to Sprint 2 is taken at the close of Sprint 1 based on outcomes evidence. There is no all-or-nothing commitment up front.

How do I get buy-in from quality and regulatory?

Use the Quality and Regulatory section of this FAQ. It is designed to answer the questions they will ask. The curriculum's positioning on the PQS, audit posture, regulatory monitoring, PHI/PII/CSR data handling, ICH Q10 alignment, GAMP 5 and Annex 11 implications and Article 4 literacy obligation typically answers most concerns. Where they have a specific question we have not addressed, surface it through the contact and we will respond.

How do I get buy-in from IT?

Use the IT and Information Security section of this FAQ. Most IT concerns centre on tooling, data handling and integration - all of which the programme is designed to operate inside the organisation's existing approved environment. We do not require new tool deployments to deliver. We work primarily inside Databricks, Snowflake, Power Automate and the major AI platforms (Claude, Copilot, ChatGPT, Gemini) at whatever Global Tech approval level the organisation has.

How do I get buy-in from procurement and legal?

Use the Procurement and Legal section of this FAQ. The standard data processing agreement, security questionnaire pre-answers, IP terms and sub-processor disclosure are documented. Cross-border data transfer is handled at Plan stage. Most procurement teams have settled their position on Brightbeam within two working days.

Who should be in the first cohort?

Mix across functions and seniority. The Embed approach is designed for mixed cohorts and the outcomes are stronger when the regulatory affairs lead learns alongside the quality engineer, when the medical affairs manager sits next to the manufacturing scientist, when the pharmacovigilance officer learns alongside the clinical operations colleague. Avoid a single-function or all-senior cohort - both produce weaker results than a mixed one.

What support do I get from Brightbeam during cascading?

The community of practice. Updated curriculum materials. Coaching calls between sprints to discuss internal cascading challenges. Champion-specific briefings when the curriculum updates materially. The Module 3 content is specifically designed to teach the cascading discipline, so you are not on your own.

How do I measure impact in a way leadership will accept?

The four-dimension framework. Agreed at the leadership workshop, baselined at programme start, tracked through delivery, reported in the comparative impact report at the close of each sprint. The framework is designed to produce numbers that hold up in a board paper without overclaiming.

What if the first sprint underperforms?

The comparative impact report says so. We do not gloss over weak results. The conversation between Sprint 1 and Sprint 2 is the moment to decide whether to continue, what to change, or whether the engagement is the right fit. Brightbeam treats that conversation seriously - extending an engagement that is not producing value benefits no-one.

How do I sequence sprints?

The default sequence is Core Skills → Vertical Skills → Strategic Change → Embedding. Most organisations work through that sequence over twelve to eighteen months, though pace varies. Some organisations stop after Sprint 2 and continue independently. Some go to Sprint 4. The decision is taken sprint by sprint.

Can I customise the curriculum to our specific workflows?

Yes - that is the Plan phase's job. The sub-module structure stays consistent across engagements, but worked examples, reference material and depth of coverage are tailored to your organisation. Sprint 2 in particular is designed around your specific role-by-role workflows - regulatory affairs, quality, clinical operations, pharmacovigilance, manufacturing, R&D, medical affairs, HEOR, supply chain and the shared-services functions each get their own depth.

What is not in the curriculum yet?

The current curriculum is built around regulatory, quality, clinical operations, pharmacovigilance, medical affairs and the supporting GBS functions. Some sector-specific areas are deliberately not yet baked in - aseptic processing and sterile operations, biologics and advanced therapies (the ICH Q5 series), CMC and lifecycle quality (ICH Q8/Q11/Q12/Q13 in their CMC application), serialisation and supply integrity, promotional review, market access and HEOR, real-world evidence and data strategy. We treat each of these as a candidate extension. If any of them is material to your organisation's evaluation, the Plan phase can include a scoped extension - flag it through the contact in the footer and we will scope it before the engagement begins.

How do I keep momentum between sprints?

The community of practice. Coaching calls. Capstone presentations. Visible champion-led activity inside the organisation between formal sprint dates. Most successful programmes have a between-sprint rhythm - even a short monthly internal session keeps the work alive.

What if our regulatory environment changes mid-programme?

The curriculum is monitored continuously and refreshed within thirty days of any material change to the EU AI Act, ICH Q-series, GAMP 5, EU Annex 11, FDA AI/ML guidance, FDA enforcement direction, MHRA software guidance and EMA reflection papers. Champions and active participants receive a briefing note when material updates are made.

How do I report on this to the board?

The comparative impact report is designed for it. Use the four-dimension framework as the structure. Pair it with one or two participant stories that bring the data to life. We can produce a board-ready summary at the close of each sprint as part of the engagement.

§ 3 · L&D and talent

Where this fits inside the existing learning architecture. Cohort design. Cascading. Training records.

How does this fit into our existing learning architecture?

The programme is designed to integrate with your training matrix and competency requirements rather than sit alongside them as a parallel track. The Brightbeam Certificate of Completion is documented in a way that allows the participant's organisation to credit it against existing AI literacy, GxP or regulatory training requirements. We work with your L&D team during the Plan phase to map this in advance.

Will it count toward GxP or regulatory training records?

The certificate documents the competencies acquired. Whether it counts toward specific GxP, regulatory or competency records is a decision your quality and L&D teams make - different organisations integrate it differently. We provide the documentation; you decide how to apply it inside your training matrix.

What credit or certification do participants receive?

A Brightbeam Certificate of Completion documenting the sprints completed, skills covered and the biopharma-specific context in which they were delivered. The certificate is not a regulatory qualification and does not substitute for any formal accreditation your PQS requires. Within those bounds, it is widely accepted as evidence of AI literacy training under the EU AI Act Article 4 obligation.

How do you cascade beyond the formal cohort?

Module 3 teaches the cascading discipline directly. Champions and trained participants run internal sessions, demonstrations and pair learning with colleagues using the materials and patterns the programme provides. The community of practice supports them with updated content. Most organisations see significant secondary capability built in the six to twelve months after the formal programme ends - the flywheel effect described on the Outcomes page.

How do you handle different starting maturities in the same cohort?

The baseline survey in the Plan phase identifies the spread. Mixed cohorts handle the variance better than single-maturity cohorts because more advanced participants help less advanced ones learn faster, and less advanced participants ask the foundational questions that benefit everyone. Where the spread is genuinely too wide, we can split the cohort or run parallel cohorts at different paces.

What's the minimum viable cohort size?

Around eight participants is the lower bound for the cohort dynamics to work. Below that, the mixed-perspective benefit drops off and the programme economics tighten.

What's the maximum?

Around twenty-five participants per cohort is the upper bound. Beyond that, the live-session dynamics suffer and the small-group coaching becomes difficult to deliver well. Larger organisational rollouts typically run multiple cohorts in parallel rather than oversize a single one.

Can we co-deliver with our internal L&D?

Yes - this is increasingly common. Internal L&D involvement strengthens the cascading dimension because the learning function is already aligned with what is being delivered. The Plan phase is where co-delivery arrangements are agreed.

How do you handle remote vs in-person?

Both work. Remote delivery via Teams or equivalent is the default for most engagements. In-person delivery is offered where the cohort is geographically clustered and the value of in-room work justifies the cost. Hybrid is workable but requires more facilitator effort to maintain cohort dynamics.

What about participants who join part-way through?

We discourage it for cohort dynamics reasons but accommodate where unavoidable. Late joiners get accelerated pre-work covering the sessions they missed and a one-to-one onboarding call. Joining after the third session of a sprint usually means deferring to the next cohort.

How do you assess learning?

Through the homework reviewed by facilitators, the capstone project at the close of each sprint, and the comparative impact reporting that tracks Activity and Quality changes in the work itself. We do not run formal exams - the assessment is the work the participant produces during the programme.

What happens to participants who don't keep up?

Coaching support increases. Where the issue is workload rather than capability, we can adjust the participant's homework expectations. Where the issue is capability or engagement, the conversation goes back through the L&D and champion structure - Brightbeam does not unilaterally make participation decisions.

How do we handle the 'too valuable to promote' trap as the flywheel takes hold?

This question is increasingly raised by L&D and HR partners as engagements progress. When AI fluency makes a participant disproportionately productive in role, the standard reward and promotion structure can punish them - they become 'too valuable to promote'. We work with HR during Sprint 4 to redesign the relevant reward and progression criteria so that AI-accelerated performers are advanced rather than penalised. The trap is structural; the fix is structural.

§ 4 · Quality & regulatory

The compliance, audit and regulatory posture questions. The largest section in the biopharma FAQ for good reason.

How do you handle PHI, PII and CSR data?

Default position: PHI, PII and commercially confidential clinical study data do not enter AI tools used in delivery. Worked examples use anonymised or synthetic data drawn from sector-appropriate sources. Where a specific worked example would benefit from real data, the data handling decision goes through the Plan phase and requires explicit validation that the AI environment is approved for the data class involved.

Also under IT & InfoSec · Champions · Procurement & legal

How do you handle controlled documents?

Controlled documents stay inside your eDMS. Where the programme works with controlled-document patterns (SOPs, work instructions, batch records, deviation reports, validation protocols), it works with derivatives or anonymised versions, not the controlled originals. The boundary between the AI workspace and the controlled record is taught explicitly in Modules 2 and 3.

How does this work alongside our PQS?

The curriculum is designed to fit AI use into the existing Pharmaceutical Quality System (per ICH Q10) rather than to create a parallel structure. AP1 in Module 2 covers this directly - change control, document control, training records, supplier qualification, periodic review, deviation handling, CAPA. The aim is that AI use becomes a normal part of the PQS rather than a special exception.

What's the audit posture?

The curriculum teaches inspection-ready evidence as a discipline. For any AI activity that touches a regulated record or decision, participants learn to maintain evidence of tool, user, input, output, review and decision. The curriculum itself is designed to survive scrutiny - participants who work through it can defend what they did, why and under what controls.

How do you stay current with regulation?

Monitored continuously across the EU AI Act, ICH Q-series (Q2/Q3/Q7/Q9/Q10/Q11/Q12), GAMP 5, EU Annex 11, ICH E6 GCP, ICH E2E pharmacovigilance, FDA AI/ML guidance, FDA enforcement direction, MHRA software guidance and EMA reflection papers. The regulatory context inside the curriculum is refreshed within thirty days of any material change. Active participants and champions receive briefing notes when updates land.

Will this satisfy regulatory inspection?

The curriculum is designed to support an organisation's ability to demonstrate AI governance to FDA, MHRA, EMA, HPRA or any equivalent regulator. We do not claim it constitutes regulatory acceptance in itself. The regulator assesses your PQS and your specific AI use, not Brightbeam's curriculum. What the curriculum produces is documented, defensible practice that holds up in that conversation.

How do you handle the EU AI Act Article 4 literacy obligation?

The programme is designed to produce documented evidence of AI literacy appropriate to participants' roles. The Brightbeam Certificate of Completion records the competencies acquired. Whether this constitutes Article 4 compliance for your specific organisation is a decision your regulatory and legal teams take based on your AI deployment posture - but the programme is designed to provide the evidence basis for that decision.

How does this interact with ICH Q9 (Quality Risk Management)?

Q9 risk-based thinking is taught throughout AP1 and applied to AI specifically. Participants learn to assess AI use cases for risk to product quality and patient safety, document the risk assessment in formats compatible with the existing PQS, and apply controls proportionate to the risk identified.

How does this interact with ICH Q10 (Pharmaceutical Quality System)?

Q10 is the foundational structure the curriculum expects AI use to live inside. AP1 covers integration of AI use into management responsibility, knowledge management, change control, CAPA, deviation handling, supplier qualification and periodic management review. The aim is that AI use becomes a normal element of Q10 governance rather than a special workstream alongside it.

How does this interact with ICH Q12 (Lifecycle Management)?

Q12 expectations around established conditions, post-approval change management protocols and product lifecycle knowledge are addressed where AI use touches them - particularly in regulatory affairs work where lifecycle change documentation must be defensible.

How does this interact with GAMP 5?

AP1 covers GAMP 5 principles directly - particularly the Category 1-5 software classification logic and how it maps to AI tool selection and validation expectations. Participants learn to apply GAMP-derived risk-based thinking to AI tool qualification and the specific challenges AI presents (non-deterministic outputs, prompt-as-input, model version drift).

How does this interact with EU Annex 11?

Annex 11 expectations around computerised systems - validation, security, audit trails, electronic signatures, change control, business continuity - are covered in AP1 with specific application to AI tools. Participants learn to apply Annex 11 thinking to AI workflows even where the underlying tools are not classically validated systems.

How does this interact with 21 CFR Part 11?

AP1 covers Part 11 expectations directly - attribution, dated entries, reason for change, integrity, limited access. Participants learn to apply these expectations to AI-touched records where US FDA jurisdiction applies. The curriculum is designed to produce practice that holds up under Part 11 scrutiny.

How does this interact with FDA AI/ML guidance?

The current FDA AI/ML guidance landscape (the predetermined change control plan thinking, the lifecycle management framework and the wider direction of FDA expectation on AI in regulated decisions) is woven into the curriculum at multiple points. F1 sets the regulatory frame, AP1 operationalises it and Module 3 addresses governance.

Are AI tools you use validated?

The tools used in delivery are the tools your organisation has already approved or is prepared to approve. We do not bring validated AI tools - we use yours. Where the engagement requires a specific tool that is not yet approved, the validation conversation goes through your existing IT and quality processes, not around them.

What happens if there's an AI-related incident?

The curriculum teaches incident handling directly in Module 3 - escalation paths for incidents like data leak, hallucinated output in a regulated record or unauthorised tool use, with the recovery procedures that follow. During delivery, any incident involving Brightbeam material or facilitation is documented and managed through the engagement's contractual incident-handling provisions.

How do you handle GxP boundaries?

AP1 covers this directly. Participants learn to recognise where AI use crosses GxP boundaries (GMP, GLP, GCP, GDP, GVP) and apply the specific controls each Good Practice regime requires. The curriculum treats GxP as foundational, not as an afterthought.

What about clinical trial data specifically?

Clinical trial data is treated as one of the highest-sensitivity data classes in the curriculum. AP1 covers GCP-specific considerations, AP3 covers knowledge management implications and AP6 covers analysis-and-visualisation considerations. Default position: blinded data stays blinded, unblinded data does not enter ungoverned AI workflows, CSR sections are drafted with explicit traceability back to the source dataset.

What about pharmacovigilance work?

Pharmacovigilance has some of the heaviest worked-example coverage in the biopharma curriculum. AP1 covers the regulatory frame around AI in PV (the EMA Good Pharmacovigilance Practices position, signal detection oversight expectations), and AP2-AP5 cover the case management, knowledge curation and analysis applications. The default position is that AI assists the qualified person, not replaces them, and the audit trail makes that distinction visible.

What about regulatory drafting?

Regulatory drafting is one of the highest-leverage worked-example areas. The curriculum teaches participants to draft submission sections, response letters and change requests with AI assistance whilst preserving full traceability to source - so the resulting document holds up under regulatory scrutiny and the organisation's own QA review.

§ 5 · IT & information security

Tooling, data handling, security posture. Integration with existing systems.

What AI tools do you use?

The tools your organisation has already approved or is prepared to approve. The curriculum is designed to be platform-agnostic - RBSF, EG, the agentic patterns and the compliance discipline transfer across Claude, Microsoft Copilot, ChatGPT, Gemini and other major harnesses. We do not require new tool deployments to deliver. In biopharma engagements specifically, primary tooling tends to be Databricks, Snowflake and Power Automate alongside the major AI platforms - reflecting what Global Tech has typically greenlit.

What about Zapier, Make and n8n?

Conditional on Global Tech approval. Where the organisation has approved them at enterprise tier, they are taught alongside the primary tools. Where they have not, the curriculum focuses on the approved alternatives. We do not push tools that have not cleared your security review.

What data goes where?

Default position: client data stays inside client environments. Worked examples use anonymised or synthetic data unless a specific exception is approved through the Plan phase. Where Brightbeam needs access to client material for worked-example preparation, it is governed by the engagement's data processing agreement.

Do you require new tool deployments?

No - the default is to work with the AI tools your organisation already has. Where a new tool is genuinely required for an engagement (rare), the deployment goes through your existing IT approval process, not around it.

What's your security posture?

Documented in detail in the security and data handling pack available through the contact in the footer. Summary: data minimisation, no PHI/PII/CSR data by default, anonymised worked examples, controlled-environment handling for client-confidential material, GDPR and equivalent compliance for cross-border data, ISO 27001 alignment.

Also under Procurement & legal

How do you handle multi-market Global Tech review?

Multi-market Global Tech review is the dominant compliance constraint biopharma engagements operate under. The Plan phase explicitly maps each tool, data class and use case against the Global Tech review status in each jurisdiction the engagement spans. Where a tool is approved in some markets but not others, the curriculum is calibrated accordingly and the limitation is documented.

How do you handle cross-border data transfer?

GDPR adequacy, Schrems II and equivalent considerations are assessed at the Plan stage based on the cohort's geography and the tools involved. Where the engagement spans jurisdictions, tool selection and data handling are adjusted accordingly. The decisions made are documented in the engagement's data handling record.

What's your DPA position?

A standard data processing agreement template is available through the contact in the footer. We are also able to negotiate against the client's preferred DPA. Most engagements settle the DPA position before the Plan phase begins.

How do you handle integration with our existing systems?

The curriculum is designed to teach participants to use AI alongside existing systems (eDMS, PLM, LIMS, ERP, RIM, CTMS, EDC, safety database, complaint management) rather than to integrate AI into them. Where an integration is required for a specific worked example, it goes through the client's existing IT integration process.

What about our information security policies?

The programme operates inside the client's information security policies. Where client policy is silent on AI use, the engagement helps surface and address those gaps as part of Module 3. We do not work around information security policy - we work with it.

What if a participant breaches AI policy during training?

Standard incident handling applies. The breach is documented, escalated through the engagement's incident-handling provisions and addressed through the client's existing disciplinary or compliance structures. Brightbeam does not handle the consequences of policy breach - that is the client's responsibility - but we do report and document.

Do you use our enterprise tenants or your own?

Where an enterprise tenant is available and approved for the engagement, we use it. Where one is not, the Plan phase makes a documented decision about how worked examples will be handled. Default is the client's environment.

What logging and monitoring do you put in place?

For Brightbeam-facilitated activity, standard usage logging runs through whichever tools the engagement uses. Where the client has additional monitoring requirements, they apply. The engagement's logging and monitoring posture is agreed at Plan stage.

What's your incident response process?

Documented in the security and data handling pack. Summary: incidents are notified to the client within agreed timeframes, root cause analysis is run jointly, corrective actions are documented, and any pattern is reflected in updated curriculum or process.

Do you have ISO 27001 certification?

Brightbeam aligns delivery to ISO 27001 controls and operates inside ISO 27001-compliant environments where required. Specific certification status as of the engagement start is documented in the security and data handling pack.

↑ Back to filter
§ 7 · Participants

What being on the programme actually involves.

What do I have to do?

Attend the live sessions, complete the homework tied to your real work, contribute to the cohort discussion and produce a capstone project at the close of each sprint. The work is real work - most of what you produce is something you would have had to produce anyway.

How much time will it take?

Around four to six hours per week across each sprint - two to three hours of live session, plus homework time. The capstone at the close of each sprint is typically a half-day to a full day spread across the final two to four weeks.

What tools will I learn?

The major AI platforms - Claude, Microsoft Copilot, ChatGPT, Gemini and the harnesses around them. Plus the adjacent data and automation stack common in biopharma work (Databricks, Snowflake, Power Automate). The specific tools used depend on what your organisation has approved. The curriculum teaches the underlying patterns (RBSF, EG, agentic discipline) that transfer across all of them, not the specifics of any single tool.

Will I need to download anything?

Usually not - the tools used are typically web-based or already installed inside your organisation. Where an installation is required for a specific session, you will be told in advance and supported through it.

What if I'm new to AI?

The Foundations module is designed for exactly this. The mixed-cohort approach means you will be alongside colleagues at different starting points, including some at your level. The pre-work will get you to a confident starting position before session one.

What if I'm already advanced?

The Plan-phase baseline survey identifies your starting point. The mixed-cohort approach gives you a role helping less experienced colleagues, which deepens your own understanding. Where you are genuinely beyond the Sprint 1 content, we can accelerate or pull you forward into Sprint 2 content earlier.

What happens if I miss a session?

Session recordings (where the cohort consents) are available afterwards. Catch-up homework is provided. Missing one session is normal and recoverable. Missing several signals a workload conversation that is best had with your manager and the champion.

What support do I get during?

Live facilitator support during sessions. Small-group coaching between sessions. Asynchronous question support through the cohort's working channel. The community of practice for cross-cohort questions.

What support do I get after?

Continued access to the community of practice. Updated curriculum materials as they refresh. Coaching call availability for sustained challenges. The cascading materials in Module 3 to support you taking the work into the rest of your team.

Will this help my career?

Indirectly. The programme is designed to build organisational capability, not personal certifications. That said, participants regularly report that the AI fluency they develop becomes a meaningful career asset in regulated industries where AI literacy is increasingly expected. The Brightbeam Certificate documents what you have learned.

Do I get a certificate?

Yes - a Brightbeam Certificate of Completion documenting the sprints completed, skills covered and the biopharma-specific context. See How We Deliver for the certificate's positioning.

What about confidential work - how is it handled?

The default position is that PHI, PII and CSR data do not enter AI tools used in the programme. Confidential work stays inside the controls your organisation already operates. Where a worked example needs real data, the handling is agreed at Plan stage with explicit validation that the environment is appropriate for the data class involved.

See also Quality & regulatory

Will using AI risk my regulatory standing?

The whole curriculum is built to answer this question, and the answer is no - not when AI is used the way the curriculum teaches. Participants finish the programme with a clear framework for when AI use is appropriate, what controls apply, what the audit trail looks like and where the boundaries are.

Will AI replace my role?

No. The programme is designed to make you more effective in your role, not to remove your role. The pattern Brightbeam sees consistently is that AI fluency makes high-judgement work faster and lets practitioners spend more time on the parts of the role that require human expertise. The flywheel effect described on the Outcomes page is about role enhancement, not role replacement.

↑ Back to filter
§ 8 · Commercial & contracting

The cross-cutting commercial questions. Pricing. Engagement models. Renewal.

How is the programme priced?

Modular. Each sprint is commissioned at a fixed scope and fee based on cohort size, depth and bespoke worked-example development. Specific pricing comes through proposal.

What's the typical engagement size?

Most engagements begin with one cohort of twelve to twenty-five participants in Sprint 1. Multi-cohort, multi-sprint engagements are common in biopharma. Larger organisations frequently run multiple cohorts in parallel across regions or functions.

Can we run multiple cohorts in parallel?

Yes - this is the typical pattern for larger biopharma organisations. Parallel cohorts allow horizontal coverage of different functions or geographies inside the same sprint window.

What's the minimum commitment?

A single sprint. Most engagements start with Sprint 1 only. Sprint 2 and beyond are commissioned independently based on outcomes evidence.

How is payment structured?

Standard staged payment against milestone delivery - typically Plan completion, mid-sprint, sprint close. Specific terms are in the MSA.

Can we extend or modify mid-programme?

Yes, by mutual agreement. Mid-sprint scope changes are uncommon but possible where the engagement reveals a need that was not visible at the start.

What if we need to pause?

Pauses between sprints are normal and accommodated without penalty. Mid-sprint pauses are possible but disrupt cohort dynamics - we will discuss alternatives where requested.

Are there volume discounts?

Yes - pricing scales with engagement size in the way you would expect. The specific commercial framing comes through proposal.

How do you handle multi-year engagements?

Multi-year engagements are typically structured as a master agreement with annual or sprint-by-sprint commercial commitments. This protects both parties from over-commitment. Multi-year is increasingly common in biopharma where the four-sprint sequence naturally spans twelve to eighteen months.

What's included and what's extra?

The sprint fee includes all live delivery, coaching, materials, the digital resource pack and the comparative impact report. Bespoke worked-example development against your specific content is included up to a typical scope; significant content development beyond that is sometimes priced separately. Specific scope is detailed in each SOW.

↑ Back to filter
Closing

If your question is not answered here, the contact in the footer is a real person who reads their email. We would rather hear from you than have you guess.

If there is a question we should have answered, tell us. The FAQ is maintained based on what actually comes up in sales conversations.

Ask James Harte