AI and Ethics Series: Building a Shared Framework for AI Governance in the NHS: Seven Regions, One Conversation


In my previous article, I discussed the UK’s fragmented regulatory and legal landscape for AI in healthcare, a system that leaves clinicians, Trusts, and especially primary care carrying disproportionate risk when technology fails.

This follow-up proposes an outline of a model that could address this imbalance. It’s a working concept that I’m testing with PPIE (Patient and Public Involvement and Engagement) groups and sharing here to gather wider input from colleagues across clinical, digital, and governance fields.

The goal is to see whether a shared regional approach could strengthen AI assurance, accelerate safe adoption, and reduce the uneven exposure that exists across the NHS.

Why We Need a Shared Approach

AI is already part of day-to-day NHS operations: transcription tools, diagnostic algorithms, triage systems, and automation are everywhere. Yet oversight remains inconsistent.

Large Trusts have procurement teams, information-governance leads, and clinical-safety officers. Most GP practices, pharmacies, and optometrists do not. Expecting every provider to run its own ethics or assurance process is neither realistic nor safe.

If each organisation independently assesses the same product, the NHS wastes resources, delays innovation, fragments expertise, and ends up with inconsistent standards. Governance must scale as adoption scales, or risk will grow faster than capability.

The Concept: Regional AI Ethics Committees (RAIECs)

NHS England has now announced seven new regions with responsibility for innovation and digital transformation. These regions could provide the foundation for a distributed but connected governance model.

Core features

  • Seven Regional AI Ethics Committees (RAIECs) operating under a single national framework.
  • A shared N365 portal for submissions: any NHS organisation can upload an AI proposal for ethics and safety review.
  • Mutual recognition: once one region approves a system (with conditions), that decision applies nationally, avoiding duplicate reviews.
  • A central registry of approved, rejected, or withdrawn systems, searchable across the NHS.
  • Built-in transparency: each decision records rationale, ethical considerations, and post-market monitoring requirements.

Reducing the training burden

Centralising oversight into seven regional groups also lowers the training and competency load:

  • Each region builds a multidisciplinary team once, not hundreds of times over.
  • Expertise consolidates regionally, then cascades down to ICBs and providers through training, templates, and mentoring.
  • The outcome is consistency of judgment with far less duplication of effort.

Supporting Frameworks and Procurement

Beyond ethics review, RAIECs could play a vital role in creating practical frameworks and guiding procurement standards for AI.

Since the UK lacks the statutory protections that European providers gain under the EU AI Act, contractual clarity becomes essential. Regional committees could:

  • develop standard procurement clauses that assign liability and post-market obligations to suppliers,
  • define minimum safety and explainability requirements for tenders,
  • advise procurement teams on assessing vendor claims and technical documentation, and
  • reduce Trusts' and practices' exposure to medical-negligence risk.

By coordinating this at a regional level, we’d help Trusts and smaller providers buy AI systems with consistent safeguards without every organisation reinventing legal and assurance language from scratch.

Linking with the Innovation Passport

The NHS 10-Year Plan proposed an innovation passport to fast-track promising technologies.

The RAIEC structure could make that concept real:

  • An AI system approved by one RAIEC would earn a national “innovation passport.”
  • NHS organisations could verify status and conditions through the N365 portal.
  • Shared evidence would support adoption across England, cutting redundant local sign-off while maintaining oversight.

This would support the speed of adoption the 10-Year Plan calls for whilst ensuring strong governance is in place to protect patients, clinicians, and NHS organisations.

Continuous Oversight: Post-Market Surveillance

Approval is only the beginning. AI systems evolve, and their performance can drift.

An integrated surveillance framework could be structured as follows:

  • MHRA defines post-market obligations and enforces compliance.
  • Regional committees monitor live deployments, each focusing on a product family (e.g. imaging AI, transcription tools).
  • NHS England / DHSC aggregate national data, hold suppliers accountable through contracts, and publish safety updates.
  • Trusts and ICBs oversee local implementation and escalate incidents.
  • CQC inspects whether these duties are being met.

This creates a continuous feedback loop between local use and national learning, a system that spots problems early and acts before harm occurs.

Protecting Primary Care and Smaller Providers

Primary care must be part of the governance story. GP practices, community pharmacies, and optometry services are already using AI tools but rarely have formal assurance capacity.

A regional ethics and procurement framework would let them:

  • check if a tool is already approved and safe to use,
  • understand contractual and liability conditions, and
  • access expert guidance without maintaining in-house specialists.

It levels the playing field and ensures that patient safety isn’t determined by the size of the provider.

Building Accountability and Shared Learning

A regional framework clarifies rather than diffuses responsibility:

  • Developers remain accountable for product quality and updates.
  • Regions provide ethical oversight, training, and monitoring.
  • Trusts and practices ensure safe deployment and human oversight.
  • National regulators (MHRA, NICE, ICO, CQC, HSSIB) retain their statutory roles.

The shared registry becomes a national memory, recording successes, failures, and lessons so the NHS can finally stop repeating the same mistakes.

Why I’m Sharing This

I’ve recently taken on the role of Chair of the Midlands AI Governance Group and also attend the national AI Ambassador network. The same concern keeps coming up: clinicians and managers want to use AI responsibly, but they know our governance structures aren’t yet fit for purpose.

These ideas are just a starting point for collaboration. Over the coming weeks, I'll be exploring them with PPIE groups, colleagues across the NHS and DHSC, and, crucially, the Health Innovation Networks (HINs). Their role in supporting local adoption, evaluation, and spread of innovation could make them vital partners in building the regional ethics and procurement model.

If you see gaps, conflicts, or opportunities to strengthen this concept, please share your thoughts. The NHS needs a practical, national model that supports safe, confident AI adoption across every part of the system.

#AIinHealthcare #NHSInnovation #EthicalAI #AIgovernance #DigitalHealth #CQC #MHRA #DIU #HIN

John Uttley – Innovation Director & SIRO, NHS Midlands and Lancashire