AI and Ethics Series: Why the NHS Needs a Unified AI Ethics Framework and the Skills to Run It
Friday, 22 August 2025

Imagine this: you’re a patient whose treatment plan has been influenced by an algorithm. The clinician reassures you it’s “AI-assisted,” but no one can clearly explain what that means, how the decision was made, or whether the same result would have happened in another hospital.
This isn’t science fiction — it’s the ethical crossroads we’re already standing at.
AI in healthcare has the potential to speed up diagnoses, free up clinicians’ time, and improve patient outcomes. But it also introduces risks we’ve never faced at this scale — from hidden bias in training data to opaque decision-making that even the developers struggle to explain.
The NHS has always operated on strong ethical foundations: patient confidentiality, informed consent, equity of care, transparency, and accountability. These values don’t change because the technology changes — but the way we uphold them has to.
Why ethics can’t be “do it yourself”
Right now, AI ethics in the NHS can feel like 42 different orchestras playing their own version of the same song. Each Trust or organisation builds its own framework — some comprehensive, some basic, some skipping over critical elements altogether.
This decentralised approach has problems:
- Inconsistency — Patients might receive very different levels of ethical protection depending on where they are treated.
- Gaps in oversight — A missing bias check here, an absent explainability requirement there, can have real-world consequences.
- Duplication of effort — Each organisation is spending time and money reinventing a wheel that could have been built once, tested, and refined nationally.
Instead, we need a national NHS AI ethical framework — one that’s evidence-based, risk-proportionate, and designed to apply consistently across the whole system. Local organisations can still add specific safeguards for their context, but the foundations would be the same everywhere.
A single framework would:
- Reduce duplication and speed up safe adoption of innovation
- Ensure consistent protections for patients and staff
- Make it easier for developers to understand the rules of engagement
- Give the public confidence that no matter where they receive care, the ethical safeguards are the same
It’s not just the framework - it’s the people
Even the best ethical framework will fail if the people tasked with applying it don’t have the right skills.
The committees overseeing AI ethics can’t just be the same governance groups with “AI” added to the agenda. AI ethics involves concepts that are new to many healthcare leaders:
- Algorithmic bias and fairness auditing
- Model explainability and interpretability
- Human-in-the-loop design and oversight
- Data provenance and representativeness
- Sustainability impacts of large-scale computing
- Contestability and redress mechanisms for automated decisions
These aren’t skills we’ve historically needed in the NHS in such depth. They require a mix of expertise — not just clinicians and information governance specialists, but data scientists, cyber security leads, equality and diversity professionals, sustainability experts, and representatives of patients and the public.
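To make one of those concepts concrete, here is a minimal illustrative sketch in Python of the kind of bias metric a fairness audit might compute: the gap in positive-recommendation rates between patient groups, often called the demographic parity difference. The data, group labels, and threshold below are entirely hypothetical, not drawn from any real NHS system.

```python
# Illustrative only: a minimal fairness check comparing how often a model
# recommends an intervention for different patient groups. All data and
# thresholds here are hypothetical.

def positive_rate(outcomes):
    """Share of cases where the model recommended the intervention."""
    return sum(outcomes) / len(outcomes)

# Hypothetical model outputs (1 = intervention recommended) per group
outcomes_by_group = {
    "group_a": [1, 0, 1, 1, 0, 1, 1, 0],
    "group_b": [0, 0, 1, 0, 0, 1, 0, 0],
}

rates = {group: positive_rate(o) for group, o in outcomes_by_group.items()}
disparity = max(rates.values()) - min(rates.values())

print(f"Recommendation rates by group: {rates}")
print(f"Demographic parity difference: {disparity:.2f}")

# A fairness audit would flag a disparity above an agreed tolerance.
# The 0.2 threshold is an arbitrary illustration, not guidance.
if disparity > 0.2:
    print("Flag for review: groups receive recommendations at very different rates.")
```

The arithmetic is trivial; the hard part is the judgement around it: which groups to compare, which metric is appropriate for the clinical context, and what disparity is tolerable. That is exactly the literacy committee members need.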
And crucially: they need training.
It’s not realistic to expect even experienced committee members to confidently assess bias metrics or challenge the design of an explainability feature without some targeted learning. Building AI ethics capability means:
- Structured induction training for new committee members
- Ongoing refresher sessions as technology and regulations evolve
- Access to technical advisors for complex cases
- Peer networks across organisations to share experience and challenges
Without this investment in skills, ethics committees risk becoming “rubber-stamp” bodies — approving projects without truly understanding the implications.
Proportional governance: matching oversight to risk
The NHS AI Ethics Framework I’ve created, based on the Alan Turing Institute’s AI ethics framework, uses a simple but powerful principle: the higher the potential risk, the higher the level of governance required.
- Low risk: light-touch ethical review, transparency, minimal bias checks.
- Medium risk: fairness audits, stakeholder engagement, explainability checks.
- High risk: independent review, mandatory human-in-the-loop, public transparency reports.
This avoids overburdening low-risk projects (like an internal FAQ chatbot) while ensuring robust safeguards for high-stakes systems (like diagnostic AI).
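As a sketch of how a tiering like this could be made operational, here is an illustrative encoding in Python. The tier contents mirror the list above; the data structure, the helper function, and the assumption that higher tiers inherit lower-tier requirements are mine for illustration, not part of any published framework.

```python
# Illustrative sketch: the risk-proportionate governance tiers above,
# encoded as a simple lookup. An example structure, not a specification.

GOVERNANCE_BY_RISK = {
    "low": [
        "light-touch ethical review",
        "transparency statement",
        "minimal bias checks",
    ],
    "medium": [
        "fairness audit",
        "stakeholder engagement",
        "explainability checks",
    ],
    "high": [
        "independent review",
        "mandatory human-in-the-loop",
        "public transparency report",
    ],
}

def required_safeguards(risk_tier: str) -> list[str]:
    """Return the governance steps required for a given risk tier."""
    # Assumption for illustration: higher tiers inherit everything
    # required of the tiers below them.
    order = ["low", "medium", "high"]
    if risk_tier not in order:
        raise ValueError(f"Unknown risk tier: {risk_tier}")
    steps = []
    for tier in order[: order.index(risk_tier) + 1]:
        steps.extend(GOVERNANCE_BY_RISK[tier])
    return steps

# e.g. a diagnostic AI classified as high risk:
for step in required_safeguards("high"):
    print("-", step)
```

The cumulative design, where a high-risk project carries everything the lower tiers require, is one reasonable reading of proportionality; a national framework would pin choices like this down explicitly, so every organisation applies them the same way.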
But here’s the catch: without a national approach, one Trust’s “medium risk” might be another’s “high risk” — meaning a patient’s level of protection depends on their postcode.
The public’s trust is the NHS’s licence to innovate
Public trust is fragile. A single high-profile AI failure could set adoption back years — not because the technology is bad, but because the safeguards weren’t there.
Transparency is critical. That means publishing plain-language summaries of ethical reviews, bias audits, and stakeholder engagement for medium- and high-risk projects: not hidden away in internal reports, but published through open board papers or available under the Freedom of Information Act.
It also means explaining AI decisions in a way people can understand and challenge. Patients should never feel like they’re up against an unaccountable “black box.”
The call to action
If we’re serious about making AI in the NHS safe, fair, and trusted, we can’t rely on hundreds of local versions of “what good looks like.”
We need:
- A national NHS AI Ethics Framework — mandatory, consistent, and proportionate to risk.
- Trained, multi-disciplinary ethics committees — equipped with the technical and ethical literacy to scrutinise AI projects effectively.
- Clear public transparency commitments — so patients and staff can see, understand, and challenge how AI is being used.
Because the alternative isn’t just messy — it’s unsafe. And in healthcare, unsafe isn’t an option.
If trust is the foundation of healthcare, AI ethics is the scaffolding. Right now, we’re building hundreds of different scaffolds, each with its own weaknesses. Isn’t it time we built one strong, national structure that can hold up the weight of innovation?
In next week’s article, I break down the risk-based governance model, giving a practical ‘how-to’ guide for assessing AI risk.
John Uttley – Innovation Director & SIRO, NHS Midlands and Lancashire
#AIethics #NHSinnovation #ResponsibleAI #HealthTech #DataEthics #ExplainableAI #AIgovernance #DIU #MLCSU