AI and Ethics Series: One Size Doesn’t Fit All - But the NHS Needs One Risk Model for AI


Picture this:

Two AI systems are about to be deployed in the NHS.

  • One is an internal chatbot answering staff FAQs about HR policies.
  • The other analyses diagnostic images to flag possible cancer cases for urgent review.

Should both face the same level of ethical review?

If we treated them identically, we’d either over-burden harmless innovation or under-protect patients from high-stakes risks.

That’s why risk-based governance is the backbone of the AI ethics framework I’ve implemented in my own organisation — a framework I developed using the Alan Turing Institute’s model as its foundation.

Why risk-based governance matters

AI isn’t inherently good or bad — it’s a tool. But the impact of how we use it varies dramatically.

A light-touch HR bot gone wrong might cause inconvenience; a flawed cancer-detection model could cause real harm.

Risk-based governance means matching the level of oversight to the potential consequences.

It keeps innovation moving without compromising safety, fairness, or trust.

In the framework we created locally, the governance levels look like this:

  1. Low risk. Example: an HR policy chatbot, or summarising meeting minutes. Oversight: basic ethics review, transparency about use, minimal bias checks.
  2. Medium risk. Example: population health analytics identifying at-risk groups. Oversight: stakeholder impact assessment, fairness audits, explainability checks.
  3. High risk. Example: AI diagnostics influencing treatment decisions. Oversight: full ethics board review, a human in the loop for all outputs, public transparency reports, independent validation.
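
To make the tiers concrete, here is a minimal Python sketch of how a team might encode them as a review checklist. This is an illustration only, not part of the framework itself or any official NHS artefact; every name in it is hypothetical.

```python
# Illustration only: a hypothetical encoding of the three governance tiers
# and the oversight steps each one triggers, mirroring the list above.
OVERSIGHT_CHECKLIST = {
    "low": [
        "Basic ethics review",
        "Transparency about use",
        "Minimal bias checks",
    ],
    "medium": [
        "Stakeholder impact assessment",
        "Fairness audits",
        "Explainability checks",
    ],
    "high": [
        "Full ethics board review",
        "Human in the loop for all outputs",
        "Public transparency reports",
        "Independent validation",
    ],
}

def oversight_for(tier: str) -> list[str]:
    """Return the review steps a project of the given tier must complete."""
    return OVERSIGHT_CHECKLIST[tier]

print(oversight_for("high"))
```

The point of writing it down like this is that the classification decision and the review steps it triggers stay in lockstep, rather than living in two separate documents that drift apart.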

The six risk factors every project should be judged against

Risk classification isn’t guesswork. Ethics committees should look at these factors:

  • Scale of impact: How many people are affected, and how often?
  • Severity: Could it affect health, liberty, or legal rights?
  • Transparency: Can we explain the AI’s decisions?
  • Bias potential: Could it produce discriminatory or unfair outcomes?
  • Autonomy reduction: Does it limit or override human choice?
  • Population vulnerability: Are vulnerable groups disproportionately affected?

We apply these locally — but the same six could underpin a national approach.
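
To show what that judgement might look like in practice, here is a rough sketch (mine, not the Alan Turing Institute's) of a scoring rubric: each factor gets 0 to 2, and the total maps onto the three tiers above. The scale, the thresholds, and the severity override are illustrative assumptions; a real committee weighs these factors with judgement, not arithmetic alone.

```python
# Illustrative sketch: score each of the six risk factors from 0 (negligible)
# to 2 (serious) and map the total to a governance tier. The scale and the
# thresholds below are assumptions for demonstration, not agreed values.
RISK_FACTORS = [
    "scale_of_impact",           # How many people are affected, and how often?
    "severity",                  # Could it affect health, liberty, or legal rights?
    "transparency",              # Can we explain the AI's decisions?
    "bias_potential",            # Could it produce discriminatory outcomes?
    "autonomy_reduction",        # Does it limit or override human choice?
    "population_vulnerability",  # Are vulnerable groups disproportionately affected?
]

def classify(scores: dict[str, int]) -> str:
    """Map factor scores (0-2 each) to a low / medium / high tier."""
    missing = set(RISK_FACTORS) - set(scores)
    if missing:
        raise ValueError(f"Unscored factors: {missing}")
    total = sum(scores[f] for f in RISK_FACTORS)
    # A 'serious' severity score forces the high tier regardless of the
    # total: a deliberately cautious assumption in this sketch.
    if scores["severity"] == 2 or total >= 8:
        return "high"
    return "medium" if total >= 4 else "low"

# Example: a diagnostic imaging tool lands in the high tier on severity alone.
print(classify({
    "scale_of_impact": 2, "severity": 2, "transparency": 1,
    "bias_potential": 1, "autonomy_reduction": 1,
    "population_vulnerability": 2,
}))  # -> "high"
```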

From local practice to national standard

Here’s where I think the NHS needs to evolve.

At the moment, different Trusts and organisations are creating their own AI governance processes, often from scratch. This means:

  • Patients in one area may get stronger ethical protections than those elsewhere.
  • Developers face inconsistent requirements from Trust to Trust.
  • Teams spend time reinventing something that could be built once and adapted.

Rather than my organisation’s framework becoming the national model wholesale, I believe the best route is to use it as a template — a starting point for creating a national NHS AI ethics framework through multi-stakeholder input.

That means involving:

  • Clinicians
  • Data scientists and AI developers
  • Information governance and cyber security specialists
  • Patient and public representatives
  • Equality, diversity, and inclusion experts
  • Sustainability leads

Only with all these voices can we create something robust, practical, and widely trusted.

Training committees to spot the hidden risks

Risk-based governance only works if the people applying it can spot issues early.

This means upskilling ethics committees to understand:

  • What “model drift” looks like in real-world deployments (see the sketch after this list)
  • How bias can emerge from seemingly neutral datasets
  • The difference between an explainable and a black-box model
  • When a project that sounds low risk (e.g. rostering AI) could have hidden equality impacts
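
On the first point, committee members do not need to build monitoring themselves, but it helps to have seen what a basic drift check looks like. Here is a minimal sketch, assuming we kept a sample of the model's output scores from validation and can pull a recent sample from live use; the data, the test choice, and the threshold are all stand-ins, not a prescribed method.

```python
# Minimal drift-check sketch: compare the model's output score distribution
# at validation time against recent live scores. A small p-value suggests
# the distributions differ, i.e. possible drift worth escalating. The
# synthetic data and the threshold below are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
validation_scores = rng.beta(2.0, 5.0, size=5_000)  # stand-in: held-out scores at sign-off
live_scores = rng.beta(2.6, 5.0, size=5_000)        # stand-in: recent production scores

stat, p_value = ks_2samp(validation_scores, live_scores)
if p_value < 0.01:  # illustrative threshold
    print(f"Possible drift (KS statistic {stat:.3f}, p={p_value:.2e}): escalate for review")
else:
    print("No evidence of drift in output scores")
```

The lesson for a committee is the principle, not the statistics: a model signed off once is not signed off forever, and someone must own the monitoring.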

Without this training, even the most carefully designed framework risks being misapplied — creating false confidence in projects that need deeper scrutiny.

Practical takeaway for NHS teams

If you’re considering an AI project:

  1. Start with a risk classification — use the six factors above.
  2. Be honest about potential harms — don’t underplay them to speed approval.
  3. Document your reasoning — ethics committees need evidence, not hunches.
  4. Build governance in from day one — it’s far harder to retrofit later.

Closing reflection:

Risk-based governance is like triage for AI projects. It ensures urgent ethical risks get priority attention while safe, low-impact innovation can flow quickly. But just like clinical triage, it only works when the criteria are consistent, the assessors are trained, and the process is trusted.

The NHS cannot afford the AI ethics equivalent of PINO ("PRINCE2 in name only") project management: a framework adopted on paper but not applied in practice. If it is not done properly, things will go wrong.

The good news? We already have local examples, like ours. The challenge — and opportunity — is scaling one nationally, with the right expertise around the table, so every patient and clinician benefits equally.

Next week I will start to discuss some of the key concepts of AI ethics.

John Uttley – Innovation Director & SIRO, NHS Midlands and Lancashire

#AIgovernance #AIethics #DigitalHealth #NHSinnovation #RiskManagement #ResponsibleAI #DIU #MLCSU