AI and Ethics Series: AI in UK Healthcare: The Regulatory and Legal Landscape (and Its Gaps)


If an AI scribe mangles a discharge summary and harm follows, who answers first: the clinician who signed it, the Trust that rolled it out, or the vendor whose model hallucinated?

The EU has now legislated an answer through the AI Act, which classifies medical AI as “high risk” and places clear duties on developers and deployers. The UK, by contrast, has opted for a “pro-innovation, regulator-led” approach. That means responsibility today is spread across a patchwork of laws and regulators, and the cracks are showing.

The UK’s Regulatory Jigsaw

  • MHRA regulates medical devices, including AI software if classified as “software as a medical device.” This covers market approval, safety, and post-market reporting. It works for static devices but is stretched when models continuously adapt.
  • NICE sets evidence standards and runs early value assessments for digital and AI tools. This helps decision-makers judge benefits but does not resolve liability.
  • ICO enforces UK GDPR and data protection rules, ensuring lawful, transparent use of patient data. It does not determine whether an AI system is clinically safe.
  • CQC regulates providers, not products. Its role is to inspect whether services are safe, effective, and well-led. AI will inevitably be drawn into this lens, but the CQC does not regulate AI itself.
  • HSSIB investigates patient safety incidents on a no-fault basis. Its investigations are explicitly about learning, not blame. It can highlight where AI contributed to harm and issue safety recommendations, but it does not determine who is legally responsible.

Each has a role to play, but none owns the whole picture.

Tort Law and Liability

In the absence of AI-specific legislation, liability falls back on long-standing principles of tort law:

  • Clinicians and Trusts: Patients harmed by AI will usually bring negligence claims against the treating clinician or the NHS Trust. The duty of care sits here, and courts ask whether it was breached by relying on an unsafe system or failing to exercise judgment.
  • Suppliers and Manufacturers: These can be pursued under product liability law (the Consumer Protection Act 1987). To succeed, the patient must show the product was defective and caused harm. With opaque, evolving AI models, this is hard to prove.
  • Contracts matter: Procurement contracts typically emphasise service delivery, performance, and data protection. They are less consistent in spelling out liability for harm, model drift, or long-term monitoring obligations. Where liability terms are not explicit, the legal risk defaults to the healthcare provider, leaving Trusts and clinicians exposed.

Put bluntly: without clear contract protections or a national framework, the NHS carries the legal risk when AI fails.

Europe’s Different Path

The EU AI Act changes this equation. It creates:

  • Clear obligations for developers of high-risk AI (technical documentation, risk assessments, monitoring).
  • Defined duties for deployers (training, oversight, human review).
  • National regulators empowered to enforce compliance and fine breaches.

That means if an AI fails in Europe, responsibility does not automatically collapse onto the clinician or the hospital. There’s a chain of obligations that can be traced back to the vendor.

In the UK, without such legislation, the Trust or clinician remains the easy target for claims.

Why This Matters

The UK’s choice of light-touch regulation may help innovation move faster, but it also creates real uncertainty:

  • Clinicians are wary of using tools if they know they carry the liability.
  • Trusts face legal exposure if contracts are not watertight.
  • Patients may lose confidence if accountability is unclear.

The problem is even starker in primary care. NHS Trusts at least have procurement specialists, information governance teams, and clinical safety officers who can scrutinise contracts and assess risk. GP practices, community pharmacies, and opticians have none of that infrastructure.

If every GP surgery is left to navigate AI adoption alone, the risks multiply thousands of times across the system. An individual practice cannot realistically negotiate liability terms with a vendor or run safety assurance on an adaptive model.

This creates a two-tier risk profile:

  • Trusts: exposed, but with some governance capacity.
  • Primary care: exposed, without the tools to defend itself.

For the NHS as a whole, this makes a coherent national game plan essential. Without it, we risk pushing the greatest legal and clinical risk onto the smallest providers, and ultimately, their patients.

Closing Reflection

The regulatory and legal framework in the UK is fragmented, and tort law defaults responsibility to clinicians and Trusts. Without the clarity provided by the EU AI Act, the NHS has two options:

  1. Rely on national frameworks that are strong enough to fill the gap.
  2. Be ruthless at the procurement and contracting stage to ensure suppliers share the liability.

Next week, I’ll outline a practical model for how the NHS could achieve this through shared regional governance and an innovation passport: a way to strengthen accountability without slowing innovation.

#AIinHealthcare #EthicalAI #NHSInnovation #HealthcareRegulation #DigitalHealth #MLCSU #DIU

John Uttley – Innovation Director & SIRO, NHS Midlands and Lancashire