AI and Ethics Series: From Ethics to Assurance - Building the NHS 'AI MOT' with an AI TrustMark Review
We test cars every year for roadworthiness. Shouldn't we do the same for the AI tools guiding patient care? Following my article on Regional AI Ethics Committees, I've explored how we could keep NHS AI safe after deployment through a national AI TrustMark Review, ensuring every system remains accurate, fair, and transparent long after go-live.
Digital Innovation Unit renews ISO accreditations for quality and information security
The Digital Innovation Unit (DIU) has successfully renewed its certifications to the internationally recognised ISO 9001 (Quality Management) and ISO 27001 (Information Security Management) standards for a further three years.
AI and Ethics Series: Building a Shared Framework for AI Governance in the NHS - Seven Regions, One Conversation
The NHS is adopting AI at speed (transcription tools, diagnostics, triage systems), but our governance hasn't caught up. In my latest article, I've outlined a working concept for Regional AI Ethics Committees (RAIECs): a way to share expertise, reduce duplication, and strengthen assurance across England. The idea is simple: seven regions, one shared framework.
- A single N365 portal for AI reviews and approvals
- Mutual recognition of decisions between regions to reduce duplication
- Consistent training and procurement standards
- Continuous post-market monitoring
Without EU-style AI legislation, we need smarter governance and clearer contracts to protect clinicians, Trusts, and especially primary care. I'm now testing this concept with PPIE groups, NHS colleagues, and Health Innovation Networks, and I'd really value your feedback.
AI and Ethics Series: AI in UK Healthcare - The Regulatory and Legal Landscape (and Its Gaps)
AI in healthcare is moving fast, but the UK’s regulatory framework isn’t keeping pace. We have MHRA, NICE, ICO, CQC, HSSIB and long-standing tort law. Each plays a role, but none owns the whole picture. The result? If an AI system fails, the liability often lands on clinicians and Trusts by default. This risk is even sharper in primary care, where GP practices and pharmacies don’t have procurement or governance teams to protect them. Without a coherent national plan, the smallest providers could end up carrying the biggest risks. In Europe, the AI Act provides a clearer chain of accountability. In the UK, we’re left patching gaps with contracts and assumptions. In article 6, I map out the current UK landscape and explain why the absence of AI-specific legislation leaves the NHS and especially primary care exposed.
AI and Ethics Series: Who Watches the Machines? Building Trust in NHS AI Through Governance and PPIE
Who watches the machines? AI in the NHS is still new, so examples of failure are rare. But when they happen, they matter. The Google DeepMind & Royal Free case showed what happens when governance is weak and patients are left out of the conversation: trust collapses, and progress stalls. In my latest article I explore why oversight and governance can’t be an afterthought, and why patient and public involvement must be meaningful, not just token seats at the table. That means training, experience, and independence for all committee members, from patients to non-execs. If the NHS wants AI that is both powerful and trusted, governance is the multiplier.
AI and Ethics Series: Why Bias Auditing Is Essential to the Ethical Use of AI within the NHS
In this article I start to dig into the detail of why ethics as a framework matters so much within the NHS. Bias is something we may not consider when we set out to build a machine learning model, or when we go to market to procure an AI solution, yet it is bias that can lead to patient harm, reinforce health inequality, and ultimately cause a project to fail.
