AI and Ethics Series: Who Watches the Machines? Building Trust in NHS AI Through Governance and PPIE
Monday, 15 September 2025
Failures of AI in the NHS are rare, mainly because the technology is still new to healthcare. That means examples are few and far between, but when they happen, they matter. The Google DeepMind partnership with the Royal Free is a case in point. The ambition was good: using AI to detect acute kidney injury faster. But patients weren't properly informed, data governance was weak, and regulators later ruled the project unlawful. The technology itself wasn't the scandal. The lack of oversight and patient involvement was, and trust suffered as a result.
This matters because trust, once broken, doesn’t just affect a single project. It spills over. When one high-profile failure occurs, it colours public perception of every subsequent AI initiative. For the NHS, which depends on public trust like no other institution, that’s a risk we can’t afford.
If the NHS is serious about using AI responsibly, the question isn't just what the technology can do, but who gets to decide how it is used, and who ensures it doesn't cause harm.
Beyond Technical Governance
Too often, AI projects are managed with the same tools as any other IT deployment: Gantt charts, RAID logs, and a project board with the usual mix of staff. But an algorithm deciding whether a patient's scan is "high risk" isn't the same as rolling out a new HR system. The stakes are fundamentally different.
Governance for AI in healthcare needs to go further:
- Independent scrutiny: Not just the project team marking their own homework.
- Clear accountability: When outcomes go wrong, there must be clarity on who is responsible: the clinician, the trust, the vendor, or a combination.
- Ethical review as routine: Just as clinical trials can’t skip ethics committees, AI should never bypass formal ethical oversight.
This isn’t bureaucracy for its own sake. It’s about ensuring AI is safe, effective, and aligned with NHS values.
When Oversight Goes Missing
The Royal Free/DeepMind case demonstrated the cost of weak oversight. Patients were never properly informed that 1.6 million identifiable health records were being shared with a private company.
The result? A project that could have been a national exemplar instead became a national cautionary tale. The lesson is blunt: without robust governance, even well-intentioned innovations will lose legitimacy.
And the Royal Free case isn't unique to AI. History is full of examples where healthcare technology moved faster than governance; care.data in 2014 is another reminder. Each failure deepens public scepticism, making the next innovation harder to implement.
The Role of PPIE
One of the strongest safeguards we can build into AI oversight is meaningful patient and public involvement. Not tokenistic consultation, but genuine participation in shaping how AI is designed, tested, and governed.
That’s why my next step is working with three established PPIE groups to co-create and validate an oversight model for NHS AI. Building governance together from the start.
A stronger model should include:
- Formal patient seats on AI ethics committees, with equal voice alongside clinicians and managers.
- Public input into defining risks and benefits, not just technical experts deciding what matters.
- Independent non-executive members, ensuring oversight isn’t dominated by those with a vested interest in rapid deployment.
For this to work, participation has to be meaningful. That means training patient members and non-executive directors in the core concepts of AI so they can engage with confidence. Otherwise, public involvement risks being reduced to a box-ticking exercise: "we had patients in the room", without giving them the knowledge and experience needed to influence decisions. Empowering all committee members, professional and lay alike, ensures oversight is credible and respected.
A Framework for Trust
Some argue that strong governance slows innovation. In reality, the opposite is true. When staff and patients see that AI has been tested, reviewed, and independently scrutinised, they are far more likely to adopt it with confidence. As I discussed in previous articles, a process-based approach to governance means the level of risk determines the level of scrutiny applied, so not all AI projects will be subject to the same degree of oversight.
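To make the risk-proportionate idea concrete, here is a minimal sketch of how a trust might route proposals to different levels of scrutiny. The tiers, criteria, and review steps are all hypothetical illustrations, not NHS policy or any existing framework.

```python
# Hypothetical risk tiers and the scrutiny each attracts.
# The tier names, criteria, and review steps below are illustrative only.
SCRUTINY_BY_TIER = {
    "low": ["project-board sign-off"],
    "medium": ["project-board sign-off", "clinical safety review"],
    "high": ["project-board sign-off", "clinical safety review",
             "independent ethics committee", "PPIE panel review"],
}

def risk_tier(affects_care: bool, identifiable_data: bool) -> str:
    """Classify a proposed AI project into an illustrative risk tier."""
    if affects_care:          # e.g. an algorithm flagging scans as high risk
        return "high"
    if identifiable_data:     # e.g. processing identifiable patient records
        return "medium"
    return "low"              # e.g. back-office automation on anonymous data

def required_scrutiny(affects_care: bool, identifiable_data: bool) -> list[str]:
    """Return the oversight steps a project of this risk profile would need."""
    return SCRUTINY_BY_TIER[risk_tier(affects_care, identifiable_data)]

# A scan-triage tool that influences care attracts the full set of reviews;
# a low-risk admin tool attracts only routine sign-off.
print(required_scrutiny(affects_care=True, identifiable_data=True))
print(required_scrutiny(affects_care=False, identifiable_data=False))
```

The point of the sketch is the shape, not the rules: scrutiny scales with risk, and the highest tier always includes independent and lay review rather than the project team alone.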
Trust is the multiplier. Governance, transparency, and accountability don’t just prevent harm, they enable safe adoption at scale. Without them, projects stall, face public backlash, or get quietly shelved.
For the NHS, embedding governance that combines professional expertise with lived experience isn’t optional. It’s the only route to making AI both technically powerful and socially legitimate.
In my next article I will look at who is responsible when things go wrong, and the role of the regulators.
#AIinHealthcare #EthicalAI #NHSInnovation #AIGovernance #PPIE #HealthTech #DIU
John Uttley – Innovation Director & SIRO, NHS Midlands and Lancashire
