AI and Ethics Series: Taming the Wild West of Free AI Tools in Healthcare


Free AI tools are everywhere - and staff are already using them. The real risk? Not knowing who is using them, how, or why.

Welcome to the first in my new series on AI and Ethics from a SIRO’s perspective.

This time, I’m tackling a big question:

Can NHS staff safely use free AI tools — like ChatGPT, Claude, or Gemini — without putting information at risk?

It’s a tricky balance. On one hand, AI offers huge potential for improving efficiency. On the other, unmanaged use can be a serious information governance nightmare.

This hit home for me in November 2024 when I read an Australian case from the Office of the Victorian Information Commissioner (OVIC). It involved a social care worker’s unsupervised use of ChatGPT — and the fallout was a stark warning for anyone responsible for information risk. More on that later…

Why We Needed to Act

Like many NHS organisations, we’re under pressure to find efficiencies and explore AI’s potential. The real risk, I realised, wasn’t introducing free AI tools — it was not knowing how staff were already using them, for what, and by whom.

As SIRO, that uncertainty was unacceptable. So, I set two clear objectives:

  1. Create a safe, simple process for staff to use free AI tools without increasing corporate information risk.
  2. Capture and share effective use cases so the whole organisation could benefit.

Our Five-Step Approach

Here’s how we did it — quickly, and without tying staff up in red tape.

Step 1 – Registration

A simple SharePoint form lets staff register their chosen AI tool and its intended use case.

Step 2 – Risk Panel Review

The form feeds into a SharePoint list reviewed twice a week by a panel (SIRO, IG Lead, Cybersecurity Lead, Clinical Safety Officer). We check the tool’s privacy policy, assess risk, and record all decisions. Feedback is usually given within a week.

Step 3 – Approval & Safeguards

If approved, staff must complete MLCSU-developed training before use. They’re reminded that no personal or sensitive corporate data goes into free AI tools, and we show them how to opt out of their inputs being used to train the model.

Step 4 – Training & Feedback Loop

Training covers prompt engineering, how GenAI works, hallucination risks, and IG/Cyber/clinical safety responsibilities. Staff discussions feed back into policy refinement.

Step 5 – Communication & Transparency

We regularly publish updates across the organisation, raising awareness and informing governance groups and the board.

Two Months In – Results

  • 79 users approved for non-corporate AI
  • 13 pre-approved use cases with instant approval (risk already assessed)
  • 68 users trained
  • 13 AI tools approved and listed on our intranet for wider adoption

Is This Forever?

No. This was a short-term measure to get a handle on unknown risk.

We’re now rolling out Microsoft Copilot to 279 users, with all staff getting Copilot Chat access. Once everyone’s trained (using a refined version of our current AI training), I expect we can retire the non-corporate approval process in 12–18 months — neatly aligned with CSU closure!

Why Bother with Free AI if We Have Copilot Chat?

While Copilot (especially the paid version) is valuable, it still lags behind some free tools in certain areas:

  • Google’s NotebookLM organises research in ways Copilot can’t.
  • Napkin AI creates export-ready diagrams on par with big consultancy visuals — at zero cost.
  • Claude and Gemini often release new features faster than integrated platforms like Copilot, and they typically perform better at coding.

The Cautionary Tale That Sparked Action

The OVIC case I mentioned earlier:

An Australian social care worker used ChatGPT to draft a court protection order for a child at risk. They didn’t proofread it. The AI misrepresented key evidence, added nonsensical sentences, and Americanised the spelling. Worse, the worker had uploaded the child’s case file — a breach of data protection law — and had used ChatGPT over 100 times for case work.

The employer had no AI policy, no training, and no monitoring in place. Between July and December, 900 staff visited the ChatGPT site. OVIC concluded that GenAI was so novel and powerful that it required dedicated policy, training, and monitoring.

That’s exactly what I wanted to avoid in my organisation.

Could This Work for Other NHS Trusts?

Absolutely. Our approved use cases, training content, and assessment process could be adapted elsewhere, reducing duplication of effort and lowering risk across the board.

In my next article, I’ll share how we implemented our AI Ethics framework to deliver on our strategy’s ambitions.

John Uttley – Innovation Director & SIRO, NHS Midlands and Lancashire

#AIAdoption #HealthTech #DigitalTransformation #ResponsibleAI #NHSInnovation #RiskManagement #AITraining #GenAI