Why We Have to Get AI in Healthcare Right – A Personal Reflection from Liz Ashall-Payne, Founder of ORCHA


Over the past few weeks, I’ve read the headlines about AI in health and care and, like many of you, felt a real sense of unease.

These stories aren’t just news items—they’re reminders that while AI holds huge promise for healthcare, we’re still working out how to use it safely and responsibly. And they’re exactly why we at ORCHA, in partnership with Hartree, have been working so hard to build something practical: a way to properly assess AI tools used in health and care.


What’s Really Going On

AI is already starting to shape the future of healthcare—from helping spot diseases earlier to making services more efficient. But with all this potential comes very real risk. If we’re not careful, we risk rolling out tools that aren’t properly tested, that misuse data, or that simply don’t work for everyone—especially those most in need.

Take the recent case of the Foresight model, which was trained on data from 57 million people. Because GPs weren’t fully aware of how that data was being used, the whole project was paused. Trust was lost.

Then there’s the warning about AI translation apps. I completely understand why services might turn to them—they’re fast, easy, and cheap. But when we’re dealing with someone’s health, “good enough” just isn’t good enough. Miscommunication can have serious consequences.


What We’re Doing About It

This is exactly why we created the AI Assurance Designathon—a space where we brought people together from all sides: clinicians, developers, regulators, ethicists. It wasn’t about shiny tech or high-level policy talk. It was about asking the hard, practical questions:

  • How do we make sure AI in healthcare is tested properly?
  • How can we check that the data it uses is handled responsibly?
  • What does “safe” really look like when it comes to AI?

And then we built a solution. A new AI Assurance Module—something structured, practical, and ready to use. A way to evaluate digital health tools that include AI, without making it so complicated that it becomes a blocker.


Why This Matters to Me

I’ve spent years working in and around health and care. I’ve seen how digital can change lives—but I’ve also seen the damage that can be done when tech is rushed, misunderstood, or not properly tested.

At ORCHA, our job has always been to help people find and use digital health tools they can trust. As more and more of those tools start to use AI, we have to raise the bar. Not just because it’s the right thing to do, but because if we don’t, people will stop trusting digital health altogether.

We can’t let that happen.

This work we’re doing with Hartree isn’t just about building a framework; it’s about building trust. And we’re going to keep working at it until the systems around AI in healthcare are as strong, fair, and safe as they need to be.


Let’s Chat

If you’re working with AI in health and care and want to know what’s safe, what’s effective, and how to manage the risk, let’s grab a coffee and chat.

I’ll be at Confed in Manchester next week – drop me a message, and let’s connect.

Because getting this right is something we all have a stake in.

– Liz Ashall-Payne

Founder, ORCHA