AI in healthcare: Just because the AI intern can, doesn’t mean it should

Written by: Lauren Turner
Published on: 18/03/2026
Categorised: News, Thought leadership
Last updated: 18/03/2026

Dr Sean Higgins, Chief Product Officer at Lanas, explores why we’re keeping a close eye on the AI intern and where we’re drawing the line in clinical systems – for now.

AI in healthcare has officially entered its ‘helpful, but slightly over-enthusiastic intern’ phase.

It can summarise records, draft letters, surface patterns we’d otherwise miss, and make clinical systems software feel noticeably less like it was designed in the early 2000s. That’s not hypothetical progress – it’s real, tangible utility. As a clinician, I feel it immediately: less admin, fewer clicks, and more time spent actioning the work that actually matters.

But as the capabilities of AI in healthcare improve, the most important question isn’t which model you use, or even where it’s hosted. These are important debates, but they’re not the most pressing or difficult discussions.

Setting boundaries

The more difficult, and more interesting question is: ‘Where is the line between AI being useful and AI having too much autonomy inside clinical systems?’

Not in theory. Not five years from now. But today.

This isn’t an abstract position for us at Lanas. We’ve already deployed AI across multiple settings: in primary care and aged care in Ireland, and in secondary care in the UK. In each case, we’ve taken the same approach – one that’s cautious, iterative and grounded in real-world clinical workflows rather than demos or conference slides. We ship, we observe, we listen and we adjust. (And we argue internally more than people might expect.)

Positive impact

Summarising long patient records, condensing documents, ambient scribing of consultations, drafting administrative or clinical text, and helping clinicians orient themselves more quickly are all examples of AI acting as a genuinely useful assistant. In these cases, the clinician remains firmly in control. The output is visible. It’s reviewable. It can be edited, rejected or ignored entirely. The clinician is still the author, even if they didn’t type every word.

That’s the comfortable zone – and it’s where most of today’s real value lives.

Blurred lines

However, things start to become more complex when healthcare AI software shifts from assisting to acting. The risk doesn’t come from AI writing something. It comes from AI doing something, especially when the action happens without a human fully seeing, reviewing and approving it.

A deliberately unglamorous example makes the point. Appointment reminders have existed forever. They’re boring, deterministic and predictable – if a patient has an appointment at 10.30am on Friday, the system sends an SMS saying exactly that. Everyone knows what’s going out the door.

Now introduce an AI agent into the flow. The message becomes more personalised, more conversational, maybe even more helpful. Most of the time, that’s fine. Sometimes it’s better. Occasionally, though, it won’t be. And when the wording is wrong, when assumptions creep in, or when something slightly hallucinated slips through, the model doesn’t carry the risk. The practice does. The clinician does.

And that’s the inflection point we keep coming back to internally.

AI can suggest, but it shouldn’t decide

The line we’re comfortable drawing today is a simple one. AI in healthcare can be trusted to prepare and recommend, but it shouldn’t act on behalf of a provider without visibility, review and explicit execution by a human. AI can draft, but it shouldn’t send. It can suggest, but it shouldn’t decide. Nothing should be published, transmitted, or actioned externally without a person consciously pressing ‘go.’

You can think of it like editing a manuscript – write whatever you like, but nothing goes to print without the editor’s approval. Or like air traffic control – systems surface information and make constant recommendations, but it’s a human who ultimately clears the plane for take-off. That final checkpoint matters.

Transparency matters

If AI helped create something, it should be made obvious. Clear labelling, clear timestamps, and clear signals that a summary reflects a point in time rather than permanent truth aren’t compliance theatre – they’re basic safety design principles. Clinical context ages. Information changes. Summaries go stale. Pretending otherwise is how trust erodes, and rapidly too.

None of this is an anti-AI position, and it’s certainly not a claim that this line should never move. We debate the use of AI in healthcare constantly, especially when customers quite reasonably ask for things that look slick, impressive and extremely ‘future-forward.’ While it’s often possible to make a system feel faster or more magical by removing constraints, the most pressing challenge lies in actually deciding which constraints matter.

We’re comfortable experimenting around scale, speed and utility. These are areas where iteration is healthy and learning is expected. What we’re not comfortable with getting wrong is privacy, security and trust. Autonomy, in particular, is where things can rapidly go sideways if you’re not careful.

That’s why we draw the line where we do today

For now, responsibility still sits with clinicians and practices. Models don’t assume liability. Safety nets aren’t yet strong enough to justify full autonomy. Until that changes, until accountability is genuinely shared and auditability is robust, keeping a human in the loop isn’t conservatism. It’s good product judgement.

Healthcare AI software should help clinicians think faster, see clearer and spend less time wrestling with software. It just shouldn’t press ‘send’ without their knowledge.

At least, not just yet…

Dr Sean Higgins is Chief Product Officer at Lanas, leading our global product strategy.

A qualified general practitioner and founder of Billink Payments, he has worked at the intersection of healthcare and technology for over a decade.
