
Safety isn’t a feature - it’s the foundation of AI in healthcare


Written by Numan Editorial


The rise of AI in healthcare has opened the door to extraordinary possibilities: personalised support, faster routes to the right care, and systems that learn with every interaction. However, it has also revealed real risks, and not always the kind that make headlines.

The most dangerous may not be the wild hallucinations. They’re the errors that sound right: clean, confident, and plausible, until they quietly lead someone down the wrong path.

That’s why at Numan, safety isn’t a bolt-on. It’s not a disclaimer or an afterthought. It’s the foundation we build on because if we’re serious about reshaping care, we have to be serious about protecting patients.

Beyond the checklist: real safety starts before the build

Ticking compliance boxes is the bare minimum. Regulations like the EU AI Act, GDPR, and MHRA guidance provide essential scaffolding. But frameworks aren’t blueprints, and compliance doesn’t guarantee responsibility.

At Numan, we move early: we test edge cases, simulate misuse, and plan for failure before models go live. It’s not just about what’s legal; it’s about what’s responsible. We ask ourselves: how could this go wrong, and how do we catch it if it does?

Friction isn’t failure - it’s safety at work

In most tech teams, friction is a bug. But in AI for healthcare, friction is a feature. It adds cost, but the stakes demand it. When a patient message gets flagged for safety, or a chatbot response is paused for review, it isn’t a breakdown. It’s the system working as it should.

We build with intentional slowdowns:

  • Restricting certain outputs by default

  • Escalating messages that mention risk or harm

  • Keeping humans in the loop

  • Versioning prompts and running extensive evaluations

  • Running our in-house safety classifier

This isn’t just theoretical. In practice, our safety classifier has caught messages that appeared innocuous at first glance but contained subtle safety concerns. Because the system paused, a human was able to intervene, preventing a potential risk before it reached the patient.
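To make that concrete, here is a minimal Python sketch of the pause-and-escalate pattern described above. It is illustrative only: the keyword check standing in for the classifier, the names (looks_risky, triage, TriageResult), and the example risk flags are all assumptions, not Numan’s actual implementation.

```python
# Illustrative sketch of a "pause and escalate" flow. The keyword-based
# check and all names here are hypothetical stand-ins for an in-house
# safety classifier, not production code.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TriageResult:
    flagged: bool         # True if the message needs clinician review
    reply: Optional[str]  # None means the draft reply is held back


# Assumed example phrases only; a real classifier would be a trained model.
RISK_FLAGS = ("overdose", "self-harm", "chest pain")


def looks_risky(message: str) -> bool:
    """Stand-in for the safety classifier: flag messages that mention risk or harm."""
    text = message.lower()
    return any(flag in text for flag in RISK_FLAGS)


def triage(message: str, draft_reply: str) -> TriageResult:
    # Intentional slowdown: a flagged message is never answered automatically.
    # The draft reply is paused and routed to a human reviewer instead.
    if looks_risky(message):
        return TriageResult(flagged=True, reply=None)
    return TriageResult(flagged=False, reply=draft_reply)


if __name__ == "__main__":
    result = triage("I keep thinking about self-harm", "Here is some general advice...")
    print(result.flagged, result.reply)  # True None -> escalated to a clinician
```

The design choice worth noting is that the flagged path returns no reply at all: holding the message is the default, and sending is the exception.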

This is what safety looks like in action: layered, live, and always accountable to a clinician.

Knowing the limits is part of knowing the field

Strong boundaries make care safer. We focus on what AI can do today: summarise, signpost, and support healthy habits. Clinical decisions like diagnosis or dosing require human oversight and appropriate regulation.

Our assistant, Nu, never gives dosing advice. It doesn’t guess at diagnoses. And it doesn’t suggest medication changes. Why? Because trust doesn’t come from flashy features. It comes from clarity, honesty, and knowing when to hand over to a healthcare professional.
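As a rough sketch of what such a boundary can look like in practice, the snippet below routes a fixed set of restricted intents straight to a clinician handover instead of returning the model’s draft. The intent names, the handover wording, and the respond function are hypothetical illustrations, not Nu’s actual behaviour or code.

```python
# Hypothetical hard boundary: certain intents are never answered by the
# model and are always handed to a healthcare professional.
RESTRICTED_INTENTS = {"dosing_advice", "diagnosis", "medication_change"}

HANDOVER_MESSAGE = (
    "I can't advise on that, but I can connect you with a clinician who can."
)


def respond(intent: str, draft_reply: str) -> str:
    # Boundary check first: restricted intents bypass the model's draft entirely.
    if intent in RESTRICTED_INTENTS:
        return HANDOVER_MESSAGE
    return draft_reply


print(respond("dosing_advice", "You could try doubling the dose..."))  # handover
print(respond("habit_support", "Here are three small changes to try this week."))
```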

Healthcare doesn’t need more shortcuts. It needs support systems that extend care safely, especially when it comes to behaviour change. That’s where we focus: combining medication with coaching, not replacing care with convenience.

The numan take

The loudest innovations aren’t always the most lasting. At Numan, we believe in building AI that’s bold in vision but careful by design.

That means testing before scaling. Adding guardrails before features. And holding ourselves to a higher standard than just “it works most of the time.”

Because if AI is going to reshape healthcare, it has to do more than impress. It has to earn trust every single time. Trust is what drives adoption and proves the value of innovation.

We don’t move fast and break things. We move wisely and test things. And when we draw a line, we do it not just to avoid harm, but to show patients and regulators what we stand for. We believe in bold innovation underpinned by uncompromising safety standards.
