Governing Trust in Health AI: A Qualitative Study of Cybersecurity Professionals' Perspectives

This qualitative study reveals that cybersecurity professionals view health AI as a fragile, augmented clinical infrastructure where institutional trust is contingent upon visible governance and accountability rather than technical performance alone.

Adekunle, T., Ohaeche, J., Adekunle, T., Adekunle, D., Kogbe, M.

Published 2026-03-03

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

The Big Picture: Who is Watching the Watchmen?

Imagine a hospital as a massive, high-tech spaceship. For a long time, we've been talking about the pilots (doctors) and the passengers (patients). We've asked: "Do the pilots trust the autopilot? Do the passengers feel safe?"

But this study asks a different question: "What do the engineers in the engine room think?"

The researchers interviewed cybersecurity professionals—the people who build the digital walls, guard the data, and keep the spaceship's computer systems from crashing. They wanted to know: When we put Artificial Intelligence (AI) into healthcare, do these security experts trust it? And how do they think we should govern it?

The Four Main Takeaways (The "Engine Room" Report)

Here are the four main things the security experts told the researchers, translated into everyday metaphors:

1. AI is a "Super-Powered Co-Pilot," Not a Replacement

The Metaphor: Imagine a GPS in a car. It can tell you the fastest route, warn you about traffic, and even suggest a detour. But you wouldn't let the GPS drive the car while you sleep, right? You still need a human hand on the wheel.

The Finding: The cybersecurity pros didn't see AI as a robot doctor that takes over. They see it as augmented infrastructure: a tool that helps doctors work faster and spot things they might miss (like a super-fast X-ray reader). However, they insist that a human must always remain the pilot who makes the final decision. If the AI says, "This looks like a broken bone," the doctor still has to look at the X-ray and say, "Yes, I agree."

2. The Digital House is "Leaky," and AI Makes the Leaks Bigger

The Metaphor: Imagine trying to build a fancy new glass extension onto an old house that already has cracked windows and a shaky foundation. If you add a high-tech glass wall (AI) to a house that is already falling apart (fragmented data systems), the whole thing becomes even more fragile.

The Finding: The experts said healthcare data is messy. It's stored in different places, in different formats, and often not very securely. AI needs huge amounts of data to work, which means it pulls from all these leaky pipes. The experts worry that because the foundation is weak, the AI might accidentally expose private patient info or get confused by bad data. They view data breaches not as "accidents" that might happen, but as events that will happen, so we need to be ready for them.

3. Trust is Like a "Slow-Brewing Tea," Not a Light Switch

The Metaphor: You don't trust a stranger with your house keys the moment you meet them. You trust them slowly, over time, as they prove they are honest and careful.

The Finding: The experts said you can't just flip a switch and say, "We trust this AI." Trust is contingent (it depends on things). It depends on:

  • How the AI was trained (was it fed good data?)
  • Whether humans are checking its work
  • How the hospital reacts when things go wrong

If a hospital is transparent and admits mistakes, trust grows. If it hides things, trust evaporates.

4. Security Guards are the "Trust Architects"

The Metaphor: Think of cybersecurity professionals not just as people who fix broken locks, but as architects of trust. If they build a strong, visible safety net, people feel safe jumping. If they are invisible or reactive, people are scared to jump.

The Finding: These experts see their job as more than just technical. They are responsible for "stewardship." This means:

  • Security by Design: Building safety into the AI from day one, not taping it on later.
  • Continuous Testing: Like checking a bridge for cracks every day, not just once a year.
  • Education: Teaching doctors and admins that "digital safety" is just as important as "hand washing."

The Bottom Line: Why This Matters

The paper concludes that trust in Health AI isn't about how smart the computer is; it's about how responsible the hospital is.

If a hospital has a history of being careless with data, or if they treat security as an afterthought, patients and doctors won't trust the AI, no matter how "smart" it is. But if the hospital treats cybersecurity as a core part of patient care—like a nurse or a doctor—then trust can grow.

In short: We can't just build faster AI. We have to build safer, more honest hospitals to hold it. The security experts are the ones holding the blueprints for that trust.
