Trust barrier limits agentic AI's autonomous deployment in healthcare at scale
Early deployments show measurable efficiency gains while governance gaps and data readiness concerns limit large-scale autonomous adoption
Healthcare leaders are advancing cautiously on agentic AI, emphasizing governance, validation and human oversight as automation begins to show measurable operational gains across clinical environments.
The ability of AI systems to act with increasing autonomy introduces a new class of risk, particularly in clinical settings where errors directly affect patient outcomes and institutional trust.
“In clinical deployment, you can’t really strike out, because that actually has real consequences for human health. When it comes to delivering real care, there is a human expert in the loop, so there are no rogue agents running wild,” said Betsabeh Madani-Hermann, global head of research at Philips.
“You only get really one chance at this. The first time it goes wrong, public trust is gone,” she said.
Madani-Hermann illustrated the risk using a historical analogy, referring to the 1854 cholera outbreak in London, when epidemiologist John Snow traced infections to a contaminated water pump through careful data mapping.
“If that pump was dismantled before the map was complete, it could have been the wrong pump. That could be an agentic AI that goes wrong,” she said.
The comparison underscores a central governance challenge: automation decisions must be grounded in sufficient data and validated insight, rather than speed alone. In healthcare, where many interventions are irreversible, the tolerance for error is effectively zero.
Current deployments, therefore, prioritize constrained autonomy, in which AI systems augment decision-making without replacing clinical judgment. This approach reflects both regulatory expectations and institutional risk management practices.
At the same time, organizations are under pressure to demonstrate value from AI investments, creating tension between innovation and caution. Leaders must balance the need to move quickly with the requirement to maintain clinical safety and public confidence.
Philips is a Dutch health technology company focused on areas including diagnostic imaging, image-guided therapy, patient monitoring and health informatics.
Clinical ROI
Madani-Hermann spoke at the AI and Business Innovation Summit, held in London on March 25 and organized by Economist Impact, with the session moderated by Vaibhav Sahgal, the organization's principal for technology and finance.
She said early deployments of AI and automation are already generating measurable returns in imaging, diagnostics and clinical workflows, particularly in high-volume hospital environments.
“If you can capture an MRI (magnetic resonance imaging) image three times as fast with the same accuracy, send the patient through a system five times faster, and cut off 10 to 20 minutes of a cardiologist’s time that they spend with a patient, just imagine the efficiency that creates in a broader healthcare ecosystem,” she said.
The examples point to higher patient throughput, shorter waiting times and better utilization of scarce clinical resources. In systems under persistent strain, incremental efficiency improvements can compound into significant capacity gains.
Healthcare systems face structural workforce constraints that cannot be solved through hiring alone.
“You can’t really recruit your way out of the gaps in the healthcare system we have today. There are simply not enough resources to be deployed,” she said.
AI systems can also process volumes of clinical data beyond human capability, supporting faster and more accurate decision-making in complex cases. This includes identifying subtle patterns in imaging data and correlating multiple variables across patient histories.
Advances in AI are redefining a long-standing trade-off between speed and quality in healthcare delivery.
“Access to knowledge has been something that’s been a barrier for most of human history. That access is now becoming a lot easier, and that is why you can actually increase the quality while the speed also goes up,” she said. “There are some systems that can go out there and make their own decisions fully, others are more like copilots. A lot of people conflate what agentic AI actually means.”
Madani-Hermann suggested AI is easing a long-standing tension between thorough analysis and rapid response as data becomes more accessible and computational tools augment human judgment. She added that organizations often conflate different levels of agentic AI, which spans a spectrum from assistive copilots to fully autonomous systems, and that this confusion can distort expectations and deployment decisions. Distinguishing between the two ends of that spectrum is critical for governance and risk management.
Risk and access
Despite these advances, organizations must carefully manage deployment timing and data readiness, particularly in environments where decisions have direct clinical consequences.
Madani-Hermann said insufficient data and premature decisions remain key risks in real-world deployment.
“Not having enough data to make the right decision worries me. Making decisions too early is a real concern,” she said. “You don’t want to wait too long, but you also don’t want to jump the gun too quickly. There is this balance you have to keep.”
Healthcare organizations must anchor deployment decisions in clinical safety principles.
“We start from ‘do no harm’, and then whatever you can add on to it,” she said.
Her comments imply a continuous validation process in which systems are tested, monitored and refined over time rather than deployed in a single step. Governance frameworks must evolve alongside the technology.
She also cautioned that AI could either reduce or reinforce inequalities in healthcare access, depending on how systems are designed and deployed.
“Technology is supposed to make things more equitable, give reach to places where it hasn’t been reaching before,” she said. “But if you’re not careful, there could be those same inequalities that get exacerbated instead of being diminished.”
She noted that access challenges are not limited to rural regions but also affect patients in major cities, where logistical barriers can delay treatment.
“I’m not just talking about rural and urban; you could be stuck in traffic and not make it to a stroke center. Even in the right city, you may not access care,” she said.
Her comments suggest that closing these gaps will depend not only on technological capability but also on deployment models, infrastructure and policy support.
Philips continues to invest in both breakthrough innovation and incremental improvements across its portfolio. It works closely with clinicians and partners to align development with real-world needs and constraints.
Madani-Hermann said most innovation balances long-term breakthroughs with immediate operational needs.
“We were the number one patent filer with over 1,200 patents in Europe in med tech. But that’s only about maybe 20 or 30% of the story,” she said. “You can’t just say stop and build the next cool robot and bring it in five years. It just won’t work.”
Most innovation efforts focus on continuous improvements to existing systems, reflecting the operational realities of healthcare environments where services cannot be paused for long-term experimentation.
“There is no tech for the sake of tech in healthcare. It’s about starting with the problem first,” she said. “We co-innovate with people on the ground, we’re not just sitting in a corner saying it would be cool to build something.”
“Don’t just wait for technology to happen to you, but challenge yourself to learn every day and ask those hard questions.”
She encouraged business leaders to actively engage with emerging technologies while building internal understanding and capability.
Looking ahead, she said the pace of adoption will depend on how effectively organizations can integrate governance, data readiness and operational discipline into their AI strategies.