AI Is Changing Healthcare Faster Than Most Systems Are Ready For
(My key takeaways from a clinician-led roundtable on AI, access, and care delivery)
Healthcare is shifting fast, and artificial intelligence is no longer a future concept sitting in research labs or pilot programs. It’s already embedded in clinical workflows, operational systems, and patient interactions, often in ways that feel subtle, uneven, and sometimes uncomfortable.
That reality came through clearly in a roundtable I watched a few days ago on the AI and Healthcare YouTube channel, led and moderated by Dr. Sanjay Juneja (popular on Instagram as OncDoc). The discussion brought together voices from across healthcare delivery and AI implementation: Dr. Debra Patt, Dr. Jason Hill, and Mika Newton.
What stood out wasn’t hype or fear-mongering. It was how clearly AI exposes healthcare’s real constraints (workforce shortages, administrative overload, data fragmentation, and policy lag) while also offering practical ways to move forward. Below are the most important themes that emerged during the roundtable.
A Perfect USMLE Score Doesn’t Mean AI Can Practice Medicine
One of the most headline-grabbing moments discussed was an AI system achieving a perfect score on the USMLE. On paper, that sounds like a turning point. In practice, the panel saw it less as a milestone for clinical practice and more as fuel for fear-mongering within the community.
Medical exams largely test recall, pattern recognition, and the ability to apply known frameworks. AI excels at that. Clinical care, however, rarely happens with perfect information. Patients arrive with incomplete histories, competing priorities, social constraints, and personal values that don’t fit neatly into guidelines.
As the discussion made clear, knowledge isn’t the same as wisdom. AI can retrieve evidence instantly, but judgment, especially under uncertainty, remains a human responsibility.
AI Raises the Floor More Than the Ceiling
A recurring insight was that AI doesn’t primarily make the best clinicians dramatically better. Instead, it raises the baseline.
This mirrors what has already been observed in software engineering: AI tools tend to pull the bottom third closer to the median rather than creating new superstar, hyperscaler-ready coders. In healthcare, that matters enormously, because the healthcare system isn’t short on intelligence; it’s short on capacity. With an aging population, rising chronic disease burden, and shortages across primary care, oncology, nursing, and behavioral health, the real challenge is delivering consistent, good care to more people. AI can genuinely help here, and its role is expanding care coverage, not replacement.
The Fastest Wins Aren’t Clinical, They’re Administrative
Perhaps the most practical takeaway was also the least glamorous: the highest-impact AI use cases today are administrative. Think of these as use cases that relieve doctors of the toil that takes time away from delivering critical care to their patients.
Documentation, post-incident reports, coding, billing, prior authorizations, and quality reporting consume enormous amounts of clinician time. The emotional toll is even greater: switching from a conversation about a life-altering diagnosis to checkbox-driven paperwork creates burnout that many clinicians describe as moral injury, not just workload fatigue.
Automating and simplifying these tasks doesn’t just save time. It keeps experienced clinicians in practice longer and allows them to focus on work that actually requires clinical judgment.
AI Works Best as a “Peripheral Brain”
Where AI does shine clinically is decision support. The panel shared real examples of using AI to quickly synthesize medical literature for esoteric questions or refresh rare differential diagnoses, tasks that would otherwise require hours of research after clinic hours.
In these moments, AI acts like a peripheral brain: fast, searchable, and tireless. The clinician still integrates patient preferences, goals of care, and real-world context to reach a decision, and the cognitive load is lighter as a result.
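To make that division of labor concrete, here is a minimal Python sketch of the pattern. It is illustrative only: `query_llm` is a stand-in for whatever vetted model or service an organization actually uses, and the key design choice is that the output is always a draft for clinician review, never a final decision.

```python
from dataclasses import dataclass

def query_llm(prompt: str) -> str:
    # Stand-in for an organization's approved model or service (hypothetical).
    raise NotImplementedError("Plug in your vetted model here.")

@dataclass
class EvidenceSummary:
    question: str
    draft: str
    needs_clinician_review: bool = True  # the AI never finalizes the decision

def synthesize_evidence(question: str) -> EvidenceSummary:
    # The "peripheral brain": retrieve and summarize, flag uncertainty,
    # and hand the draft back to the clinician who owns the judgment call.
    draft = query_llm(
        "Summarize current evidence, with citations, for: "
        f"{question}. Flag conflicting guidance and open questions."
    )
    return EvidenceSummary(question=question, draft=draft)
```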
Smaller, On-Device Models Are a Turning Point
One of the more practical insights from the discussion was the shift away from “bigger is better” AI toward smaller, faster models that can run locally within healthcare environments.
This isn’t just a technical preference; it’s a workflow and compliance advantage. When AI can operate on-device or inside existing systems, it reduces the need to move sensitive patient data across networks, which lowers privacy risk and simplifies governance. Just as importantly, it enables real-time insight.
In healthcare, timing often determines whether information is useful for patient care delivery. A recommendation delivered hours later is documentation. A recommendation delivered in the moment is decision support.
That’s why the panel emphasized feedback loops. The closer an insight is to the action, whether it’s a clinical decision, an operational response, or a patient behavior, the more likely it is to influence outcomes. In that sense, the value of AI isn’t about making decisions for clinicians or patients. It’s about surfacing the right context at the right time.
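As a rough sketch of what “on-device” means in practice, here is how local inference might look using the open-source llama-cpp-python bindings, one of several ways to run a small model locally. The model path and prompt are placeholders; the point is that the prompt, and any patient context inside it, never leaves the machine.

```python
# A minimal local-inference sketch, assuming the open-source
# llama-cpp-python bindings and a small GGUF model stored on the device.
# The model path and prompt below are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="/opt/models/clinical-small.gguf", n_ctx=2048)

prompt = (
    "List the key interactions to check before adding amiodarone "
    "for a patient already on warfarin."
)

# No network call is made here: the prompt (and any PHI it contains)
# stays on the local machine, which simplifies privacy review.
out = llm(prompt, max_tokens=256, temperature=0.2)
print(out["choices"][0]["text"])
```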
Interoperability Is Improving, Adoption Is Still the Bottleneck
The panel made an important distinction between data being available and data being actionable.
Technically, healthcare organizations can exchange far more patient information electronically today than they could just a few years ago. Much of that access has been driven by FHIR-based APIs, information blocking regulations, and expanded participation in health information networks, which together make it technically possible to retrieve patient records across care settings.
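To make “technically possible” concrete: a FHIR R4 read is, at bottom, an authenticated HTTP request. The sketch below queries a hypothetical endpoint for a patient’s active medication orders; real deployments layer OAuth scopes, consent checks, and audit logging on top.

```python
import requests

# Hypothetical FHIR R4 endpoint and token; real systems use SMART on FHIR
# OAuth flows rather than a hand-pasted bearer token.
FHIR_BASE = "https://fhir.example-hospital.org/R4"

resp = requests.get(
    f"{FHIR_BASE}/MedicationRequest",
    params={"patient": "Patient/12345", "status": "active"},
    headers={"Accept": "application/fhir+json",
             "Authorization": "Bearer <token>"},
    timeout=10,
)
resp.raise_for_status()
bundle = resp.json()  # a FHIR Bundle resource

for entry in bundle.get("entry", []):
    med = entry["resource"]
    # medicationCodeableConcept carries the drug name/coding when inlined
    name = med.get("medicationCodeableConcept", {}).get("text", "unnamed")
    print(name, "-", med.get("authoredOn", "n/a"))
```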
Yet that data rarely shows up where it actually matters: inside workflows, in real time, or in ways that support clinical and operational decisions.
The barriers aren’t technical. They’re organizational.
Compliance rules are complex and often interpreted conservatively. Workflows aren’t designed to absorb external data without adding clicks. And perhaps most importantly, there’s a persistent fear of liability: if a system can see more data, does it also become responsible for acting on everything it sees?
As a result, many organizations avoid enabling capabilities that are objectively better than the status quo because they aren’t perfect. The panel’s conclusion was blunt: the technology has moved ahead of governance, risk models, and workflow design.
Mental Health Is the Most Promising, and Riskiest, AI Frontier
Mental health emerged as a space where AI adoption is accelerating fastest, driven by severe clinician shortages and the conversational nature of therapy.
But it’s also where failure modes are most dangerous. AI systems can miss critical cues, overreact to normal stress, or reinforce harmful thought patterns because they tend to be agreeable and affirming.
The consensus was clear: AI should support triage and prioritization, helping separate low-risk cases from those needing urgent human intervention. It should not operate as a standalone therapist.
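One way to picture that boundary is as routing logic. The sketch below is purely illustrative: the keyword list, threshold, and risk score are invented, and a real system would rely on validated screening instruments and clinically reviewed criteria. The design point is that anything uncertain or high-risk escalates to a human.

```python
from enum import Enum

class Route(Enum):
    URGENT_HUMAN = "escalate to on-call clinician now"
    HUMAN_REVIEW = "queue for human review"
    SELF_GUIDED = "offer self-guided resources, with easy opt-out to a human"

# Illustrative only: not a validated screening instrument.
CRISIS_TERMS = {"suicide", "self-harm", "hurt myself"}

def triage(message: str, model_risk: float) -> Route:
    text = message.lower()
    # Hard safety gate: crisis language always goes to a human immediately.
    if any(term in text for term in CRISIS_TERMS):
        return Route.URGENT_HUMAN
    # Elevated or uncertain model risk defaults to human review,
    # never to an AI acting as a standalone therapist.
    if model_risk >= 0.3:
        return Route.HUMAN_REVIEW
    return Route.SELF_GUIDED
```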
Trust, Liability, and Policy Will Decide the Pace of Change
Across every topic, trust kept resurfacing: trust from providers, patients, and regulators.
Healthcare organizations face asymmetric risk: a single data breach or missed signal can outweigh years of incremental improvement. Patients, meanwhile, are often asked to opt into systems they don’t fully understand.
Without clearer policy frameworks and shared liability models, adoption will remain uneven, regardless of technical capability.
AI Isn’t the Storm, It’s the Spotlight
What this roundtable made clear is that AI isn’t disrupting healthcare in isolation. It’s illuminating problems the system has normalized for decades: administrative overload, workforce scarcity, fragmented data, and slow feedback loops.
Waiting for perfect AI will only prolong those issues. The real opportunity lies in adopting tools that make healthcare better than it is today, while continuing to refine governance, trust, and oversight.
A Practical Example: Applying AI to Post-Event Learning
One theme that came up repeatedly in the roundtable was that the most immediate value of AI isn’t replacing clinical decisions, it’s simplifying some of the everyday tasks that eat into a clinician’s “pajama time”.
In healthcare, one such challenge is especially visible after critical events. Teams often piece together timelines from multiple systems, pull communication logs, document what happened for quality review, and extract lessons for future response, all manually.
At OnPage, our AI-powered event reporting is designed to address exactly that type of administrative burden. It automatically analyzes messages, notes, acknowledgements, and timestamps from an incident and generates a structured timeline aligned to existing review templates approved by the healthcare organization. Instead of spending hours reconstructing what happened, administrators can focus on process improvement and training.
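As a simplified illustration of that kind of reconstruction (a toy sketch, not OnPage’s actual implementation), merging messages and acknowledgements into one ordered timeline might look like this:

```python
from datetime import datetime

# Hypothetical event records, as they might arrive from separate systems.
messages = [
    {"ts": "2024-05-01T02:14:09Z", "who": "paging", "text": "Code Blue alert sent"},
    {"ts": "2024-05-01T02:16:41Z", "who": "Dr. Lee", "text": "En route to ICU"},
]
acks = [
    {"ts": "2024-05-01T02:14:55Z", "who": "Dr. Lee", "text": "Alert acknowledged"},
]

def parse_ts(raw: str) -> datetime:
    # Normalize ISO-8601 timestamps (the trailing Z means UTC).
    return datetime.fromisoformat(raw.replace("Z", "+00:00"))

# Merge both sources into a single chronologically ordered timeline.
timeline = sorted(
    ({"at": parse_ts(e["ts"]), **e} for e in messages + acks),
    key=lambda e: e["at"],
)

for event in timeline:
    print(f'{event["at"]:%H:%M:%S}  {event["who"]:<8}  {event["text"]}')
```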
It’s a small but practical example of the broader point the panel made: some of the most impactful AI use cases are the ones that reduce cognitive and documentation load, turning data into something teams can actually learn from.