August 19, 2025

Burda on Healthcare: Making Sense of AI in Healthcare
I never knowingly use artificial intelligence (AI) to do any of my work. I shut off any embedded or default AI-powered assistant in any office software program I use. I intend to leave this mortal coil without ever asking AI to do anything for me. Ever. Whatever I leave behind, good or bad, it was all me.
With that bias stated, the hottest topic in healthcare other than the fallout from the Trump regime’s One Big Beautiful Bill Act is AI. What can it do? What can’t it do? Can we trust it? Should we not trust it?
In March, I wrote a column, “AI in Healthcare? I Don’t Know About That,” in which I suggested that the more the healthcare system adopts AI, the less consumers trust AI. That led to this special edition of our 4sight Health Roundup podcast, “When Will AI Adoption by Providers Meet AI Acceptance by Patients?” featuring Robert Pearl, M.D., former CEO of the Permanente Medical Group, and David W. Johnson, founder and CEO of 4sight Health. They agreed that healthcare adoption and consumer acceptance will converge within the next five years, if not sooner.
That conversation, no matter how insightful, hasn’t stopped health services researchers, think tanks, advocacy groups, consultants, academics and others from trying to figure out what’s going on in people’s heads regarding healthcare and AI. I offer up the following mixed messages as examples.
Is It Live or Is It Memorex?
That advertising slogan from 1972, featuring a recording of Ella Fitzgerald’s voice shattering a glass, came to mind after I read this study published last month in arXiv, a research-sharing platform sponsored by Cornell University. Three researchers from the University of British Columbia, the University of California at Berkeley and the Stanford School of Medicine looked at how often AI models attach disclaimers to their interpretations of medical images and their answers to medical questions. The disclaimers tell users like radiologists, medical specialists, other physicians and other clinicians that the AI interpretations aren’t “professionally vetted” or a “substitute for medical advice.”
The study pool, from 2022 through 2025, included 500 mammograms, 500 chest X-rays, 500 dermatology images and 500 medical questions. The researchers divided the AI models by large language models (LLM) and vision language models (VLM).
Here’s what they found: The percentage of LLM outputs with disclaimers dropped from 26.3% in 2022 to 0.97% in 2025. The percentage of VLM outputs with disclaimers dropped from 19.6% to 1.05%.
“Even highly accurate models are not a substitute for professional medical advice, and the absence of disclaimers may mislead users into overestimating the reliability or authority of AI-generated outputs,” the researchers warned.
Either the AI interpretations got so good that the developers no longer felt the need to tell users a machine read the images, or the interpretations are just as inaccurate as they were before and the developers dropped the disclaimers anyway, for reasons you can only imagine.
This article, “AI Companies Have Stopped Warning You That Their Chatbots Aren’t Doctors,” published in the MIT Technology Review, suggests some reasons why disclaimers are disappearing. It’s a good read.
AI Implementation Tells EHR Optimization to Hold Its Beer
Also last month, the Association of Medical Directors of Information Systems (AMDIS) and Witt Kieffer, the executive search firm, published their latest survey of physician informatics executives. This year’s survey results are based on responses from more than 160 of them.
Ninety-five percent of the physician informatics executives said AI, including AI tools and machine-learning algorithms, is the area in which their job responsibilities have expanded the most over the past two years. Furthermore, 77% cited AI implementations as one of their organization’s top informatics priorities this year. A distant second was EHR enhancements or optimizations. Addressing clinician burnout? Meh. Only 39% said it was a top priority.
When it comes to AI specifically, 86% said they were involved in implementations of vendor solutions, 82% said they were involved in AI strategy and 81% said they were involved in AI governance.
Do You Prefer an AI-Assisted or DIY Doctor?
I’m a self-taught handyman who dabbles in basic home improvement projects. I’ve learned the hard way, with rudimentary tools, from trades like painting, wallpaper hanging, electrical, plumbing, HVAC, carpentry, tile and masonry.
When I confront a complicated project, I usually call in an expert. Every time I do, I learn something. Not by watching them. But by asking questions about how they did it after they’re done. Invariably, I find out that they used some new, sophisticated tool or technique that I had never heard of, a tool or technique that made their job exponentially easier and the results exponentially better.
Does it make me envious? Absolutely. Does it make me question their competence as a tradeswoman or tradesman? Never. It’s quite the opposite. I am totally impressed that they took the time and interest to learn how to use a new tool or technique to do their job better for me.
That personal bias is central to the results of this study, which also appeared last month, in JAMA Network Open.
Researchers from the University of Wuerzburg in Germany, the University of Cambridge in England and Charité–Universitätsmedizin Berlin, also in Germany, wanted to know whether patients felt any different if their family physician used AI to help with administrative tasks, to diagnose illnesses and injuries or to treat them.
To find out, the researchers showed a representative sample of nearly 1,300 adults fictitious social media and billboard advertisements from family physicians. The researchers showed a control group of adults ads that made no mention of the doctor using AI. The researchers showed the rest of the adults ads that said the doctor used AI for one of the three purposes: administrative, diagnostic and therapeutic.
Then the researchers asked the adults to rate, on a five-point scale, the physicians’ competence, trustworthiness and empathy, as well as their own willingness to make an appointment with the physician. In each of the three areas (administrative, diagnostic and therapeutic), the adults rated the physicians who used AI as less competent, less trustworthy and less empathic, and they were less willing to make an appointment with them.
“Our results indicate that the public has certain reservations about the integration of AI in healthcare,” the researchers said. “Potential reasons for existing skepticism may include concerns that physicians rely too much on AI.”
Some of us still prefer doctors who graduated from the medical school of hard knocks and who prescribe antibiotics at the first sign of a fever. I prefer that my doctor, like my plumber, make the best decisions possible regarding all my leaks, and if AI helps him or her do that, I’m all for it.
Online? Yes. In-Person? No.
That doesn’t mean I trust all the medical information I read online. In fact, I don’t trust most information I read online, medical or otherwise. It’s the journalist in me. If your mom says she loves you, check it out. I want original, credible and verifiable sources for my information, and with the rise of AI, they’re getting harder and harder to find as you sift through all the AI-generated search responses.
That’s where I depart from most of my fellow citizens, according to the results of this survey conducted by the Annenberg Public Policy Center (APPC) at the University of Pennsylvania. The APPC also released the survey results in July.
The APPC asked a representative sample of more than 1,600 U.S. adults a series of questions about AI-generated health information available online. Here are some of the more interesting findings:
- 30% said AI-generated health information online gives them the answer they need “often” or “always.”
- 63% said the AI-generated health information they get online is “somewhat reliable” or “very reliable.”
- 50% said they were “not too comfortable” or “not at all comfortable” with doctors and other providers using AI tools rather than their own experience alone when making decisions about their care.
That’s strange, right? People will trust the medical information they get online from AI, but they won’t trust that information if their doctor gets it from AI. Yet it supports what the JAMA Network Open study found. It also provides a motive for AI developers to strip disclaimers from their medical products. And it explains why AMDIS physician informaticist members are so damn busy.
It all makes sense now. Or does it?