March 18, 2026
The Gray Area of Doctors, AI and Patient Safety
Can two things be true at the same time? A lot of people think so. For me, the jury is still out. I’m more of a black-and-white person. Gray areas scare me.
Last week gave us a great example of that in healthcare. More doctors are using artificial intelligence (AI) — and at the same time, AI poses the biggest safety risk to patients.
Here’s how it went down.
On March 9, ECRI, the patient-safety organization, released its latest annual ranking of the top 10 patient safety concerns. “Navigating the AI diagnostic dilemma” topped ECRI’s list for 2026. In other words, of all the possible safety risks facing patients this year, ECRI decided that the biggest risk of all was AI.
To prove its point, ECRI cited research in its 40-page report that showed that AI repeatedly misfires on accurately diagnosing patients. ECRI also cited research that showed that doctors are increasingly using AI for diagnostic purposes.
“Although AI has immense potential to improve clinical workflows and expand access to expertise, the rapidly growing use of AI in healthcare raises serious safety and governance challenges,” ECRI said.
Further: “Placing too much trust in an AI model to diagnose patients without factoring in clinician expertise can lead to misdiagnosis — the very problem AI was intended to solve.”
Three days later, on March 12, the American Medical Association (AMA) released the results of its latest annual Physician Survey of Augmented Intelligence. The AMA prefers calling AI “augmented intelligence” to signal that the technology should augment doctors, not replace them. Potato, potahto. Let’s call both AI for this blog post.
The 16-page report is based on a survey of about 1,700 physicians across eight broad medical specialties. Here are some of the more interesting findings:
- 72% of the respondents said they currently use one or more AI use cases in their clinical practices. That’s up significantly from 48% in 2024.
- The most common use case was summarizing medical research and standards of care, with 70% of the doctors saying they’re doing that now or plan to do it by the end of this year.
- 45% of the respondents said they are currently using AI to help them diagnose patients, or plan to by the end of the year.
- 74% of the respondents said AI will be “somewhat helpful” or “very helpful” in diagnosing patients. That was second only to improving work efficiency, cited by 78% of the doctors.
- Yet 88% said they were “mildly,” “somewhat” or “very” concerned that the use of AI will result in physicians losing their skills.
In sum, physicians’ use of AI will move from largely administrative to largely clinical because they trust the diagnostic capabilities of AI even though they think it will make them less smart.
If two things can be true at the same time — what ECRI said is true and what the AMA said is true — then a third thing can be true, too: me, as a healthcare consumer, being afraid.
If we want to build a better healthcare system, we need less gray and more black and white, please.
Thanks for reading.