
Heed the FTC’s Warnings on Your Use of Healthcare AI


I’m not sure how often data scientists and marcom people find themselves in the same room at a hospital, health system or medical practice. But if you’re looking for a reason to invite both to the same Zoom meeting, the Federal Trade Commission just gave you one.

In April, the FTC shared its thoughts on how to protect consumers amid the increasing use of technologies powered by artificial intelligence, machine learning and algorithms. The agency’s thoughts came in the form of a post on the FTC’s blog written by Andrew Smith, director of the FTC’s Bureau of Consumer Protection.

You can read his blog post here.

The post isn’t specific to healthcare, but it clearly applies. In fact, the post opens with a reference to the study published last October in Science that found racial bias in an algorithm used for population health management.

The FTC’s detailed guidance and warnings to businesses that use AI-enabled tech to make decisions about their customers fall under five general recommendations. In healthcare, those customers are patients. Below are the recommendations, each paired with a potential landmine I chose from the detailed guidance under it:

✓ Be transparent

Warning: “Secretly collecting audio or visual data – or any sensitive data – to feed an algorithm could also give rise to an FTC action.”

✓ Explain your decision to the consumer

Warning: “If you are using AI to make decisions about consumers in any context, consider how you would explain your decision to your customer if asked.”

✓ Ensure that your decisions are fair

Warning: “You can save yourself a lot of problems by rigorously testing your algorithm, both before you use it and periodically afterwards, to make sure it doesn’t create a disparate impact on a protected class.” (A minimal sketch of what such a test can look like follows this list.)

✓ Ensure that your data and models are robust and empirically sound

Warning: “If you provide data about your customers to others for use in automated decision-making, you may have obligations to ensure that the data is accurate.”

✓ Hold yourself accountable for compliance, ethics, fairness and nondiscrimination

Warning: “If you’re in the business of developing AI to sell to other businesses, think about how these tools could be abused and whether access controls and other technologies can prevent the abuse.”
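
The third recommendation is the most concrete, so it’s worth seeing what acting on it can look like. Below is a minimal sketch, in Python, of one common disparate-impact check: comparing favorable-outcome rates between a protected group and a reference group. The data, column names and the 0.8 threshold (the “four-fifths rule” borrowed from employment law) are illustrative assumptions, not FTC requirements.

```python
import pandas as pd

def disparate_impact_ratio(df, group_col, outcome_col, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group.
    Values well below 1.0 suggest the algorithm favors the reference group;
    0.8 is a common flag threshold (the "four-fifths rule"), used here
    purely as an illustration."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates[protected] / rates[reference]

# Hypothetical audit data: one row per patient, with the algorithm's
# decision coded 1 = favorable (e.g., referred into a care-management program).
decisions = pd.DataFrame({
    "race":     ["Black", "White", "Black", "White", "White", "Black"],
    "referred": [0, 1, 1, 1, 1, 0],
})

ratio = disparate_impact_ratio(decisions, "race", "referred",
                               protected="Black", reference="White")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: possible disparate impact on a protected class.")
```

A real audit would repeat a check like this for every protected class in the data, before the model is deployed and periodically afterwards, which is exactly the cadence the FTC recommends.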

Wherever the FTC uses the word “customer” in this post, you can swap in “patient,” “member” or “beneficiary,” and it all makes perfect sense. Healthcare AI vendors are partnering with providers and payers to develop AI models for everything from clinical decision support to revenue cycle management, all using data supplied by those providers and payers to feed their algorithms.

The FTC’s guidance should be incorporated into the data governance protocols of providers and payers and be part of the marketing plans of healthcare AI vendors hawking their latest technologies. Given AI’s huge opportunity to transform healthcare financing and delivery, stepping on any of these landmines laid out by the FTC would be disastrous.

To learn more about this topic, please read “How Can Healthcare Avoid Screwing Up AI’s Potential?” on 4sighthealth.com.

Stay home, stay safe, stay alive.

Thanks for reading.

About the 4sight Health Author
David Burda, News Editor & Columnist

Dave is 4sight Health’s biggest news junkie, resident journalist and healthcare historian. He began covering healthcare in 1983 and hasn’t stopped since. Dave writes his own column, “Burda on Health,” for us, contributes to our weekly blog and manages our weekly e-newsletter, 4sight Friday. Dave believes that healthcare is a business like any other business, and customers—patients—are king. If you do what’s right for patients, good business results will follow. Follow Burda on Twitter @DavidRBurda and on LinkedIn.