ChatGPT in Medicine: STAT Answers Readers’ Burning Questions About AI

STAT News / By Lizzy Lawrence, Mohana Ravindranath and Brittany Trang
 
Artificial intelligence is often described as a black box: an unknowable, mysterious force operating inside the high-stakes world of health care. If experts sometimes struggle to wrap their heads around it, patients and the general public have almost no hope of grasping it.
 
While AI-powered tools like ChatGPT are swiftly gaining steam in medicine, patients rarely have any say — or even any insight — into how these powerful technologies are being used in their own care.
 
To get a handle on the most pressing concerns among patients, STAT asked our readers what they most wanted to know about generative AI’s use in medicine. Their submissions ranged from fundamental questions about how the technology works to concerns about bias and error creeping further into our health systems.
 
It’s clear that the potential of large language models, which are trained on massive amounts of data and can generate answers to myriad prompts, is vast. It goes beyond ChatGPT and the ability for humans and AI to converse. AI tools can help doctors predict medical harm on a broader scale, leading to better patient outcomes. They’re already being used for medical note-taking and for analyzing X-rays and mammograms. Health tech companies are eager to tout their AI-powered algorithms at every turn.
 
But the potential for harm is equally vast as long as AI tools go unregulated. Inaccurate, biased training data deepen health disparities. Improperly vetted algorithms deliver incorrect information about patients in critical condition. And insurers use AI algorithms to cut off care before patients have fully recovered.
 
When it comes to generative artificial intelligence, there are certainly more questions than answers right now. STAT asked experts in the field to tackle some of our readers’ thoughtful questions, revealing the good, the bad, and the ugly sides of AI.
 
As a patient, how can I best avoid any product, service or company using generative AI? I want absolutely nothing to do with it. Is my quest to avoid it hopeless? 
 
Experts agreed that avoiding generative AI entirely would be very, very difficult. At the moment, there are no laws governing how it’s used, nor are there explicit regulations forcing health companies to disclose that they’re using it.
 
“Without being too alarmist, the window where everyone has the ability to completely avoid this technology is likely closing,” John Kirchenbauer, a Ph.D. student researching machine learning and natural language processing at the University of Maryland, told STAT. Companies are already exploring using generative AI to handle simple customer service requests or frequently asked questions, and health providers are likely looking to the technology to automate some communication with patients, said Cobun Zweifel-Keegan, managing director of the International Association of Privacy Professionals.
 
But there are steps patients can take to at least ensure they’re informed when providers or insurers are using it.
 
Despite a lack of clear limits on the use of generative AI, regulatory agencies like the Federal Trade Commission “will not look kindly if patients are surprised by the use of automated systems,” so providers will likely start proactively disclosing if they’re incorporating generative AI into their messaging systems, Zweifel-Keegan said.
 
“If you have concerns about generative AI, look out for these disclosures and always feel empowered to ask questions of your provider,” Zweifel-Keegan said, adding that patients can report any concerning practices to their state attorney general, the FTC and the Department of Health and Human Services.
