noteworthy
Spring 2026
“When it comes to medicine,” says Professor Mahed Maddah, “human judgment is needed as much as information.” Photographs by Michael J. Clarke
WebMD? That’s so 1998.
These days, ChatGPT and other artificial intelligence (AI) platforms are the go-to source for medical information for many people. And why not? With the right prompts, you can drill down on whatever ailment you’re curious about and get very specific and actionable responses.
Yet the picture is more complicated than it appears, according to Mahed Maddah, an assistant professor of information systems and operations management. Maddah—whose research interests include data quality, user-generated content, and health informatics—has lately turned his attention to AI and healthcare. In one recent paper, he and his coauthors investigated public perceptions of ChatGPT's value, focusing on both its practical benefits and the level of engagement and emotional impact it offers users.
Patients liked the convenience and cost-effectiveness of AI as a medical resource, and the feeling that they could almost have a conversation with the chatbot. And that may be a problem, Maddah says. While ChatGPT is designed to provide accessible, conversational responses based on available data, physicians are trained to deliver medically grounded advice—even when that advice involves difficult or unwelcome information.
“The high satisfaction that patients have for ChatGPT is not necessarily good news,” says Maddah. “When it comes to medicine, human judgment is needed as much as information.”
Still, he acknowledges that ChatGPT is a solid preliminary tool for seeking out health information, especially in nonemergency situations. "ChatGPT has the potential to transform health information-seeking behavior by offering both practical and emotional benefits," says Maddah. Given its growing importance, he urges developers to optimize its functionality and user experience to ensure that, first, it does no harm. —Ben Hall
