Study Warns Against Relying on ChatGPT for Medical Advice, Citing Health Risks

It was long common advice not to Google disease symptoms because of the risk of misinformation. Similar concerns are now being raised about ChatGPT, the AI chatbot developed by OpenAI that has become a popular way to get answers to everyday questions. Researchers have found that the free version of ChatGPT may give inaccurate or incomplete responses to medication-related queries, or no response at all, posing a potential risk to patients who rely on the chatbot for medical guidance.

According to CNBC, the researchers asked ChatGPT for references so they could verify the accuracy of its responses. ChatGPT provided incorrect or incomplete answers to almost three-quarters of the drug-related questions. Moreover, when asked for references, it supplied them in only eight responses, and those references cited non-existent sources.

The study highlighted one notable example: ChatGPT wrongly claimed that there were no reported interactions between Pfizer’s Paxlovid and the blood pressure-lowering drug verapamil. In reality, taking these medications together can lower blood pressure excessively, putting patients at risk.

Based on these findings, the study urges caution among both patients and healthcare professionals who consider using ChatGPT for drug-related information. Lead author Sarah Grossman, an associate professor of pharmacy practice at LIU, recommends verifying any of the chatbot’s responses against reliable sources. “Healthcare professionals and patients should exercise caution when relying on ChatGPT as a reliable source of drug-related information,” Grossman advised.

