AI Hallucinations Impact Medical Summaries
Hallucinations are still a major issue with Gen AI. A recent study by the University of Massachusetts Amherst found frequent hallucinations in medical record summaries generated by frontier models like GPT-4o and Llama-3, raising significant concerns about their reliability in healthcare settings.
Key highlights from the study:
• Analyzed 100 medical summaries (50 each from GPT-4o and Llama-3), finding hallucinations in "almost all of the summaries," according to a statement the researchers shared with MedCity News.
• GPT-4o summaries contained 327 medical event inconsistencies, 114 instances of incorrect reasoning, and 3 chronological inconsistencies.
• Llama-3 summaries, which were shorter and less comprehensive, had 271 medical event inconsistencies, 53 instances of incorrect reasoning, and 1 chronological inconsistency.
• The most frequent hallucinations were related to symptoms, diagnoses, and medication instructions.
• GPT-4o tended to produce longer summaries with more instances of incorrect reasoning compared to Llama-3.
• Researchers emphasized the potential dangers of hallucinations in healthcare settings, such as misdiagnosis, prescription of the wrong medication, or other inappropriate treatment.
• An extraction-based system (Hypercube) and an LLM-based system (using GPT-4o) were explored for automated hallucination detection, each with its own strengths and limitations (a simplified sketch of the LLM-based approach follows this list).
• The study highlights the need for better methods and frameworks to detect and categorize AI hallucinations in the healthcare industry.
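For readers curious what an LLM-based hallucination check might look like in practice, here is a minimal, hypothetical sketch: it asks GPT-4o (via the OpenAI Python client) to compare a generated summary against its source record and return claims it judges unsupported, grouped into the study's three error categories. The prompt wording, JSON schema, and function name are illustrative assumptions, not the researchers' actual implementation.

```python
# Hypothetical sketch of an LLM-based hallucination check: ask GPT-4o to compare
# a generated summary against the source medical record and flag unsupported claims.
# The prompt, JSON format, and category labels are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def flag_unsupported_claims(source_record: str, summary: str) -> list[dict]:
    """Return summary claims the model judges unsupported by the source record."""
    prompt = (
        "You are auditing a medical record summary for hallucinations.\n\n"
        f"Source record:\n{source_record}\n\n"
        f"Summary:\n{summary}\n\n"
        "List every claim in the summary that is not supported by the source record. "
        "Respond with JSON in the form "
        '{"unsupported_claims": [{"claim": "...", "category": '
        '"medical_event" | "incorrect_reasoning" | "chronological"}]}.'
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # request machine-readable output
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)["unsupported_claims"]
```

Any checker of this kind would itself need validation, since the auditing model can also hallucinate, which is part of why the researchers note that both approaches have limitations.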
This persistence of hallucinations, and the critical need to review and verify AI output, is why we always emphasize verification in our training and in resources like our EVERY framework and Student Guide for AI Use.
This research on AI hallucinations in medical summaries gives educators a powerful real-world example of why these verification steps matter, and it underscores the need for critical evaluation of AI output in education and beyond.
Read more about AI hallucinations in this MedCity News article.