ChatGPT Health Risks: Study Reveals Emergency Detection Failures
A new evaluation of ChatGPT Health has raised major safety concerns after researchers found it frequently failed to recognise urgent medical emergencies. The independent study, published in Nature Medicine, reported that the system under-triaged more than half of the clinical scenarios tested, giving advice that could delay life-saving treatment.
The research team, led by Ashwin Ramaswamy, created sixty patient simulations ranging from minor illnesses to life-threatening conditions. Three physicians agreed on the correct urgency level for each case, and the researchers compared that consensus with the model's responses. The AI performed well in clear emergencies such as strokes, but it often underestimated risk in complex cases, including severe asthma attacks and diabetic crises.
Why Safety Gaps Matter
Experts also warned that ChatGPT Health struggled to detect suicidal ideation reliably. Small changes to scenario details, such as adding normal lab results, could remove safety warnings entirely, leaving users with reassuring advice in genuinely serious situations.
Calls for Stronger Oversight
Critics, including Alex Ruani, described the behaviour as dangerously inconsistent and misleading, warning that it may create false confidence at urgent moments. OpenAI said the findings may not reflect typical use but accepted the need for improvement. Policy experts are now calling for clear safety standards, independent audits and greater transparency, arguing that stronger safeguards are needed to protect users in sensitive medical situations.
Overall, the study highlights the risks of relying on AI for critical health decisions. People should seek professional care when symptoms feel severe or unusual. While technology can widen access to information, human judgement remains essential for safe and timely treatment. In emergencies, quick action can save lives, so delays matter greatly, and awareness and education remain important for anyone using digital health tools. Developers, meanwhile, must keep improving these systems to reduce errors and deliver safer outcomes for patients.