A 60-year-old man was hospitalized with severe psychiatric and neurological symptoms after following a diet plan he derived from ChatGPT, highlighting growing concerns over the misuse of artificial intelligence for medical guidance.
The case, recently documented in Annals of Internal Medicine: Clinical Cases, a journal of the American College of Physicians, revealed that the man, whose identity has not been disclosed, sought ways to eliminate sodium chloride (commonly known as table salt) from his meals after reading about its potential health risks. Instead of consulting a medical professional, he turned to ChatGPT for advice. The AI chatbot suggested replacing salt with sodium bromide, a compound that visually resembles table salt but is primarily used in industrial cleaning products and certain medications.
Toxic Swap Leads to Severe Health Crisis
For three months, the man replaced table salt with sodium bromide purchased online, while also experimenting with other dietary restrictions, including distilling his own drinking water. Over time, his health deteriorated significantly.
By the time he was admitted to the hospital, he exhibited alarming symptoms: extreme paranoia, visual and auditory hallucinations, excessive thirst, and impaired motor coordination. Doctors were surprised by the sudden onset of psychiatric issues, particularly since he had no prior history of mental illness or significant medical conditions.
Within the first 24 hours of hospitalization, his condition worsened. Medical teams treated him with intravenous fluids, electrolyte replacement, and antipsychotic medication. At one point, he attempted to escape the hospital, prompting his transfer to an inpatient psychiatric unit. He remained under medical supervision for three weeks before his condition stabilized enough for discharge.
Understanding Sodium Bromide’s Risks
Sodium bromide closely resembles table salt in appearance, but its properties and uses are very different. Though it was historically used in medications as a sedative, consuming it in significant amounts over time can cause “bromism,” a condition marked by neurological, psychiatric, and dermatological disturbances. Symptoms range from confusion and paranoia to skin rashes and coordination problems. The man’s three-month exposure to the compound likely triggered the severe psychiatric episode that led to his hospitalization.
AI-Generated Health Advice Under Scrutiny
The incident raises broader concerns about relying on AI chatbots for health and nutrition advice. While AI systems like ChatGPT can provide general information, they are not designed to substitute professional medical expertise. In fact, OpenAI, the developer of ChatGPT, explicitly warns users in its terms of service that the chatbot’s output “may not always be accurate” and should not be used as a “sole source of truth” or for “diagnosis or treatment of any health condition.”
Medical experts reviewing the case cautioned that the man’s experience underscores the risks of treating AI-generated suggestions as authoritative. “It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation,” the journal report emphasized.
A Wake-Up Call on Responsible AI Use
The incident serves as a stark reminder of the potential dangers when individuals bypass professional medical guidance in favor of online or AI-driven alternatives. Although the man in this case had a background in nutrition studies, his decision to rely on ChatGPT without expert verification nearly cost him his life.
Doctors stress that while AI can be a helpful tool for learning or general awareness, it cannot replace trained medical professionals. Proper diagnosis and treatment require clinical judgment, medical history, and personalized care—factors beyond the capability of a chatbot.
Moving Forward
The man’s recovery highlights both the effectiveness of prompt medical intervention and the importance of caution in adopting health recommendations from non-medical sources. As AI continues to expand into everyday use, experts argue for stronger safeguards, clearer warnings, and increased public awareness about its limitations.
For now, medical authorities advise that AI-generated content should only be treated as supplementary information, not as medical advice. Anyone considering dietary or lifestyle changes is urged to consult licensed healthcare providers before making potentially dangerous decisions.