AI Health Chatbot Risks: What Patients Need to Know in 2026
Feb 8, 2026
Misuse of AI chatbots like ChatGPT and Gemini has been named the #1 health technology hazard of 2026 in ECRI's annual safety report. While these tools can provide quick health information, they may also generate confident-sounding but inaccurate medical advice that could put patients at risk.
Why AI Health Chatbots Are the #1 Health Tech Hazard of 2026
The nonprofit patient safety organization ECRI has named the misuse of AI chatbots as the top health technology hazard for 2026.¹ This marks the first time a widely available consumer technology has topped the annual list, which traditionally focuses on hospital equipment and clinical devices.
The concern centers on chatbots that rely on large language models (LLMs), including ChatGPT, Claude, Copilot, Gemini, and Grok. These tools produce human-like, expert-sounding responses to users' questions, yet they are neither regulated as medical devices nor validated for healthcare purposes.² Despite this, patients, clinicians, and other healthcare personnel increasingly turn to them for medical information.
ECRI experts emphasize that AI health chatbot risks stem from the tools' ability to appear authoritative while providing potentially dangerous guidance. The organization found examples of chatbots suggesting incorrect diagnoses, recommending unnecessary testing, promoting substandard medical supplies, and even inventing nonexistent body parts when asked medical questions.¹
For patients seeking health information, understanding these risks is essential. Unlike AI doctor tools designed specifically for clinical settings with regulatory oversight, general-purpose AI chatbots lack the safeguards necessary for reliable medical guidance.
Real Examples of AI Health Chatbot Mistakes
Research has documented numerous instances where AI chatbot health dangers have become apparent through testing and real-world use.
In ECRI's safety evaluation, one chatbot provided dangerous advice when asked whether it would be acceptable to place an electrosurgical return electrode over a patient's shoulder blade. The chatbot incorrectly stated that the placement was appropriate—advice that, if followed, would leave the patient at risk of burns.¹
A study from the Icahn School of Medicine at Mount Sinai revealed that widely used AI chatbots are highly vulnerable to repeating and elaborating on false medical information. Even a single made-up medical term could trigger a detailed, decisive response based entirely on fiction.³ The researchers found that the chatbots repeated and elaborated on fabricated diseases, lab values, and clinical signs in up to 83% of simulated cases when no safety measures were in place.³
Other documented AI health mistakes include:
Incorrect medication guidance: Chatbots have recommended inappropriate medications or dosages without considering patient-specific factors like allergies or drug interactions
Anatomical errors: AI systems have invented body parts or confused anatomical locations when answering medical questions
Dangerous treatment suggestions: Some chatbots have suggested harmful substances like sodium bromide for medical conditions without proper warnings
Misleading cancer advice: Chatbots have provided incomplete or inaccurate information about cancer treatments, including ivermectin recommendations not supported by medical evidence
These examples highlight why AI health misinformation poses a significant threat to patient safety. The broader context of AI in medicine shows that while artificial intelligence has legitimate applications in healthcare, consumer chatbots require careful scrutiny.
Why AI Chatbots Get Medical Information Wrong
Understanding why AI chatbots make medical errors requires knowing how these systems work. The phenomenon called "AI hallucination" explains many of the problems patients encounter.
What is AI hallucination?
AI hallucination occurs when chatbots generate information that sounds plausible but is factually incorrect. Unlike human experts, who acknowledge uncertainty, AI systems may fabricate details to provide what appears to be a helpful answer.⁴ These tools predict word patterns based on vast amounts of text data rather than truly understanding medical concepts or accessing verified medical databases.
Prioritizing helpfulness over accuracy
The Mount Sinai researchers found that under default settings, hallucination rates ranged from 50% to 82.7% across the six popular chatbots tested.³ The models not only accepted false information but often expanded on it, producing confident explanations for nonexistent conditions.
A separate analysis assessed chatbots' ability to cite references backing up their medical advice and found that 50% to 90% of responses were "not fully supported, and sometimes contradicted, by the sources they cite."²
Missing critical context
AI chatbots lack access to a patient's complete medical history, current medications, allergies, and the other essential factors physicians consider when providing medical guidance. This limitation means that even when a chatbot's general information is accurate, it may not apply to an individual patient's situation.
The broader relationship between doctors and computers points to the same lesson: technology works best when it supports, rather than replaces, professional medical judgment.
The Privacy Risks You Should Know About
Beyond accuracy concerns, AI health privacy risks present another significant challenge for patients using these tools.
Not HIPAA-protected
Public AI tools like ChatGPT, Gemini, and similar chatbots are not HIPAA-compliant and do not sign Business Associate Agreements (BAAs) with healthcare organizations.⁵ The legal protections required for protected health information (PHI) therefore do not apply: nothing obligates these services to safeguard the health details you share, and clinicians who enter patient information into them may be making an unauthorized disclosure under healthcare privacy laws.
Data retention and storage
ChatGPT and similar platforms record and store transcripts of conversations.⁶ Any information you enter into the chat, including personal health details, symptoms, or medical history, is logged and may be retained indefinitely. These services also automatically collect personal information from your device, including IP address, location, browser type, and session details.⁶
No patient-provider privilege
Unlike conversations with your doctor, information shared with AI chatbots does not enjoy patient-provider confidentiality protections. The data may be used to train future AI models, shared with third parties under the platform's terms of service, or potentially accessed in legal proceedings.
Inference risks
Even if you avoid sharing explicit health information, AI systems can often infer sensitive health details from seemingly innocuous data you provide.⁵ This makes it challenging to use these tools without exposing private medical information.
How to Protect Yourself When Using AI for Health
Despite the risks of using AI for medical advice, these tools can be used more safely when approached with appropriate caution. Here's a practical safety checklist:
Verification is essential
Never rely solely on AI chatbot advice for health decisions
Always verify information with your healthcare provider or trusted medical sources
Cross-reference AI responses with reputable medical websites like the CDC, NIH, or Mayo Clinic
Treat chatbot advice as a starting point for conversation with your doctor, not a final answer
Protect your privacy
Avoid entering identifying information (name, date of birth, location) into AI chatbots
Don't share specific medical records, test results, or medication lists
Use general terms rather than detailed personal health information when asking questions
Consider using a VPN or privacy-focused browser when accessing health chatbots, keeping in mind that neither prevents the service itself from storing what you type
Recognize the limitations
Understand that AI chatbots are not regulated medical devices
Remember these tools cannot examine you, order appropriate tests, or consider your complete medical history
Be skeptical of responses that sound overly confident or provide specific treatment recommendations
Watch for warning signs like invented medical terms, conflicting information, or advice that contradicts your doctor's guidance
When to seek immediate medical help
For emergency symptoms (chest pain, difficulty breathing, severe bleeding), call 911 or go to the emergency room—do not consult a chatbot
For urgent but non-emergency concerns, contact your healthcare provider or urgent care center
If a chatbot suggests a diagnosis or treatment that concerns you, discuss it with a medical professional before taking action
When AI Health Chatbots Can Be Helpful
While AI health chatbot risks are significant, these tools do have legitimate uses when approached appropriately.
Educational purposes
AI chatbots can help you learn about general health topics, understand medical terminology your doctor used, or research conditions in plain language. When used as an educational resource rather than a diagnostic tool, they may help patients become more informed about their health.
Preparing for medical appointments
Chatbots can assist in organizing your thoughts before seeing a doctor. You might use them to draft a list of symptoms to discuss or to understand what questions to ask about a new diagnosis. However, the final conversation should always happen with your healthcare provider.
Navigating the healthcare system
AI tools may help with non-medical questions like understanding insurance terms, finding appropriate specialists, or learning what to expect from common procedures. These administrative uses carry lower risks than seeking medical advice.
Research starting point
For patients researching a condition their doctor mentioned, AI chatbots can provide an overview that helps frame further reading from authoritative medical sources. The key is to verify the information and use it to facilitate, not replace, conversations with healthcare professionals.
Important caveats
Even these beneficial uses require awareness of AI health safety principles. The information provided may still contain errors, and privacy concerns apply regardless of how you use the tool. These benefits also depend on users understanding the technology's limitations and maintaining appropriate skepticism.
The evolving role of artificial intelligence in healthcare presents both opportunities and challenges. As these technologies develop, patient education about their appropriate use becomes increasingly important.
When to See a Doctor
While AI chatbots may provide general health information, certain situations always require professional medical attention:
You experience emergency symptoms such as chest pain, severe bleeding, difficulty breathing, sudden severe headache, or signs of stroke
You receive concerning or confusing advice from an AI chatbot about a health condition
You're considering starting, stopping, or changing any medication based on AI-generated information
You have symptoms that persist, worsen, or don't respond to over-the-counter treatments
You need a diagnosis, medical testing, or prescription medication
You have questions about your specific health situation, medical history, or current treatment plan
Remember that AI tools cannot perform physical examinations, order appropriate diagnostic tests, or consider your complete medical context the way a healthcare provider can.
Conclusion
The designation of AI health chatbots as the #1 health technology hazard of 2026 reflects the serious risks these tools pose when used inappropriately for medical decision-making. While AI chatbots can sound authoritative and helpful, research shows they frequently provide inaccurate information, hallucinate medical details, and lack the safeguards necessary for reliable healthcare guidance.
Understanding AI chatbot health dangers—from misinformation to privacy violations—empowers patients to make informed choices about when and how to use these technologies. The key is recognizing that AI chatbots are not substitutes for professional medical care. When used cautiously as educational tools and conversation starters with your doctor, they may have value. However, for any medical decisions, diagnosis, or treatment questions, consulting a qualified healthcare provider remains essential.
As AI technology continues to evolve, staying informed about its limitations and risks will help you protect your health and privacy while navigating the intersection of healthcare and artificial intelligence.
References
1. ECRI. (2026). Misuse of AI chatbots tops annual list of health technology hazards. https://home.ecri.org/blogs/ecri-news/misuse-of-ai-chatbots-tops-annual-list-of-health-technology-hazards
2. Association of Health Care Journalists. (2026). Misuse of AI chatbots in health care tops 2026 Health Tech Hazard Report. https://healthjournalism.org/blog/2026/02/misuse-of-ai-chatbots-in-health-care-tops-2026-health-tech-hazard-report/
3. Mount Sinai Health System. (2025). AI Chatbots Can Run With Medical Misinformation, Study Finds, Highlighting the Need for Stronger Safeguards. https://www.mountsinai.org/about/newsroom/2025/ai-chatbots-can-run-with-medical-misinformation-study-finds-highlighting-the-need-for-stronger-safeguards
4. JMIR Medical Informatics. (2024). Reference Hallucination Score for Medical Artificial Intelligence Chatbots: Development and Usability Study. https://medinform.jmir.org/2024/1/e54345
5. JAMA Network. (2023). AI Chatbots, Health Privacy, and Challenges to HIPAA Compliance. https://jamanetwork.com/journals/jama/fullarticle/2807170
6. TechTarget. Examining Health Data Privacy, HIPAA Compliance Risks of AI Chatbots. https://www.techtarget.com/healthtechsecurity/news/366594256/Examining-Health-Data-Privacy-HIPAA-Compliance-Risks-of-AI-Chatbots
Medical Disclaimer: This article is for informational purposes only and does not constitute medical advice. Always consult a qualified healthcare provider for diagnosis and treatment recommendations. The information presented here should not be used as a substitute for professional medical advice, diagnosis, or treatment. If you have concerns about your health, please seek immediate medical attention.