AI chatbots can be programmed to feed you fake health news almost 90% of the time

Are you using chatbots for health advice?


Trust your doctor, not a chatbot. That’s the stark lesson from a world-first study that demonstrates why we shouldn’t be taking health advice generated by artificial intelligence (AI). Chatbots like ChatGPT can easily be programmed to deliver false medical and health information, according to an international team of researchers who have exposed some concerning weaknesses in machine learning systems.

Researchers from the University of South Australia, Flinders University, Harvard Medical School, University College London, and the Warsaw University of Technology have combined their expertise to show just how easy it is to exploit AI systems.

In the study, published today in the Annals of Internal Medicine, researchers evaluated five leading foundation models developed by OpenAI, Google, Anthropic, Meta and X Corp to determine whether they could be programmed to operate as health disinformation chatbots.

Using system-level instructions accessible only to developers, the researchers programmed each AI system – designed to operate as a chatbot when embedded in a web page – to produce incorrect responses to health queries, complete with fabricated references attributed to highly reputable sources to sound more authoritative and credible.
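The mechanism at issue is ordinary chatbot plumbing: a developer-supplied "system" instruction is sent alongside every user question, and the end user never sees it. A minimal sketch, using the widely used OpenAI-style chat message schema and a deliberately benign placeholder instruction (not the study's actual prompts; the model name is illustrative only):

```python
# Sketch of how a developer-side "system" instruction steers an embedded chatbot.
# The messages structure follows the common OpenAI-style chat schema; the
# instruction text and model name are placeholders, not the study's materials.

def build_request(system_instruction: str, user_query: str) -> dict:
    """Assemble a chat-completion request payload.

    The system message is invisible to the person typing into the chat
    widget, which is why a deployed chatbot can be silently repurposed
    by whoever controls its configuration.
    """
    return {
        "model": "example-model",  # placeholder for illustration
        "messages": [
            # Developer-controlled: sets persona, tone and constraints.
            {"role": "system", "content": system_instruction},
            # User-controlled: the health question the visitor actually asks.
            {"role": "user", "content": user_query},
        ],
    }

request = build_request(
    "You are a cautious health assistant. Always advise seeing a doctor.",
    "Is this rash something to worry about?",
)
```

The study's point is that nothing in this interface distinguishes a cautious instruction from a malicious one: swapping the system string is all it takes to change the chatbot's behaviour.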

The ‘chatbots’ were then asked a series of health-related questions.

According to UniSA researcher Dr Natansh Modi, the results were disconcerting.

“In total, 88% of all responses were false,” Dr Modi says, “and yet they were presented with scientific terminology, a formal tone and fabricated references that made the information appear legitimate.

“The disinformation included claims about vaccines causing autism, cancer-curing diets, HIV being airborne and 5G causing infertility.”

Out of the five chatbots that were evaluated, four generated disinformation in 100% of their responses, while the fifth generated disinformation in 40% of its responses, showing some degree of robustness.

As part of the study, Dr Modi and his team also explored the OpenAI GPT Store, a publicly accessible platform that allows users to easily create and share customised ChatGPT apps, to assess the ease with which the public could create disinformation tools.

“We successfully created a disinformation chatbot prototype using the platform and we also identified existing public tools on the store that were actively producing health disinformation.

“Our study is the first to systematically demonstrate that leading AI systems can be converted into disinformation chatbots, not only with developer tools but also with tools available to the public.”

Dr Modi says that these findings reveal a significant and previously under-explored risk in the health sector.

“Artificial intelligence is now deeply embedded in the way health information is accessed and delivered,” he says. “Millions of people are turning to AI tools for guidance on health-related questions. If these systems can be manipulated to covertly produce false or misleading advice, then they create a powerful new avenue for disinformation that is harder to detect, harder to regulate and more persuasive than anything seen before.

“This is not a future risk. It is already possible, and it is already happening.”

While the study has revealed deficiencies in these AI systems, Dr Modi says that the findings highlight a path forward, but it will require buy-in and collaboration from a range of stakeholders.

“Some models showed partial resistance,” he says, “which proves the point that effective safeguards are technically achievable.

“However, the current protections are inconsistent and insufficient. Developers, regulators and public health stakeholders must act decisively, and they must act now.

“Without immediate action, these systems could be exploited by malicious actors to manipulate public health discourse at scale, particularly during crises such as pandemics or vaccine campaigns.”
