A new tool identifies harmful nutritional misinformation and evaluates potential risks

A new tool that not only identifies diet and nutrition misinformation online but also assesses the risk of content for potential harm has been developed by a UCL research team.

Unlike existing tools, which offer a binary assessment of whether content is ‘true’ or ‘false’, this first-of-its-kind tool addresses misinformation that is not blatantly false but is still dangerously misleading, especially for vulnerable groups.

The tool’s developers identified that ‘true’ or ‘false’ assessments fail to capture the cumulative and contextual ways in which misleading health information can influence behavior and decision-making.

The spread of health misinformation online presents a major public health threat, according to the WHO. From restrictive diets and extreme fasting to the unsafe use of dietary supplements (estimated to account for 20% of drug-induced liver injury in the US alone), misinformation can have disastrous, sometimes fatal, consequences.

“When it comes to diet and nutrition, misinformation often operates through selective framing that masks potential health risks. Harmful misleading content tends to fly under the radar of fact-checkers and escape critical oversight until high-profile cases make headlines.”

Alex Ruani, lead author and developer, UCL Institute of Education

The tool, called the Diet-Nutrition Misinformation Risk Assessment Tool (Diet-MisRAT), is a rule-based content analysis model that adapts the World Health Organization (WHO) approach to assessing hazardous exposures in physical environments to digital information environments. It treats online content as the ‘medium’ and misleading properties as ‘risk agents’ known to increase recipients’ vulnerability, and it ranks material as green, amber or red according to a weighted misinformation risk score.

In this framework, the potential risk of harm depends on the content, the context and how likely the recipient is to be misled. By expanding the definition of misinformation beyond false facts, this tool will help policymakers, digital platforms and regulators implement safeguards, prioritize their responses and take proportionate action when faced with harmful misleading content.
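To illustrate how a weighted risk score with a green/amber/red ranking might work in practice, here is a minimal Python sketch. The indicator names, weights and thresholds below are invented for illustration; they are assumptions, not the published Diet-MisRAT model.

```python
# Illustrative sketch of a weighted misinformation risk score with a
# traffic-light classification, in the spirit of Diet-MisRAT.
# All indicator names, weights and thresholds are assumed, not published.

# Each indicator is scored 0.0 (absent) to 1.0 (strongly present).
WEIGHTS = {
    "inaccuracy": 0.35,            # factually wrong or unsupported claims
    "dangerous_omission": 0.30,    # missing safety-critical context
    "manipulative_framing": 0.20,  # e.g. false safety comparisons
    "prominence": 0.15,            # reach and visibility of the content
}

def risk_score(indicators: dict) -> float:
    """Weighted sum of indicator scores, yielding a value in 0..1."""
    return sum(WEIGHTS[k] * indicators.get(k, 0.0) for k in WEIGHTS)

def traffic_light(score: float) -> str:
    """Map a risk score to a green/amber/red tier (thresholds assumed)."""
    if score >= 0.6:
        return "red"
    if score >= 0.3:
        return "amber"
    return "green"

# Content that falsely frames a dangerous alternative as safe and omits
# known risks scores high on several indicators at once.
example = {
    "inaccuracy": 1.0,
    "dangerous_omission": 1.0,
    "manipulative_framing": 1.0,
    "prominence": 0.5,
}
print(traffic_light(risk_score(example)))  # red
```

A weighted sum keeps the assessment proportionate: several mildly misleading properties can add up to an amber rating, while content combining false safety framing, dangerous omission and wide reach lands in red.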

Diet-MisRAT was tested and calibrated through five rounds of validation, including against the judgments of nearly 60 specialists in dietetics, nutrition and public health, and was shown to deliver reliable assessments. This process also identified the core characteristics of misinformation (inaccuracy, dangerous omission and manipulative framing) and the indicators that increase potential risk, such as the ways and settings in which content is consumed and its prominence.

For example, when evaluating content containing claims such as ‘it’s safer to give your child a high dose of vitamin A than the MMR vaccine’, the tool classifies it as critical risk because the claim uses false safety framing, omits the risks of excessive vitamin A doses and undermines public health guidance, increasing the likelihood of harmful real-world decisions.

Co-author Professor Anastasia Kalea (UCL Division of Medicine) said: “It is important to include specialist skills when assessing the risk of misinformation. Our tool was calibrated and validated by feedback from almost 60 subject-matter experts. This helps ensure that assessments of potential harm reflect appropriate professional judgment.”

By isolating features of misleadingness and relating them to potential recipient outcomes, researchers can draw a picture of what makes content high-risk and what properties determine the scale of its impact.

Examples of harm related to the spread of online health misinformation include 2025 reports of cholesterol-related skin lesions diagnosed in men following a carnivore diet, a trend disproportionately amplified by social media algorithms, especially in ‘manosphere’ communities.

Another example is the reported case where a person was hospitalized for several weeks after following the incorrect AI-generated advice to replace sodium chloride (salt) with sodium bromide, a substance that has no dietary role and is toxic if consumed regularly. Online misinformation has also been linked to decisions to forgo life-saving cancer treatments in favor of unproven dietary alternatives.

This study contributes to the ongoing discussion about how digital platforms, public health authorities and policy makers should respond to the influence of misleading health advice online, especially in social media, search summaries and generative AI.

Ruani said: “In public health we assess exposure to risk factors. We believe misleading health information should be treated in the same way. Some misinformation can lead to serious harm, so the mitigation strategy must be proportionate to the level of risk. The more severe the potential harm, the stronger the response must be.

“When AI chatbots speak confidently, users can consider their advice to be safe. If we can properly measure how misleading a piece of advice is and how much harm it can cause, we can build strong safeguards into AI models and agents before deployment instead of reacting after the harm occurs.”

Co-author Professor Michael Reiss (UCL Institute of Education) said: “By spelling out typical patterns that distort diet, nutrition or supplement information, the risk assessment criteria of our tool can be taught and applied in education and professional training. This will help students understand not only why something is wrong, but how and why it can skew judgment, equipping them to recognize and challenge it.”

Source:

University College London

Journal Reference:

Ruani, A., et al. (2026). Development and validation of a tool to detect the risk of misinformation in diet, nutrition, and health content (Diet-MisRAT). Scientific Reports. DOI: 10.1038/s41598-026-40534-2. https://www.nature.com/articles/s41598-026-40534-2
