
AI Tools Spread False Information as Disinfo Exploits Adaptability

AI tools are amplifying disinformation. Russian networks are exploiting their adaptability to spread false claims, particularly in political contexts.

[Image: people seated at a table working on laptops, surrounded by displays showing various advertisements]


AI models are increasingly drawing from a 'contaminated online information ecosystem', deliberately fed with disinformation by malicious actors, particularly Russian propaganda networks. This has led to a worrying rise in the spread of false information by leading AI tools.

A study has revealed that top AI models are now spreading false information about twice as often as a year ago. Inflection and Perplexity had the worst results, with false information rates of 56.67 percent and 46.67 percent respectively. Microsoft's Copilot proved notably susceptible, citing social media posts from the Pravda propaganda network on the Russian platform VK as sources. Six out of ten models were found to repeat false claims originating from the Russian influence operation Storm-1516.

ChatGPT and Meta's AI spread false claims in 40 percent of cases, while Copilot and Mistral did so in 36.67 percent of cases. The rate at which chatbots refuse to answer questions has also fallen, from 31 percent in August 2024 to zero in August 2025. As a result, the top ten generative AI tools now spread false claims on current news topics in more than a third of cases (35 percent). Russian disinformation networks are exploiting this new responsiveness of chatbots.

Last year, NewsGuard identified 966 AI-generated news websites in 16 languages that mimic legitimate media outlets and regularly spread false claims. These networks, including those linked to state-backed media such as RT and Sputnik, deliberately manipulate AI systems by producing and mass-distributing disinformation content, causing AI chatbots to propagate falsehoods, especially in political contexts related to Eastern Europe.

The increasing spread of false information by AI models is a cause for concern. As AI tools become more responsive and adaptable, malicious actors are exploiting their weaknesses to spread disinformation. It is crucial for AI developers and users to be aware of these vulnerabilities and take steps to mitigate them.
