The source of this article and featured image is Wired Security. The description and key facts were generated by the Codevision AI system.

Chatbots like ChatGPT, Gemini, DeepSeek, and Grok are being used to spread Russian state propaganda, according to new research. The study shows that these AI tools cite sources linked to Russian intelligence or pro-Kremlin narratives when asked about the war in Ukraine, raising concerns about whether large language models can filter out media sanctioned in the EU. Matt Burgess and Natasha Bernal of Wired Security report that the problem persisted across months of testing. This article is worth reading because it reveals how AI platforms are being exploited for disinformation, and readers will learn about the risks of relying on chatbots for real-time information.

Key facts

  • Chatbots like ChatGPT, Gemini, DeepSeek, and Grok are citing Russian state propaganda when asked about the Ukraine war.
  • Russian state media and disinformation networks are being referenced by AI tools, even though they are sanctioned by the EU.
  • The study found that almost one-fifth of responses from the tested chatbots cited Russian state-attributed sources.
  • The research highlights a confirmation-bias effect: queries worded with more bias or malicious intent drew references to Russian sources more frequently.
  • EU regulators may soon classify ChatGPT as a Very Large Online Platform (VLOP) due to its growing user base.
See article on Wired Security