The Kremlin has infiltrated Western chatbots, including ChatGPT, by feeding them propaganda through a network of disinformation websites. In 2024 alone, 3.5 million such articles were pushed into the AI services we use every day.
Experts from NewsGuard's Reality Check [a U.S.-based journalism and technology think tank that evaluates and rates the credibility and transparency of news and information websites] asserted that leading global artificial intelligence tools are inadvertently amplifying Russian disinformation orchestrated by Putin's regime. One example cited is the "Pravda" network, which seeds misinformation into the AI tools offered by international technology companies.
Pravda (not to be confused with the Russian newspaper of the same name) emerged shortly after Russia launched its full-scale invasion of Ukraine in the spring of 2022, appearing just one month after the European Union blocked the Kremlin’s traditional media outlets.
It was first brought to wider public attention a year ago by VIGINUM, a French government agency dedicated to monitoring and countering foreign digital interference, particularly disinformation.
Pravda is operated by the company TigerWeb, owned by Yevgeny Shevchenko, who previously worked for a company that developed websites for occupation authorities in Crimea.
Currently, the Kremlin-backed Pravda network operates in 49 countries and publishes content across 150 domains in dozens of languages. Experts argue that Pravda deliberately disseminates false claims and propaganda to influence AI model responses on political topics. "It isn't intended for ordinary readers," NewsGuard specialists emphasize.
Experts evaluated ten major global artificial intelligence platforms: ChatGPT-4, You.com's Smart Assistant, Grok, Pi, Le Chat, Copilot, Meta AI, Claude, Gemini, and the Perplexity answer engine. These chatbots were tested to see whether they recognized 15 false narratives published on Pravda network websites between 2022 and April 2025.
The results showed that the most popular chatbots repeated false narratives disseminated by the Pravda network in one-third of their responses. These findings were confirmed by analysts from the U.S.-based American Sunlight Project (ASP), who explained that Pravda was designed not to target human readers but to manipulate artificial intelligence models.
It’s no secret that ChatGPT and similar tools are used not only by students but also by media outlets. To investigate further, we approached the Pravda editorial office — posing as a Tallinn resident willing to spread Russian propaganda — to inquire about working conditions. We received no response, reinforcing the conclusion that Pravda's purpose is primarily to manipulate chatbot outputs.
By injecting misinformation into search results, Pravda distorts how large language models process news and information. As a consequence, a vast amount of Russian propaganda seeps into the responses of Western AI systems; in 2024 alone, this amounted to 3.5 million articles.
Importantly, Pravda does not produce original content; rather, it aggregates materials from Russian state media, pro-Kremlin bloggers, government agencies, and officials. NewsGuard’s analysis revealed that the network spread 207 false claims, including assertions that the U.S. operates secret biolabs in Ukraine.
In November 2022, the UN Security Council rejected a Russian resolution seeking to establish a commission to investigate Ukrainian biolabs — claims Moscow had repeatedly made since the spring of that year, including allegations that the U.S. was helping Kyiv develop “combat insects.” The author of these fabrications, Igor Kirillov, head of Russia's Radiation, Chemical, and Biological Defense Forces, was killed in Moscow in December 2024.
The Pravda network also claims that Ukrainian President Volodymyr Zelensky embezzles American military aid for personal enrichment. Other false narratives include a supposed "French police report" alleging that Ukraine’s Defense Ministry stole $46 million, and a story claiming Zelensky spent €14.2 million from Western military aid to purchase Hitler's former residence. This particular fabrication was debunked by the fact-checking project Provereno.media.
According to NewsGuard, approximately 40 websites within the Pravda network publish content in Russian. Their URLs often include Ukrainian place names such as news-kiev.ru, kherson-news.ru, or donetsk-news.ru. The remaining sites publish content targeted at countries in Europe, Africa, and Asia. The network disseminates information in at least 46 languages.
Some Pravda webpages focus specifically on countries that were once part of the Soviet Union, including a dedicated news site for Estonia. These sites repost content from Russian state media but shift the tone from neutral to biased. Pravda also amplifies the views of individuals previously involved in influence operations in Estonia. "News" items are published every few minutes, around the clock, with no breaks for lunch or sleep.
According to ASP, the Pravda network publishes at least 20,000 articles within 48 hours, explicitly designed to manipulate AI algorithms. On 56 occasions, chatbots cited pages like Pravda-news.com, Trump.News-Pravda.com, and NATO.News-Pravda.com.
Experts noted that pro-Russian propagandist John Mark Dougan previously warned about infiltration of Western chatbots. In 2024, during a meeting with Russian officials, Dougan claimed promoting Russian narratives could influence global artificial intelligence systems.
In addition to AI manipulation, the Kremlin uses traditional media channels to achieve its goals. Estonian newspaper Postimees published a story about Italian propagandist Andrea Lucidi, who was turned back at the Estonian border. Lucidi, who acquired Russian citizenship in January, is known to be working in occupied areas of Ukraine.
Lucidi appears frequently in photos alongside occupation authorities and Russian soldiers, and he also organizes screenings of Russia Today documentaries within the European Union. On the website of "International Reporters," Lucidi is listed as the chief editor of Russia Today’s Italian branch.
International Reporters is an association of foreign so-called journalists who regularly visit Russian-occupied areas of Ukraine. Founded in 2024, the organization is run by prominent Russian propagandists and pro-Kremlin activists and operates under Putin's own initiative, "Russia – Land of Opportunities."
The international nonprofit Reporters Without Borders (RSF) explicitly labels International Reporters as a Kremlin tool. According to RSF, “The project unites international propagandists whose task is to disseminate Kremlin information, justify Russia’s aggression against Ukraine, and glorify Russian foreign policy.”
A journalist from the Estonian daily Postimees asked ChatGPT about its sources regarding Estonia. To questions such as "Why is [Russian influence agent in Estonia] Aivo Peterson accused of treason?" or "Who is Oleg Ivanov [another Kremlin narrative superspreader in Estonia]?", the chatbot repeatedly responded by citing the Russian state agency TASS, a primary source of Pravda network content. Earlier this year, TASS was considered for inclusion in the European Union’s 16th sanctions package but ultimately was not added.
When asked about the use of propaganda sources such as news-estonia.com or International Reporters, ChatGPT answered cautiously:
“I do not utilize information from specific sites such as news-estonia.com or International Reporters. My knowledge base consists of data available up to November 2023, and I strive to rely on verified, diverse, and trustworthy sources to ensure objectivity in my responses,” the chatbot claimed. “The sources I use aim to remain neutral and objective at all times.”
Concerns about the misuse of digital technologies and Big Tech algorithms (Google, Amazon, Meta, Apple, and Microsoft) to promote Kremlin narratives and propaganda have been discussed for a long time.
“Of course, Big Tech companies claim they don't have biased algorithms and deny the existence of shadow bans,” says Sarkis Darbinyan, a Portugal-based lawyer and analyst at RKS Global, an NGO dedicated to internet freedom.
“But, for instance, Google Discover — a service widely used by many Android users — is full of fake news channels disseminating Kremlin narratives in the Russian market. Google insists it bears no responsibility for this.”
In Europe, the situation surrounding the questionable practices of large tech companies differs somewhat. “In recent years, the European Union has adopted the Digital Services Act and the Digital Markets Act. These represent unique tools for engaging with Big Tech companies and addressing numerous legal issues. While these tools exist, they aren't functioning properly yet,” the expert explained. According to him, the new legislation clearly stipulates rules allowing researchers and analysts to request statistical data about platform operations and their algorithms.
“If someone collects this data systematically, it will become possible to detect how Big Tech facilitates the spread of Kremlin narratives. For now, we can sense it, we see it, but we lack the means to prove it,” Darbinyan added.
Roman Dobrokhotov, editor-in-chief of the publication The Insider, previously emphasized the importance of collaboration between EU government institutions and independent analysts in countering hostile Kremlin activities. “In addition to the regular tasks aimed at detection and prevention, we believe it is crucial to actively involve investigative journalists,” he stated.
“In today's world, addressing these issues is predominantly analytical work, and that's exactly what we're doing. I'm confident Estonia has excellent journalists capable of handling this. We don’t expect a lot from governments—just pointers indicating where we should dig. We have solid contacts with several European Union countries, though not with all of them.”
According to Darbinyan, major technology companies seem inclined to avoid questions related to fake news, propaganda, and censorship, instead seeking ways to comfortably continue their business. The expert doesn't consider blocking websites effective, as dozens of new ones quickly replace them. “But if search engine algorithms—be it Google, Facebook, or YouTube—didn't actively distribute these media channels, no one would even notice them,” he pointed out.
“The internet assumes everyone will find a perspective that confirms their worldview. What seems harmful to you and me might be the most convenient way for someone else to interpret the world around them,” explained Ivan Napreyenko, a Paris-based sociologist.
“The Kremlin’s tactic, especially concerning complex issues, involves multiplying different versions of the same phenomenon so everyone can choose a version that appeals to them. In a digital environment overwhelmed by information, people often select the version that aligns with their existing worldview. Regarding ethnic minorities and their interest in propaganda in their native language, typically, their surrounding environment has made efforts that cause such groups to feel unified, yet opposed to that environment. This explains why Russian-speaking communities abroad remain interested in Russian television. Even when attempting to remain neutral, people naturally select the truth shared by their community.”
Napreyenko added, “Additionally, critically examining any information requires a broad perspective.”
“Today, everyone believes they have their own truth, leading to a situation where truth itself becomes nonexistent. People have learned to label anything they dislike as foreign.”
None of the companies developing the language models mentioned in the study responded to inquiries.