AI chatbots quietly driving hundreds of thousands of users to Kremlin propaganda sites
Insight News Media shared the investigation. New research analyzing the final quarter of 2025 reveals that AI chatbots directed at least 300,000 visits to eight Kremlin-linked websites, including RT and Sputnik, transforming these tools into unintended distribution channels for state-aligned messaging. The traffic came from widely used platforms like ChatGPT, Perplexity, Claude, and Mistral, services that increasingly serve as "answer engines" for millions of users seeking information.

While these numbers represent only a fraction of overall traffic to major Russian outlets, the pattern signals an emerging blind spot in content moderation and sanctions enforcement.

How AI tools route users to propaganda outlets

Take RT as an example. The outlet attracted over 123 million page views during the three-month period, making AI-sourced visits a tiny percentage of its overall reach. Still, ChatGPT by itself channeled more than 88,000 users to the site, while Perplexity brought in just over 10,000.

The pattern repeats across other major Russian-language platforms. RIA Novosti saw upward of 70,000 visitors arrive via AI tools, and Lenta.ru logged over 60,000. Notably, Perplexity emerged as a new traffic source for several sites during this timeframe, indicating that AI-powered referrals are still gaining momentum.

Geography adds another layer of complexity. Even with sanctions in place, a notable portion of traffic to these restricted outlets continues flowing from European Union countries and the United States: the exact jurisdictions where access is supposed to be curtailed.

Consider RT's readership composition: American users make up 10%, Germans account for 2.27%, Spanish readers for 1.48%, and UK visitors for 1.12%. These are the markets where regulatory barriers should be limiting exposure. AI platforms don't breach sanctions directly, but they can present forbidden sources as just another credible link in a response.

Research design and data sources

The findings rely on SimilarWeb metrics from October through December 2025, tracking eight Russian state or Kremlin-affiliated propaganda sites penalized across Europe for spreading false information and backing Russia's military aggression in Ukraine. Investigators examined incoming traffic from leading AI platforms (ChatGPT, Perplexity, Claude, and Mistral) while also mapping total visitor counts and geographic origins for each website.

AI-referred traffic breaks down by site as follows:

- rt.com: 98,400 total (ChatGPT 88,300, Perplexity 10,100)
- ria.ru: 72,200 total (ChatGPT 52,400, Perplexity 19,800)
- lenta.ru: 61,200 total (ChatGPT 33,800, Perplexity 27,400)
- iz.ru: 22,100 (all from ChatGPT)
- rg.ru: 18,600 total (ChatGPT 13,200, Perplexity 5,400)
- reseauinternational.net: 5,800 from ChatGPT, with Mistral below 5,000
- sputnikglobe.com: below 10,000 combined (ChatGPT and Claude each below 5,000)
- news-pravda.com: below 5,000 (Claude only)

Any figure marked "below 5,000" means the platform registered activity but it didn't reach the five-thousand-visit mark.
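To make that reporting convention concrete, here is a minimal Python sketch, not part of the investigation itself, that tabulates the figures above and derives lower and upper bounds per site by treating each "below 5,000" entry as an unknown value between 0 and 4,999:

```python
# Reported AI-referred visits, Q4 2025 (figures from the article).
# None marks a censored "below 5,000" value.
REPORTING_FLOOR = 5_000

referrals = {
    "rt.com":                  {"ChatGPT": 88_300, "Perplexity": 10_100},
    "ria.ru":                  {"ChatGPT": 52_400, "Perplexity": 19_800},
    "lenta.ru":                {"ChatGPT": 33_800, "Perplexity": 27_400},
    "iz.ru":                   {"ChatGPT": 22_100},
    "rg.ru":                   {"ChatGPT": 13_200, "Perplexity": 5_400},
    "reseauinternational.net": {"ChatGPT": 5_800, "Mistral": None},
    "sputnikglobe.com":        {"ChatGPT": None, "Claude": None},
    "news-pravda.com":         {"Claude": None},
}

def bounds(platforms: dict) -> tuple[int, int]:
    """Lower/upper bound on a site's total AI-referred visits."""
    disclosed = sum(v for v in platforms.values() if v is not None)
    censored = sum(1 for v in platforms.values() if v is None)
    return disclosed, disclosed + censored * (REPORTING_FLOOR - 1)

for site, platforms in referrals.items():
    lo, hi = bounds(platforms)
    print(f"{site:26} {lo:,}" + ("" if lo == hi else f" to {hi:,}"))

# Disclosed figures alone sum to 278,300 for the quarter; the four
# censored entries contribute at most roughly 20,000 more on top.
print(f"{sum(bounds(p)[0] for p in referrals.values()):,}")
```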
Major Russian news sites gain AI-powered back channels

Lenta.ru and Ria.ru offer a window into how conversational AI creates secondary routes to sprawling Russian information hubs that function both as domestic messaging vehicles and as narrative exporters.

Lenta.ru pulled in 232.7 million total views and 14.5 million unique visitors during Q4 2025, with nearly three-quarters originating in Russia. Yet despite legal barriers, it continues drawing readers from Germany (3%), the United States (2.55%), and several NATO members, including the Netherlands, Lithuania, Norway, Sweden, the UK, and Poland. Against this backdrop, ChatGPT delivered 33,800 visits and Perplexity another 27,400, making them steady, if modest, traffic generators.

Ria.ru follows a similar trajectory: 194.8 million views, 14.6 million visitors, heavily concentrated in Russia (77%), but with visible reach into Germany (1.2%), the United States (1%), Italy (0.8%), the Netherlands (0.73%), and Latvia (0.29%). ChatGPT produced 52,400 visits and Perplexity contributed 19,800, demonstrating that AI platforms feed reliable, ongoing traffic to a core Kremlin information source.

Smaller outlets see disproportionate AI impact

AI referral effects grow sharper when looking at niche propaganda operations. Sputnikglobe.com, a repackaged version of Sputnik that's prohibited in the EU, pulled 3.382 million views and 176,000 unique visitors over the quarter. Its audience skews international: Sweden tops the list at 16%, followed by Italy (11.79%), the United States (10%), Norway (6.8%), and the UK (3.7%), with further audiences in India, Pakistan, Australia, and Canada.

For outlets operating at this scale, a few thousand referrals carry weight. On Sputnikglobe.com, both ChatGPT and Claude sent fewer than 5,000 visits each, yet combined they made up about 6% of all incoming referral traffic, a meaningful slice when total quarterly visitors number just 176,000. At News Pravda, a multilingual disinformation hub linked to the Kremlin and aimed heavily at European readers, Claude-driven traffic made up close to 10% of all referrals.

French-language propaganda gets AI amplification

The numbers also show how AI platforms can boost propaganda aimed at specific language communities. Reseau International, a French-language outlet recognized for advancing pro-Russian and anti-EU talking points, draws an overwhelmingly French audience (80%). ChatGPT brought the site 5,800 visitors, or 7.5% of its referral traffic for the quarter, while Mistral, an AI system developed in France, added fewer than 5,000.

The detail that stands out: a French-built AI assistant is pointing users toward a pro-Russian platform that consistently vilifies French and European Union leadership. This raises the prospect that AI systems may be strengthening foreign messaging within national conversations, especially when audiences lack the tools to identify influence operations involving domestic collaborators.

From social streams to conversational interfaces

The evidence points to a structural shift: propaganda encounters are migrating from search results and social feeds into conversational question-and-answer exchanges.

AI chatbots don't organize information into scrollable feeds or curated timelines. They embed links within answers that appear neutral and helpful. That presentation matters. Users can stumble onto sanctioned propaganda without realizing it, particularly when there's no labeling or disclaimer to signal the source's background. The dynamic is understated: instead of pushing messages overtly, AI platforms can make them ordinary, slotting state-controlled outlets next to established journalism.
For those studying information flows and policy enforcement, this marks a change in how exposure and persuasion should be tracked.

Implications for monitoring and regulation

The data raises immediate questions for fact-checkers and government agencies. Existing oversight infrastructure centers on social networks, broadcasters, and ad platforms. AI assistants don't fit neatly into those categories, even though they're shaping how people access information at scale.

Fact-checking groups may need to widen their lens to include routine audits of AI-generated responses and ongoing cataloging of problematic sources that appear frequently. Policymakers, meanwhile, confront the question of whether AI-driven traffic effectively weakens sanctions and whether new transparency or disclosure standards are warranted when automated systems surface banned outlets.

When helpful tools double as propaganda distributors

The traffic data makes one thing plain: AI chatbots are routing real, quantifiable audiences to sanctioned Russian propaganda websites. The conversational format of AI-driven discovery amplifies the significance of this traffic, even if raw numbers remain small compared to traditional channels.

In absolute terms, this already translates to hundreds of thousands of visits each quarter. Structurally, what matters more is that restricted outlets are being woven back into the information ecosystem through interfaces that command more trust than social ads or search engine results.

That trust differential is where the danger lies. When an AI chatbot references or links to a sanctioned propaganda site, the material typically appears without context: no disclaimer, no flag, no hint that the source is state-run or legally restricted. An average user asking a straightforward question may view the result as reliable by default.

By weaving sanctioned outlets into routine exchanges, AI systems threaten to erase the distinction between verified reporting and state-coordinated messaging. Tackling that threat will take more than sharper prompts or better-informed users; it requires recognizing AI chatbots as infrastructure, complete with the scrutiny, transparency rules, and oversight that role demands.

From the perspective of fact-checking, the challenge is evolving. Traditional approaches center on viral content, broadcast statements, or widely circulated false claims. AI-generated referrals are scattered, tailored to individual queries, and difficult to track in real time. Fact-checkers may need to monitor AI platforms directly: running test queries, cataloging repeat sources, and creating public explainers on source reliability rather than only debunking claims after they spread.
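As one way to operationalize that monitoring, the sketch below scans the links cited in a captured answer against a watchlist of restricted domains. It assumes responses to test queries have already been collected as plain text; the watchlist is seeded with the eight sites from this investigation, and the function is illustrative rather than an existing tool:

```python
import re
from urllib.parse import urlparse

# Hypothetical watchlist: the eight sanctioned domains tracked in this study.
WATCHLIST = {
    "rt.com", "ria.ru", "lenta.ru", "iz.ru", "rg.ru",
    "reseauinternational.net", "sputnikglobe.com", "news-pravda.com",
}

URL_RE = re.compile(r"https?://[^\s)\]>\"']+")

def flag_sanctioned_links(answer_text: str) -> list[str]:
    """Return cited URLs whose host is a watchlisted domain or subdomain."""
    flagged = []
    for url in URL_RE.findall(answer_text):
        host = (urlparse(url).hostname or "").lower()
        # Match bare domains and subdomains (e.g. www.rt.com).
        if any(host == d or host.endswith("." + d) for d in WATCHLIST):
            flagged.append(url)
    return flagged

# Usage against a saved answer from a test query:
answer = "See https://www.rt.com/news/123 and https://apnews.com/article"
print(flag_sanctioned_links(answer))  # ['https://www.rt.com/news/123']
```

Run repeatedly across a fixed set of test queries, the same loop doubles as the "cataloging repeat sources" step: counting how often each watchlisted domain surfaces per platform over time.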
For governments and regulators, the reality is stark. Sanctions frameworks were designed around broadcasters, digital platforms, financial institutions, and ad networks. AI systems don't fit cleanly into any category. Concrete responses might include explicit guidance on whether sanctioned sites can be surfaced as information sources, mandatory disclosures about high-risk domains in retrieval outputs, and coordinated watchlists of restricted sites that go beyond voluntary opt-in.

The bottom line: AI chatbots now function as a distinct class of referrer for propaganda networks, including those under explicit legal restriction. Without active oversight, they risk making sanctioned narratives routine, not through persuasion or ideology but through ease of access and assumed authority. Addressing this doesn't mean teaching better prompt-writing; it starts with governance, visibility, and the recognition that large language model answer engines are already part of the information infrastructure.