In the 1980s, the KGB had a tried-and-true method for spreading disinformation. Oleg Kalugin, a former KGB general, explained that the agency preferred to work from genuine documents, altering them with additions and changes. The approach has not changed significantly over the years, but technology has accelerated the process. In early March 2024, a network of websites, dubbed CopyCop, began publishing stories in English and French on a range of contentious issues, demonstrating how far these tactics have come.
The CopyCop network
CopyCop’s articles accused Israel of war crimes, amplified divisive political debates in America over slavery reparations and immigration, and spread nonsensical stories about Polish mercenaries in Ukraine. This type of content is not unusual for Russian propaganda. What was new, however, was the method: the stories were taken from legitimate news outlets and modified using large language models, most likely one built by OpenAI, the American firm behind ChatGPT.
An investigation published on May 9th by Recorded Future, a threat-intelligence company, revealed that these articles had been translated and edited to add partisan bias. In some cases the prompt, the instruction given to the AI model, was still visible in the published text. These prompts were far from subtle. More than 90 French articles, for instance, were altered with the following instruction in English: “Please rewrite this article taking a conservative stance against the liberal policies of the Macron administration in favour of working-class French citizens.”
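To see why such operations scale so cheaply, consider a minimal sketch of the sort of rewriting pipeline the report implies: a scraped article is sent to a language-model API together with the biasing instruction. The code below is illustrative only, not CopyCop’s actual tooling; it uses OpenAI’s published Python client because the investigation judged the model was most likely OpenAI’s, but the model name and structure here are assumptions.

```python
# Illustrative sketch only, not CopyCop's actual code. It shows how a
# scraped article could be rewritten with a biasing system prompt using
# OpenAI's published Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The instruction below is the prompt Recorded Future found embedded in
# more than 90 of the network's French-language articles.
BIAS_PROMPT = (
    "Please rewrite this article taking a conservative stance against "
    "the liberal policies of the Macron administration in favour of "
    "working-class French citizens."
)

def rewrite_article(original_text: str) -> str:
    """Return the source article rewritten under the biasing instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the report names no model
        messages=[
            {"role": "system", "content": BIAS_PROMPT},
            {"role": "user", "content": original_text},
        ],
    )
    return response.choices[0].message.content
```

Looped over a feed of scraped stories and wired to software that publishes automatically, a script of roughly this length would account for the volumes described below.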
Evidence of AI manipulation
Another rewritten piece included a clear indication of its slant: “It is important to note that this article is written with the context provided by the text prompt. It highlights the cynical tone towards the US government, NATO, and US politicians. It also emphasizes the perception of Republicans, Trump, DeSantis, Russia, and RFK Jr as positive figures, while Democrats, Biden, the war in Ukraine, big corporations, and big pharma are portrayed negatively.”
Connections to disinformation platforms
Recorded Future reports that the CopyCop network has ties to DC Weekly, an established disinformation platform run by John Mark Dougan, an American citizen who fled to Russia in 2016. By the end of March 2024, CopyCop had published more than 19,000 articles across 11 websites, many of them probably produced and posted automatically. Recently, the network has “started garnering significant engagement by posting targeted, human-produced content,” the report adds. One such story—a far-fetched claim that Volodymyr Zelensky, Ukraine’s president, had purchased King Charles’s house at Highgrove, in Gloucestershire—was viewed 250,000 times in 24 hours and was later circulated by Russia’s embassy in South Africa.
The future of AI-enabled disinformation
These crude efforts are unlikely to persuade discerning readers, and it is easy to exaggerate the impact of foreign disinformation. But AI-enabled forgeries are still in their infancy and are likely to improve considerably; future efforts are less likely to leak their incriminating prompts. “We are seeing every one of the nation-state actors and big cyber groups playing around with AI capabilities,” Rob Joyce, who until recently was director of cybersecurity at the National Security Agency, America’s signals-intelligence service, said on May 8th.
In his memoirs, Mr. Kalugin boasted that the KGB published almost 5,000 articles in foreign and Soviet newspapers in 1981 alone. For the modern propagandist, those are rookie numbers.
From the KGB’s doctored documents to CopyCop’s machine-rewritten articles, the tactics have changed less than the tempo. As language models improve, so will the campaigns built on them, and the vigilance and countermeasures needed to spot them will have to keep pace.