Today, Microsoft and OpenAI disclosed that hackers are leveraging large language models such as ChatGPT to enhance their cyberattacks, using them to research targets, improve scripts, and refine social engineering techniques. According to a recent Microsoft blog post, cybercriminals, including state-sponsored groups from Russia, North Korea, Iran, and China, are experimenting with AI technologies to boost their operational capabilities and evade security measures.
The research highlights how the Russian-linked Strontium group, also known as APT28 or Fancy Bear, has used large language models to study satellite communication protocols, radar imaging, and other technical topics. The group, which has been active during the Russia-Ukraine war and was previously implicated in cyberattacks against Hillary Clinton’s 2016 presidential campaign, is also using these models for scripting tasks aimed at automating or optimizing technical operations.
North Korean hackers, identified as Thallium, have been using these AI tools for vulnerability research, scripting, and crafting phishing emails. Similarly, the Iranian Curium group has used LLMs to generate phishing content and write code designed to evade antivirus detection. Chinese state-affiliated hackers are also reported to be using the models for research, scripting, and refining their hacking tools.
The use of AI in cyberattacks has raised broader concerns, especially with the emergence of AI-powered tools built to assist in creating malicious emails and hacking software. The National Security Agency has also warned that the use of AI is making phishing emails more sophisticated.
While Microsoft and OpenAI say they have yet to observe any “significant attacks” leveraging large language models, the companies have shut down accounts and assets associated with these malicious groups. They stress that publishing their findings now is important to alert the cybersecurity community to these early-stage tactics and to share their countermeasures.
Microsoft warns of potential future threats, such as AI-powered voice impersonation, where even an innocuous voice sample could be enough to train a convincing model of someone’s voice. In response to these evolving threats, Microsoft advocates an AI-driven defense strategy. The company is developing Security Copilot, an AI assistant tailored for cybersecurity professionals, to help identify breaches and sift through the vast quantities of data that security tools generate. Microsoft is also enhancing its software security in light of recent major Azure cloud attacks and espionage activity targeting its executives.