Google recently outlined plans for a new cybersecurity product, Google Threat Intelligence, which combines the work of its Mandiant cybersecurity team and the VirusTotal threat-intelligence community with the capabilities of the Gemini 1.5 Pro large language model.
According to Google, Gemini 1.5 Pro, released in February, can rapidly analyze and reverse engineer malicious code. The model reportedly took just 34 seconds to work through the code of the notorious 2017 WannaCry ransomware and pinpoint the kill switch that halts the malware, a demonstration of how quickly large language models can handle this kind of cybersecurity task.
Google also sees potential for Gemini in summarizing complex threat reports in accessible natural language. Within the Google Threat Intelligence platform, that capability could help companies understand the implications of a potential security threat and respond appropriately, without under- or overestimating the risk.
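Google has not detailed how the platform exposes these capabilities, but the general workflow it describes, feeding disassembled code or a lengthy report to Gemini and asking for a plain-language explanation, can be sketched with the publicly available google-generativeai Python SDK. The model name, prompts, and file paths below are illustrative assumptions, not the Google Threat Intelligence interface.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")

# Ask the model to explain a (hypothetical) disassembled sample.
disassembly = open("sample_disassembly.asm").read()
analysis = model.generate_content(
    "You are a malware analyst. Explain what this disassembled code does "
    "and flag anything that looks like a kill switch or C2 logic:\n\n" + disassembly
)
print(analysis.text)

# The same pattern condenses a long threat report into plain language.
report = open("threat_report.txt").read()
summary = model.generate_content(
    "Summarize this threat report for a non-technical audience, including "
    "severity and recommended next steps:\n\n" + report
)
print(summary.text)
```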
The Google Threat Intelligence platform also draws on a broad network for spotting potential threats before they strike. It combines Mandiant's extensive threat detection and analysis capabilities with the collective intelligence of the VirusTotal community, whose members regularly share indicators of compromise. Together, these sources provide a comprehensive view of the threat landscape and help prioritize the most critical security concerns.
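The community intelligence Google is referring to is already reachable through VirusTotal's public REST API, so a rough sense of what shared indicators look like in practice can be had with an ordinary file-hash lookup. The API key and hash below are placeholders; this sketch uses the standalone VirusTotal v3 API, not the new Threat Intelligence product.

```python
import requests

VT_API_KEY = "YOUR_VT_API_KEY"          # placeholder; requires a VirusTotal account
SAMPLE_SHA256 = "<sha256-of-a-sample>"  # placeholder hash to look up

# Query the VirusTotal v3 API for community verdicts on a file hash.
resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{SAMPLE_SHA256}",
    headers={"x-apikey": VT_API_KEY},
    timeout=30,
)
resp.raise_for_status()
attrs = resp.json()["data"]["attributes"]

# last_analysis_stats counts how many engines flagged the sample.
stats = attrs["last_analysis_stats"]
print(f"malicious: {stats['malicious']}, undetected: {stats['undetected']}")
```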
Further drawing on Mandiant’s expertise, Google plans to use the company’s specialists to assess vulnerabilities in AI projects. As part of this initiative, Mandiant experts will work within Google’s Secure AI Framework to test AI defenses and assist in red-teaming exercises. This matters because AI models, however useful for analyzing and summarizing threats, are themselves vulnerable to attacks such as data poisoning, in which manipulated training data degrades a model’s ability to function correctly.
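Data poisoning is easiest to appreciate with a toy example: corrupt a slice of the training labels and compare the resulting model with one trained on clean data. The sketch below uses scikit-learn and synthetic data purely to illustrate the failure mode; it says nothing about how Mandiant's red teams actually probe production models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison 30% of the training labels by flipping them.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[flip] = 1 - poisoned_y[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print(f"clean model accuracy:    {clean.score(X_test, y_test):.2f}")
print(f"poisoned model accuracy: {poisoned.score(X_test, y_test):.2f}")
```

Even this crude label-flipping attack measurably drags down accuracy on held-out data, which is why red-teaming the training pipeline gets attention alongside testing the deployed model.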
Google is not alone in this push. Microsoft, for example, has introduced Copilot for Security, which pairs GPT-4 with Microsoft’s own security-specific AI to help professionals investigate and respond to threats. Deploying AI in these contexts marks a shift from familiar consumer applications like image generation toward more strategic uses such as strengthening cybersecurity defenses.