Google Reports State-Backed Hackers Using Gemini AI for Recon and Attack Support

The Dawn of AI-Driven Cyber Warfare: Google Reveals State-Backed Weaponization of Gemini
The era of experimental AI in cybercrime has ended. Google’s Threat Analysis Group (TAG) recently confirmed that state-sponsored actors are no longer merely testing the waters of generative AI—they are actively integrating it into their offensive pipelines. Most notably, the North Korea-linked group UNC2970 has been observed leveraging Google’s own Gemini model to conduct sophisticated reconnaissance and streamline the early stages of targeted intrusions.
This disclosure marks a pivotal moment in the digital landscape: the transition of Large Language Models (LLMs) from novelty productivity aids to potent force multipliers for Advanced Persistent Threats (APTs).
UNC2970: Gemini as an Automated Reconnaissance Engine
UNC2970, a threat actor tied to North Korean espionage and often grouped under the broader "Lazarus Group" umbrella, has a long-standing record of targeting the media, aerospace, and defense sectors. According to Google, the group utilized Gemini to automate the identification of key personnel and to analyze the technical profiles of high-value targets.
By harnessing Gemini's natural language processing, UNC2970 can parse vast amounts of public-facing data, from LinkedIn profiles to technical white papers, to build highly detailed dossiers on prospective victims. This operational model effectively turns Gemini into a powerful reconnaissance engine, allowing the group to craft hyper-personalized spear-phishing lures. These lures are significantly harder for automated email gateways and human targets to detect, because the AI removes the awkward phrasing and grammatical inconsistencies that previously betrayed North Korean operations. Gemini enables fluent, context-aware communication that convincingly mimics the professional tone of industry recruiters or peers.
Accelerating the Attack Life Cycle
While UNC2970 provides a stark case study, Google's findings indicate that these generative-AI-assisted attack methodologies are being adopted by a broader spectrum of state-backed groups. These actors are weaponizing LLMs to accelerate the entire attack life cycle through several key vectors:
- Vulnerability Research and Exploit Development: Actors are using Gemini to analyze open-source codebases for memory corruption vulnerabilities or logic flaws. While LLMs include safety filters to prevent direct malware generation, sophisticated prompting can bypass these restrictions to assist in writing "utility" code—such as obfuscation scripts or loaders—that forms the backbone of custom payloads.
- Malware Deobfuscation and Reverse Engineering: Conversely, attackers use Gemini to interpret defensive scripts or reverse-engineer security software. This "AI-assisted" reverse engineering helps them identify blind spots in modern Endpoint Detection and Response (EDR) solutions.
- Disinformation Campaigns and Information Operations (IO): Google noted that Gemini is being used to generate synthetic content for influence campaigns. This includes the creation of deepfake personas and the rapid-fire production of propaganda tailored to geopolitical events, significantly lowering the cost of entry for large-scale disinformation.

Model Extraction and the "AI on AI" Frontier
Perhaps most concerning is the emergence of "model extraction" attacks. Google revealed that sophisticated actors are systematically probing Gemini in an attempt to reverse-engineer its internal logic or replicate its underlying capabilities in a surrogate model.
Model extraction represents a meta-threat: if a state-sponsored actor successfully extracts a model or its specific fine-tuning parameters, they can run a local, uncensored version of the AI. This would allow them to bypass all safety guardrails, enabling the automated generation of zero-day exploits and highly destructive malware without the oversight of the service provider.
Shifting to an AI-Native Security Posture
Google's report emphasizes that while attackers are moving quickly, defenders are working to move faster. TAG has integrated Gemini-based detection tooling to identify patterns of AI-generated malicious content, in effect using the "fingerprints" of LLM-generated code and text to detect misuse of the very models Google builds.
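Google has not published the internals of these detection tools. Purely to illustrate the general idea, the sketch below uses the publicly available google-generativeai Python SDK to ask a Gemini model to triage a suspicious email body for hallmarks of machine-generated spear-phishing. The model name, prompt, and scoring scheme are assumptions made for this example, not TAG's actual pipeline.

```python
# Illustrative sketch of LLM-assisted triage; NOT Google TAG's detection pipeline.
# Assumes `pip install google-generativeai` and a GEMINI_API_KEY environment variable.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # model choice is an assumption

TRIAGE_PROMPT = """You are a phishing triage assistant.
Rate the following email body from 0 (benign) to 10 (likely AI-generated
spear-phishing). Consider tone, personalization, urgency, and whether the
text reads like templated recruiter outreach.
Reply with only the number and a one-line justification.

EMAIL:
{body}
"""

def triage_email(body: str) -> str:
    """Ask the model for a rough risk score; a human analyst makes the final call."""
    response = model.generate_content(TRIAGE_PROMPT.format(body=body))
    return response.text.strip()

if __name__ == "__main__":
    sample = ("Dear Dr. Smith, I was deeply impressed by your recent paper on "
              "satellite propulsion. Our aerospace client has an urgent senior "
              "opening; please review the attached job description today.")
    print(triage_email(sample))
```

The design point is that the model acts only as a triage assistant: its score routes the message into an analyst queue rather than triggering automated blocking.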
However, the report serves as a warning to the enterprise: the window between vulnerability discovery and exploitation is shrinking. To counter these AI-enabled threats, organizations must move beyond legacy defenses toward an "AI-native" security posture.
In practice, this means:
- AI-Powered Anomaly Detection: Implementing security tools that use machine learning to identify the subtle behavioral shifts characteristic of AI-accelerated attacks (a minimal sketch follows this list).
- Proactive AI-Driven Threat Hunting: Utilizing LLMs to parse internal logs and identify hidden patterns of reconnaissance before an intrusion occurs.
- Continuous Threat Intelligence Integration: Moving from static blocklists to dynamic, AI-informed intelligence feeds that adapt to the adversary’s automated agility.
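As a concrete illustration of the first point above, the minimal sketch below trains scikit-learn's IsolationForest on a synthetic baseline of per-host network telemetry and scores a few recon-like outliers. The feature set, synthetic data, and contamination parameter are assumptions chosen for readability, not a recommended production design.

```python
# Minimal sketch of ML-based anomaly detection on network telemetry.
# Features and data are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Baseline per-host hourly features: [requests, distinct destinations, off-hours ratio]
normal = np.column_stack([
    rng.normal(120, 15, 500),    # requests per hour
    rng.normal(8, 2, 500),       # distinct destinations contacted
    rng.uniform(0.0, 0.1, 500),  # fraction of activity outside business hours
])

# Hosts showing recon-like behavior: bursts of requests to many new
# destinations, largely outside business hours.
suspicious = np.array([
    [600, 90, 0.8],
    [450, 70, 0.9],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

for row, score in zip(suspicious, model.decision_function(suspicious)):
    verdict = "ANOMALOUS" if model.predict([row])[0] == -1 else "normal"
    print(f"features={row.tolist()} score={score:.3f} -> {verdict}")
```

In a real deployment, such scores would feed an analyst queue or an AI-informed intelligence pipeline rather than drive automated blocking on their own.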
The era of AI-enhanced cyber warfare is not a future projection; it is the current reality of the threat landscape. As the speed of the adversary increases, the speed of the defense must match it.
To stay informed on the evolving AI threat landscape and learn how to secure your organization against next-generation APT tactics, subscribe to our latest intelligence reports or contact our security team for a comprehensive AI-readiness assessment.