Artificial intelligence is no longer just a tool for businesses, developers, researchers, and security teams. According to Google's Threat Intelligence Group (GTIG), it is now also becoming a serious part of the attacker's toolkit. Google has revealed what it describes as the first confirmed case of hackers using AI to help develop a zero-day exploit for a planned large-scale cyberattack. In simple terms, this means attackers allegedly used an AI model to help discover and weaponise a software vulnerability that the vendor and the affected organisation did not yet know existed.
That is a significant development, because zero-day vulnerabilities are already among the most dangerous weaknesses in cybersecurity. They are called "zero-day" because the software vendor has had zero days to fix the flaw before attackers begin exploiting it. When AI is added to that process, the concern is that attackers may be able to find weaknesses faster, automate parts of the attack process, and scale their operations more easily.
A Serious Attempt Before It Became A Wider Attack
According to Google's Threat Intelligence Group, the attackers were preparing to use the exploit in what the company described as a mass exploitation event. That suggests the goal was not just to target one small system or carry out a limited test. Instead, the attackers may have been preparing to use the vulnerability more broadly once the exploit was ready.
The vulnerability reportedly could have allowed attackers to bypass two-factor authentication. That detail is especially concerning because two-factor authentication is commonly seen as an important extra layer of protection. If attackers are able to bypass it, they may gain access even when users believe their accounts or systems are properly secured.
Google said it detected the activity before the exploit could be used at larger scale. The company also notified the affected organisation, which has since patched the vulnerability. While Google did not publicly identify the threat actor, the affected company, or the specific software involved, the case is still notable because of how AI was allegedly used in the attack preparation process.
Why AI-Assisted Exploitation Is A Big Concern
Cybercriminals have always looked for ways to speed up their work. In the past, this involved automated scanners, leaked tools, exploit kits, phishing templates, and malware builders. AI adds another layer to that evolution because it can potentially help attackers analyse code, identify weaknesses, generate exploit ideas, and even assist in writing malicious code.
That does not mean AI can magically hack any system by itself. Real-world exploitation still requires skill, testing, infrastructure, and planning. However, AI can reduce the effort required for certain tasks. It can also make some technical steps more accessible to less experienced attackers.
This is where security teams are becoming increasingly concerned. If AI tools can help attackers discover vulnerabilities faster, then organisations may face a shorter window between a weakness being found and that weakness being exploited. For defenders, that increases the pressure to patch quickly, monitor more aggressively, and improve detection before an attack becomes widespread.
Google Says Gemini Was Not Involved
Google also clarified that it does not believe its own AI model, Gemini, was used in this incident. Instead, the attackers allegedly relied on other publicly available AI tools to help with the discovery and weaponisation process.
That distinction matters because the cybersecurity conversation around AI is not only about one company or one model. The larger issue is that powerful AI tools are becoming more widely available across the internet. Some are commercial tools. Some are open-source. Some are modified or repurposed by threat actors.
As AI capabilities continue to improve, the risk is not limited to one platform. Attackers may experiment with multiple models, combine different tools, or use AI as part of a wider workflow that includes traditional hacking techniques.
A Taste Of What May Be Coming
Google's report also mentioned tools such as OpenClaw, which attackers allegedly used to identify vulnerabilities, create malware, and support cyberattack development. This reflects a broader shift where AI is being tested across multiple stages of the attack chain.
Instead of using AI only to write phishing emails or automate basic tasks, threat actors are now exploring more advanced use cases. These can include vulnerability research, target analysis, exploit development, malware generation, and operational planning.
Google also warned that threat groups linked to China and North Korea have shown significant interest in using AI for vulnerability discovery and exploitation. This is not surprising. State-linked and advanced persistent threat groups are always looking for tools that can improve speed, scale, and effectiveness. AI gives them another possible advantage, especially when combined with existing technical expertise.
GTIG chief analyst John Hultquist described the discovery as a preview of what may come next. His comments reflect a growing view in the cybersecurity industry that this incident is not likely to be an isolated case. Instead, it may be an early sign of how AI-assisted attacks will develop over time.
Defenders Are Also Using AI
While the risks are serious, AI is not only useful to attackers. Security teams are also using AI to improve defensive capabilities. This is where the situation becomes more balanced, although not necessarily less challenging.
AI can help defenders process large amounts of security data, detect unusual activity, analyse suspicious files, identify vulnerable code, and respond to incidents more quickly. For large organisations, this can be especially valuable because security teams often deal with huge volumes of alerts every day.
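As a rough illustration of what that can look like in practice, the minimal sketch below uses an off-the-shelf unsupervised model (scikit-learn's IsolationForest) to flag unusual login activity among a large volume of events. Everything here is an assumption made for illustration: the data is synthetic, the feature names are invented, and nothing reflects Google's tooling or any specific vendor's detection pipeline.

```python
# Minimal sketch: flagging unusual login activity with an unsupervised model.
# All data is synthetic and the features are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" events: [logins_per_hour, failed_login_ratio, distinct_ips]
normal = np.column_stack([
    rng.normal(20, 5, 500),       # typical login volume per account
    rng.normal(0.02, 0.01, 500),  # low failure ratio
    rng.normal(2, 1, 500),        # few source IPs per account
])

# A couple of suspicious events: high volume, many failures, many source IPs
suspicious = np.array([
    [300, 0.8, 40],
    [250, 0.9, 35],
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for events the model considers anomalous, 1 otherwise
print(model.predict(suspicious))  # expected: [-1 -1]
print(model.predict(normal[:5]))  # mostly [1 1 1 1 1]
```

The specific model matters less than the triage pattern: score every event automatically so human analysts only need to review the small fraction the system flags.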
In vulnerability management, AI can also help prioritise which weaknesses matter most. Not every vulnerability carries the same level of risk, and not every system has the same exposure. If AI can help security teams understand which issues are most urgent, it may reduce response time and help prevent real-world exploitation.
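To make that idea concrete, here is a toy prioritisation heuristic in Python. The multipliers and CVE identifiers are invented for illustration; this is not any standard or vendor scoring model, and a real AI-assisted workflow would draw on far more signals, such as asset criticality and active-exploitation feeds.

```python
# Toy heuristic: rank vulnerabilities by severity, exposure, and exploit status.
# The multipliers are illustrative assumptions, not a standard scoring model.
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str             # placeholder IDs below, not real CVEs
    cvss_base: float        # base severity score, 0.0 to 10.0
    internet_facing: bool   # is the affected system exposed to the internet?
    exploit_known: bool     # is a public exploit or active exploitation known?

def priority(v: Vuln) -> float:
    score = v.cvss_base
    if v.internet_facing:
        score *= 1.5  # exposed systems are reachable by far more attackers
    if v.exploit_known:
        score *= 2.0  # a known exploit sharply shrinks the patching window
    return score

vulns = [
    Vuln("CVE-0000-0001", 9.8, internet_facing=True, exploit_known=True),
    Vuln("CVE-0000-0002", 9.9, internet_facing=False, exploit_known=False),
    Vuln("CVE-0000-0003", 6.5, internet_facing=True, exploit_known=True),
]

for v in sorted(vulns, key=priority, reverse=True):
    print(f"{v.cve_id}: priority {priority(v):.1f}")
```

Note how the lower-severity but exposed and actively exploited CVE-0000-0003 outranks the near-maximum-severity but unexposed CVE-0000-0002, which is exactly the kind of judgment a flat severity list misses.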
This is the "fighting fire with fire" side of the story. Attackers may use AI to speed up exploitation, but defenders can also use AI to detect, patch, and respond faster.
Similar Concerns Are Appearing Across The Industry
Google is not the only company raising concerns about the misuse of advanced AI systems. Anthropic has also reportedly taken a cautious approach with some of its AI development after concerns that powerful models could be used to discover and exploit software vulnerabilities.
The company's Project Glasswing initiative reflects how AI firms are beginning to think more seriously about cybersecurity safety. Instead of only focusing on what AI can do, companies are also being pushed to consider how those capabilities might be abused.
By giving selected testers access to AI systems for defensive research, companies such as Anthropic are trying to support responsible vulnerability discovery before criminals get there first. This kind of controlled testing may become more common as AI models become more capable in technical fields such as software development and cybersecurity.
What This Means For Organisations
For businesses and IT teams, this development should be treated as another reminder that cybersecurity cannot stay static. Attackers are evolving, and defensive strategies need to evolve as well.
Basic controls still matter. Strong patch management, proper access control, multi-factor authentication, network monitoring, endpoint protection, user awareness, and incident response planning remain important. However, organisations may also need to prepare for a future where attackers can move faster and test more ideas using AI assistance.
This does not mean every company needs to panic or assume that AI will suddenly make all existing security controls useless. But it does mean that response time, visibility, and proactive security testing will become more important. The faster a vulnerability can be found and fixed, the smaller the window of opportunity for attackers.
Final Thoughts
Google's discovery of an AI-assisted zero-day exploitation attempt may become an important marker in the evolution of cybersecurity. For years, the industry has warned that AI could eventually help attackers discover vulnerabilities and build exploits more efficiently. Now, according to Google, there is tangible evidence that this is already starting to happen.
The encouraging part is that the attack was detected before it could become a large-scale incident. The affected organisation was notified, the vulnerability was patched, and the wider damage appears to have been prevented. That shows defensive work still matters, especially when security teams are proactive.
At the same time, the incident is a warning sign. AI is changing both sides of cybersecurity. Attackers will continue experimenting with it, while defenders will need to use the same technology to improve detection, response, and vulnerability management. The next phase of cybersecurity may not simply be humans versus hackers, but defenders using AI to keep up with attackers who are learning to do the same.