Chinese hackers have carried out what Anthropic describes as the first cyber espionage campaign orchestrated by AI.
Menu planning, therapy, essay writing, and even complex global cyberattacks – people are constantly finding new and innovative ways to use modern AI chatbots.
This week, Anthropic announced a worrying milestone: its flagship AI assistant Claude has become a tool for Chinese hackers in what the company calls 'the first recorded AI-orchestrated cyber espionage campaign.'
AI Cyber Espionage
According to Anthropic's report, in mid-September the company discovered a large-scale cyber espionage operation by a group it named GTG-1002. Its targets included 'major tech corporations, financial institutions, chemical companies, and government agencies across multiple countries.'
Although such attacks are not new, this campaign stands out because 80-90% of the actions were carried out by AI. After human operators selected target organizations, they used Claude to identify valuable databases, probe for vulnerabilities, and generate code to extract information. Human involvement was limited to a few critical moments: issuing instructions to the AI and reviewing its results.
Claude, like other powerful language models, has built-in safeguards intended to prevent exactly this kind of activity. The attackers bypassed them by framing tasks as innocent requests and posing as a cybersecurity firm conducting tests. This raises serious questions about how easily the safety systems of models like Claude and ChatGPT can be circumvented, especially given the potential for their misuse in creating bioweapons or other dangerous materials.
Anthropic notes that Claude sometimes 'hallucinated' account data or claimed to have extracted sensitive information that was in fact publicly available. Even state-sponsored hackers, it seems, need to watch out for AI making things up.
Dangers of AI in Cyberattacks
The report points to a broader danger: AI tools can dramatically simplify and accelerate the execution of cyberattacks, leaving both national security systems and ordinary citizens' finances more exposed.
But we are not yet in full-blown cyber chaos. The technical knowledge required to use Claude this way still exceeds the abilities of the average internet troll. Still, experts have long warned about AI being used to generate malicious code, a phenomenon known as 'vibe hacking.' In February, Anthropic's competitor OpenAI reported that Chinese, Iranian, North Korean, and Russian adversaries had used its AI tools to conduct cyber operations.
In September, the Center for a New American Security (CNAS) released a report on the threat of AI-enabled cyberattacks. It noted that the costliest stages of most cyberattacks are planning, reconnaissance, and tool development. Automating these tasks with AI could dramatically change the landscape of offensive cyber operations, and that is exactly what appears to have happened in this campaign.
Caleb Uzi, the author of the CNAS report, emphasized that Anthropic's announcement is 'topical' in light of recent advances in AI. He added that 'the level of complexity with which this can be done almost autonomously using AI will only continue to rise.'
China's Cyber War
Anthropic claims that the hackers left enough evidence to determine that they were Chinese, although the Chinese embassy in the U.S. dismissed these allegations as 'slander and insult.'
This is an ironic achievement for Anthropic and the U.S. AI industry as a whole. Earlier this year, the Chinese language model DeepSeek caused a stir in Washington and Silicon Valley by demonstrating that, despite U.S. efforts to restrict China's access to the advanced semiconductor chips needed to develop AI models, China is not far behind American labs. There is thus a certain symbolism in the fact that even Chinese hackers prefer U.S.-developed chatbots for their cyber operations.
Over the past year, concerns have grown about the scale and sophistication of Chinese cyberattacks targeting the U.S. Examples include the Volt Typhoon campaign, which aims to pre-position state-backed cyber actors inside U.S. IT systems for potential attacks in the event of a major crisis or conflict, and the Salt Typhoon campaign, which targeted telecommunications companies in multiple countries and the communications of high-profile officials, including former President Donald Trump.
Officials say the scale and sophistication of these attacks far exceed anything seen before. And this may be only the beginning of the threats of the AI age.