Chinese hackers hijack Anthropic AI in 1st ‘large scale’ cyberattack

Chinese workers pictured in November 2019 cleaning the exterior of a Cyberspace Administration of China office in Beijing. On Thursday, tech giant Anthropic confirmed it detected "suspicious activity" that was later determined to be a "highly sophisticated" espionage campaign allegedly sponsored by the Chinese government. File Photo by Stephen Shaver/UPI

Tech giant Anthropic confirmed that Chinese state-sponsored actors seized control of its AI model Claude to execute a large-scale cyberattack with little human involvement.

On Thursday, Anthropic officials said in a blog post that the company detected "suspicious activity" in mid-September, which a later investigation determined was a "highly sophisticated espionage campaign."

The company added it had "high" confidence the campaign was the work of a China-backed cyber group.

The Chinese state-sponsored syndicate, which Anthropic called "GTG-1002," reportedly hijacked its artificial intelligence tool Claude to handle 80% to 90% of a cyberattack on about 30 global targets.

According to Anthropic, the campaign targeted a slew of government agencies, financial institutions, chemical-manufacturing plants and big tech firms.

In a “small number” of cases, the company added, the cyber infiltration was successful.

AI-assisted hacking has been seen to a limited degree in recent years, but Amazon-backed Anthropic says it believes this episode is the first documented "large-scale" cyberattack executed primarily by AI.

Anthropic said it had safeguards in place designed to prevent abuse of its product.

But it said the hijackers, claiming to be conducting defensive testing for a legitimate cybersecurity firm, jailbroke Claude by breaking the operation down into smaller, seemingly innocuous requests to avoid detection.

Anthropic said it opted to share the information to help the cybersecurity industry improve its defenses against similar AI-driven attacks in the future.

“The sheer amount of work performed by the AI would have taken vast amounts of time for a human team,” according to California-based Anthropic.

The tech company said the attack likely required only sporadic human intervention, at "perhaps" four to six "critical decision points" per hacking campaign.

“The AI made thousands of requests per second — an attack speed that would have been, for human hackers, simply impossible to match,” the blog post continued.

“Automated cyber-attacks can scale much faster than human-led operations and are able to overwhelm traditional defenses,” Jake Moore, global cybersecurity advisor for internet security firm ESET, told Business Insider.

In February of last year, Microsoft and OpenAI publicly revealed that their artificial intelligence tools were being used by state-backed hackers in China, Russia, Iran and North Korea to improve their cyber operations.

Moore said Thursday that the attack on Anthropic is not only an example of what many have feared, but that the "wider impact is now how these attacks allow very low-skilled actors to launch complex intrusions at relatively low costs."

“AI is used in defense as well as offensively, so security equally now depends on automation and speed rather than just human expertise across organizations,” he stated.
