Microsoft and OpenAI said Wednesday that artificial intelligence tools are being used by foreign government hackers to improve their cyberattacks.
In a blog post, OpenAI said the companies have disrupted five hacking attempts by nation-state-backed groups from China, Russia, Iran and North Korea that were using ChatGPT and other AI tools in their cyberattack operations.
According to OpenAI, the five actors were “two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard.”
These nation-state-backed hackers tried to use OpenAI services for “querying open-source information, translating, finding coding errors, and running basic coding tasks.”
OpenAI said it terminated the accounts identified as associated with these hackers.
The China-affiliated actors tried to use AI to research various companies and cybersecurity tools, debug code and create content for phishing campaigns.
They also sought to translate technical papers, gather public information on intelligence agencies and research ways to hide hacking processes on a compromised system.
The Iran-affiliated effort focused on spear-phishing campaigns, scripting related to web and app development, and researching ways for malware to evade detection.
OpenAI said North Korean hackers tried “to identify experts and organizations focused on defense issues in the Asia-Pacific region, understand publicly available vulnerabilities, help with basic scripting tasks, and draft content that could be used in phishing campaigns.”
Russia-affiliated hackers, according to OpenAI, used AI for research into satellite communication protocols and radar imaging technology as well as scripting tasks.
Microsoft said in its own blog post that it has seen attackers using large language models and other forms of AI as “another productivity tool on the offensive landscape,” but that the companies “have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors’ usage of AI.”
“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft said.
Microsoft said it is taking measures together with OpenAI to “disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models.”
Microsoft added that it is deeply committed to using generative AI to disrupt threat actors and to leveraging new tools to strengthen cyber defenses everywhere.
According to Microsoft, it is tracking more than 300 threat actors, including 160 nation-state actors and 50 ransomware groups.
“Microsoft will continue to track threat actors and malicious activity misusing LLMs, and work with OpenAI and other partners to share intelligence, improve protections for customers and aid the broader security community,” Microsoft’s Wednesday statement said.