Multiple state-sponsored threat groups have begun using ChatGPT to write phishing emails.
A report released by Microsoft in conjunction with OpenAI states that hackers are using large language models (LLMs) such as ChatGPT to improve their cyberattack strategies.
According to the study, some state-sponsored hacking groups have upgraded their attack methods by applying these models to tasks such as data gathering, file manipulation, and multitasking.
The report comes shortly after news broke last month that Microsoft had been breached by a Russian-backed hacking group, a compromise in which the email accounts of some of the company's senior leadership were accessed.
## What was found in the report?
[This report from Microsoft] was produced in conjunction with its partner OpenAI and aims to explore how generative AI technologies can be used safely and responsibly.
The report reveals that some adversaries are integrating AI capabilities into their strategies and operational procedures.
The report cites cyberattacks by the Strontium group, which is linked to Russian military intelligence and considered an "extremely effective threat actor." The study found that, with the assistance of AI, the group was able to employ LLM-enabled reconnaissance and LLM-enhanced scripting techniques.
Simply put, LLM-enabled reconnaissance refers to using generative AI to research technologies such as satellite communication protocols and radar imaging tools, providing deeper understanding of, and valuable insight into, potential targets.
Similarly, LLM-enhanced scripting techniques refer to the use of AI models to create code snippets that can perform specific tasks during an attack.
Such uses of LLMs exemplify a larger, worrying trend of cybercriminals using generative AI to write code that disables antivirus systems and deletes files and directories so that data breaches are not flagged as anomalous.
## Industries at risk
Several threat groups are mentioned in the study, including Strontium, Charcoal Typhoon, and Salmon Typhoon, and their targets span a wide range of industry sectors, including:
- Defense
- Renewable energy
- Government organizations
- Non-governmental organizations (NGOs)
- Oil and gas
- Science and technology
- Transportation and logistics
It is worth noting that Salmon Typhoon (also known as Sodium) has carried out attacks on U.S. defense agencies in the past.
The report also highlights that different hacking groups target specific regions; for example, Charcoal Typhoon (also known as Chromium) primarily targeted organizations in Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal.
## How Microsoft is fighting back
The report suggests that the most straightforward answer to AI-powered attacks is to use AI technology to fight back.
> "AI can help attackers increase the sophistication of their attacks, and they have ample resources to carry them out. Microsoft tracks more than 300 threat organizations, and we're also using AI technology for protection, detection and response." - Homa Hayatyfar, Microsoft's Manager of Primary Detection Analytics
In line with this strategy, Microsoft is developing [Security Copilot, a new GPT-4-powered AI assistant] designed to detect cyber threats and security risks faster and summarize them before they can cause harm. The tool will also help target and strengthen security measures accordingly.
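To make the idea concrete, the sketch below shows the general pattern of LLM-assisted alert triage: feed a raw security alert to a GPT-4 model and ask for a summary, a severity estimate, and suggested next steps. This is only an illustration using the public OpenAI Python client (it assumes an `OPENAI_API_KEY` in the environment and a hypothetical `summarize_alert` helper); it is not Microsoft's actual Security Copilot implementation.

```python
# Minimal sketch of LLM-assisted alert triage (illustrative only; not
# Microsoft's Security Copilot). Assumes the official OpenAI Python
# client and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def summarize_alert(raw_alert: str) -> str:
    """Ask a GPT-4 model to summarize a raw security alert and suggest next steps."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security analyst assistant. Summarize the alert, "
                    "estimate its severity, and list recommended next steps."
                ),
            },
            {"role": "user", "content": raw_alert},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical alert text used purely for demonstration.
    alert = (
        "Multiple failed sign-in attempts for admin@example.com from an "
        "unfamiliar IP, followed by a successful login and a new inbox rule."
    )
    print(summarize_alert(alert))
```

The design choice here mirrors the report's framing: the model does not replace detection systems but condenses their output so analysts can respond faster.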
The tech giant is still working to raise security standards on its legacy systems following last month's breach of company executives' email accounts and of its Azure cloud service.