Thursday, February 15, 2024

North Korean hackers take phishing efforts to next level with AI tools: Report

Microsoft and OpenAI reports of DPRK AI use confirm growing concerns, with one expert calling development ‘frightening’

North Korean cybercriminals have turned to artificial intelligence (AI) to advance their spear-phishing efforts targeting DPRK-focused experts and organizations, Microsoft and OpenAI announced Wednesday, a move that one expert called “frightening.”

Microsoft reported that it observed North Korean threat group Kimsuky, also known as “Emerald Sleet,” using large language models (LLMs) to research potential targets with expertise in DPRK defense and nuclear issues and generate content for phishing campaigns.

The software company’s report noted the cybercrime network’s focus on gathering intelligence from prominent experts on North Korea through phishing emails, particularly through campaigns impersonating academic institutions and non-profit organizations to lure targets into replying with insights about DPRK-related policies.

Microsoft and OpenAI, the developer of tools such as ChatGPT, also highlighted Kimsuky’s use of generative AI services to identify defense-focused experts and organizations in the Asia-Pacific region, learn more about publicly reported vulnerabilities, carry out basic coding tasks and draft content for social engineering campaigns.

The two companies added that they have disabled all accounts and assets associated with Kimsuky, as well as those used by threat actors linked to other countries, including China and Iran.

Dennis Desmond, a lecturer in cybersecurity at the University of the Sunshine Coast, said North Korea’s relatively early adoption of AI for cybercrime is “not surprising but frightening.”

“Perpetrators that are engaged in early technology adoption will be ahead of defenders in many respects, and they’ll also be able to leverage these capabilities by going after nation-states as well as organizations, small businesses and individuals,” he told NK News.

He explained that such technologies are already rendering conventional search engines obsolete, making it easier for cybercriminals to find and research potential targets and critical vulnerabilities while improving their coding skills and attack techniques.

Another benefit lies in the cost-effectiveness of these tools, which offer attackers significant savings on time and development costs, he added.

Microsoft and OpenAI’s reports on North Korean cybercriminals’ use of these emerging technologies confirm growing concerns that Pyongyang could supercharge its already prolific illicit cyber operations as it builds on its decades-long research into AI. 

South Korea’s National Intelligence Service (NIS) made similar claims last month that North Korean cybercriminals have been using generative AI to research potential targets and enhance their skills.

The spy agency stated at the time that it had yet to observe North Korean threat actors using these tools in actual cybercrime operations but warned they could divide national opinion in South Korea by “spreading fake news or deep-fake videos” ahead of parliamentary elections in April.

An NK Pro analysis last year highlighted the potential for North Korean cybercriminals to leverage AI tools to improve the language of their phishing campaigns and generate visual content to mask their identities online.

Possible improvements were already apparent in campaigns over the past year as phishing emails from groups like Kimsuky increasingly featured cleaner language compared to the clumsily written lures of past campaigns. 

The University of the Sunshine Coast’s Desmond said North Korean threat actors would continue developing these capabilities as they pursue other priority targets, including cryptocurrency and financial services, critical infrastructure, and software supply chains.

But as Pyongyang continues advancing its cyber operations, Desmond stated that those looking to defend against its attacks must strengthen their own AI capabilities.

“You have to fight fire with fire,” he said, calling for the development of capabilities that can recognize patterns and malicious AI-generated content more effectively.

“We’ve got to get better at detection and prevention, and I think that the use of AI obviously provides us the opportunity to develop these capabilities.”

Edited by Alannah Hill
