North Korean APTs Leverage AI to Enhance IT Worker Scams
Summary
North Korea's state-linked APTs—particularly Jasper Sleet and Coral Sleet—continue to expand their IT worker scams, using AI to fabricate identities, automate social engineering, and deploy malware while diversifying revenue streams to fund weapons programs. OFAC sanctions now confirm the scheme's scale and structure, revealing a multi-tiered network of recruiters, facilitators, IT workers, and collaborators that has infiltrated U.S. and international companies to steal sensitive data and extort victims. The use of AI tools like Faceswap for identity fabrication and Astrill VPN for geographic obfuscation underscores the sophistication of these operations, which are deeply embedded in North Korea's sanctions-evasion and revenue-generation machinery.

Initial reporting by Microsoft documented how Jasper Sleet and Coral Sleet leverage AI to research job postings, generate fake resumes, create culturally tailored digital personas, and develop web infrastructure for malicious purposes. These groups use AI coding tools to refine malware and jailbreak LLMs to generate malicious code, complicating detection while enabling long-term persistence as insider threats. The scheme's expansion into malware deployment and extortion further increases its impact, with a significant portion of earnings funneled back to North Korea to support its missile programs.
Timeline
- 06.03.2026 19:49 · 3 articles
North Korean APTs Use AI to Enhance IT Worker Scams
North Korean threat actors, specifically the clusters Jasper Sleet and Coral Sleet, are using AI to enhance their IT worker scams. These groups employ AI to fabricate identities, maintain digital personas, and socially engineer potential employers. The AI tools help them research job postings, generate fake resumes, and create convincing digital identities. Once employed, they use AI to perform job tasks, generate code snippets, and develop web infrastructure for malicious purposes. The use of AI complicates detection and response efforts, making these scams more effective and harder to uncover.

New reporting confirms the scheme's scale and funding mechanisms: OFAC has sanctioned six individuals and two entities for their involvement in the DPRK IT worker network, which defrauds U.S. businesses and generates illicit revenue to fund North Korea's WMD programs. The fraudulent scheme, also tracked as Coral Sleet/Jasper Sleet, PurpleDelta, and Wagemole, relies on bogus documentation, stolen identities, and fabricated personas to obscure operatives' true origins and land jobs at legitimate companies. A disproportionate portion of salaries is funneled back to North Korea in violation of international sanctions.

Additional operational details have emerged, including the use of Astrill VPN to bypass geographic restrictions by tunneling traffic through U.S. exit nodes, and the AI application Faceswap to insert North Korean workers' faces into stolen identity documents for polished resume headshots. The scheme's multi-tiered structure involves recruiters, facilitators, IT workers, and collaborators—many recruited from LinkedIn and GitHub—to penetrate organizations more deeply and sustain long-term access. Collaborators donate identities to help IT workers obtain company-issued laptops and maintain credibility. Threat actors also deploy malware to steal proprietary data and extort victims, further escalating the scheme's impact.
- North Korean APTs Use AI to Enhance IT Worker Scams — www.darkreading.com — 06.03.2026 19:49
- Microsoft: Hackers abusing AI at every stage of cyberattacks — www.bleepingcomputer.com — 07.03.2026 17:15
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
Information Snippets
- Jasper Sleet and Coral Sleet use AI to research job postings and generate fake resumes and cover letters.
  First reported: 06.03.2026 19:49 · 2 sources, 2 articles
- North Korean APTs Use AI to Enhance IT Worker Scams — www.darkreading.com — 06.03.2026 19:49
- Microsoft: Hackers abusing AI at every stage of cyberattacks — www.bleepingcomputer.com — 07.03.2026 17:15
- AI tools help create convincing digital personas, including fake names, email addresses, and social media handles.
  First reported: 06.03.2026 19:49 · 2 sources, 2 articles
- North Korean APTs Use AI to Enhance IT Worker Scams — www.darkreading.com — 06.03.2026 19:49
- Microsoft: Hackers abusing AI at every stage of cyberattacks — www.bleepingcomputer.com — 07.03.2026 17:15
- Threat actors use AI to maintain their personas during interviews and daily communication.
  First reported: 06.03.2026 19:49 · 2 sources, 2 articles
- North Korean APTs Use AI to Enhance IT Worker Scams — www.darkreading.com — 06.03.2026 19:49
- Microsoft: Hackers abusing AI at every stage of cyberattacks — www.bleepingcomputer.com — 07.03.2026 17:15
- AI is used to generate code snippets, develop web infrastructure, and automate cyberattack workflows.
  First reported: 06.03.2026 19:49 · 2 sources, 2 articles
- North Korean APTs Use AI to Enhance IT Worker Scams — www.darkreading.com — 06.03.2026 19:49
- Microsoft: Hackers abusing AI at every stage of cyberattacks — www.bleepingcomputer.com — 07.03.2026 17:15
- Organizations are becoming more vigilant, incorporating verification questions during remote interviews.
  First reported: 06.03.2026 19:49 · 1 source, 1 article
- North Korean APTs Use AI to Enhance IT Worker Scams — www.darkreading.com — 06.03.2026 19:49
- Jasper Sleet uses generative AI platforms to streamline the development of fraudulent digital personas, including generating culturally appropriate name lists and email address formats.
  First reported: 07.03.2026 17:15 · 2 sources, 2 articles
- Microsoft: Hackers abusing AI at every stage of cyberattacks — www.bleepingcomputer.com — 07.03.2026 17:15
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- Jasper Sleet uses AI to review job postings for software development and IT-related roles on professional platforms, prompting the tools to extract and summarize required skills.
  First reported: 07.03.2026 17:15 · 2 sources, 2 articles
- Microsoft: Hackers abusing AI at every stage of cyberattacks — www.bleepingcomputer.com — 07.03.2026 17:15
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- Coral Sleet uses AI to quickly generate fake company sites, provision infrastructure, and test and troubleshoot their deployments.
  First reported: 07.03.2026 17:15 · 2 sources, 2 articles
- Microsoft: Hackers abusing AI at every stage of cyberattacks — www.bleepingcomputer.com — 07.03.2026 17:15
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- Threat actors use AI coding tools to generate and refine malicious code, troubleshoot errors, and port malware components to different programming languages.
  First reported: 07.03.2026 17:15 · 2 sources, 2 articles
- Microsoft: Hackers abusing AI at every stage of cyberattacks — www.bleepingcomputer.com — 07.03.2026 17:15
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- Some malware experiments show signs of AI-enabled malware that dynamically generates scripts or modifies its behavior at runtime.
  First reported: 07.03.2026 17:15 · 1 source, 1 article
- Microsoft: Hackers abusing AI at every stage of cyberattacks — www.bleepingcomputer.com — 07.03.2026 17:15
- Threat actors are using jailbreaking techniques to trick LLMs into generating malicious code or content.
  First reported: 07.03.2026 17:15 · 2 sources, 2 articles
- Microsoft: Hackers abusing AI at every stage of cyberattacks — www.bleepingcomputer.com — 07.03.2026 17:15
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- Microsoft advises organizations to treat these schemes and similar activity as insider risks and to focus on detecting abnormal credential use, hardening identity systems against phishing, and securing AI systems.
  First reported: 07.03.2026 17:15 · 2 sources, 2 articles
- Microsoft: Hackers abusing AI at every stage of cyberattacks — www.bleepingcomputer.com — 07.03.2026 17:15
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
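Microsoft's guidance above centers on flagging abnormal credential use by remote workers. As a minimal sketch of what such a check might look like, the heuristic below flags users whose sign-ins geolocate outside their registered country or span multiple countries; the `SignIn` structure, field names, and thresholds are illustrative assumptions, not any vendor's actual detection logic.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SignIn:
    user: str
    country: str       # geolocated source country of the sign-in
    timestamp: datetime

def flag_abnormal_signins(events, home_country="US", max_countries=1):
    """Flag users whose recent sign-ins come from unexpected countries.

    A crude heuristic: any sign-in outside the employee's registered
    country, or sign-ins spanning more countries than expected, is
    surfaced for analyst review.
    """
    by_user = {}
    for e in events:
        by_user.setdefault(e.user, []).append(e)

    flagged = {}
    for user, evs in by_user.items():
        countries = {e.country for e in evs}
        if countries - {home_country} or len(countries) > max_countries:
            flagged[user] = sorted(countries)
    return flagged
```

Real deployments would layer this with impossible-travel timing, device fingerprints, and VPN/proxy intelligence rather than geography alone.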
- The U.S. Department of the Treasury's Office of Foreign Assets Control (OFAC) has sanctioned six individuals and two entities for their involvement in the Democratic People's Republic of Korea (DPRK) IT worker scheme to defraud U.S. businesses and generate illicit revenue for North Korea's weapons of mass destruction (WMD) programs.
  First reported: 18.03.2026 19:26 · 1 source, 1 article
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- The fraudulent scheme, also tracked as Coral Sleet/Jasper Sleet, PurpleDelta, and Wagemole, relies on bogus documentation, stolen identities, and fabricated personas to help IT workers obscure their true origins and land jobs at legitimate companies in the U.S. and elsewhere.
  First reported: 18.03.2026 19:26 · 1 source, 1 article
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- A disproportionate portion of the salaries earned by these IT workers is funneled back to North Korea to facilitate its missile programs in violation of international sanctions.
  First reported: 18.03.2026 19:26 · 1 source, 1 article
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- In some cases, these efforts are complemented by the deployment of malware to steal proprietary and sensitive information, as well as by extortion, with actors demanding ransoms in return for not publicly leaking the stolen data.
  First reported: 18.03.2026 19:26 · 1 source, 1 article
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- Amnokgang Technology Development Company is an IT company that manages delegations of overseas IT workers and conducts illicit procurement activities to obtain and sell military and commercial technology through overseas networks.
  First reported: 18.03.2026 19:26 · 1 source, 1 article
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- Nguyen Quang Viet, CEO of Vietnamese company Quangvietdnbg International Services Company Limited, facilitated currency conversion services for North Koreans, converting about $2.5 million into cryptocurrency between mid-2023 and mid-2025.
  First reported: 18.03.2026 19:26 · 1 source, 1 article
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- Do Phi Khanh acted as a proxy for Kim Se Un (sanctioned in July 2025) by allowing Kim to use his identity to open bank accounts and launder proceeds from IT workers.
  First reported: 18.03.2026 19:26 · 1 source, 1 article
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- Yun Song Guk, a North Korean national, has led a group of IT workers conducting freelance IT work from Boten, Laos, since at least 2023, coordinating financial transactions exceeding $70,000 related to IT services.
  First reported: 18.03.2026 19:26 · 1 source, 1 article
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- The IT worker scheme uses Astrill VPN to bypass China's Great Firewall and tunnel traffic through U.S. exit nodes, allowing operatives to masquerade as domestic employees.
  First reported: 18.03.2026 19:26 · 1 source, 1 article
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
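The exit-node detail above suggests a complementary defensive check: screening remote workers' apparent U.S. sign-in addresses against known commercial VPN/proxy exit ranges. The sketch below uses Python's standard `ipaddress` module; the CIDR blocks are RFC 5737 documentation ranges standing in for a real threat-intel feed, not actual Astrill infrastructure, and the function names are illustrative.

```python
import ipaddress

# Placeholder ranges only -- a real deployment would load a maintained
# feed of commercial VPN/proxy exit ranges. These CIDRs are reserved
# documentation blocks, not actual VPN infrastructure.
VPN_EXIT_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_vpn_exit(ip: str) -> bool:
    """Return True if the address falls inside a known VPN exit range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in VPN_EXIT_RANGES)

def audit_worker_ips(worker_ips: dict) -> list:
    """Return workers whose 'U.S.' sign-in address is a known VPN exit."""
    return [w for w, ip in worker_ips.items() if is_vpn_exit(ip)]
```

A hit is not proof of fraud (many legitimate employees use VPNs), but combined with the hiring-stage signals described elsewhere in this brief it is a reasonable trigger for additional identity verification.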
- Threat actors use an AI application called Faceswap to insert the faces of North Korean IT workers into stolen identity documents and generate polished headshots for resumes.
  First reported: 18.03.2026 19:26 · 1 source, 1 article
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- The IT worker scheme involves a multi-tiered operational structure of recruiters, facilitators, IT workers, and collaborators, with collaborators providing their personal identities to help IT workers complete the hiring process and receive company-issued laptops.
  First reported: 18.03.2026 19:26 · 1 source, 1 article
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- Collaborators are primarily recruited via LinkedIn and GitHub and, wittingly or unwittingly, donate their identities for use in the IT worker fraud scheme.
  First reported: 18.03.2026 19:26 · 1 source, 1 article
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
- North Korea's IT worker operations are assessed as widespread and deeply integrated within the DPRK party-state, serving as a core revenue-generation and sanctions-evasion mechanism.
  First reported: 18.03.2026 19:26 · 1 source, 1 article
- OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs — thehackernews.com — 18.03.2026 19:26
Similar Happenings
AI Tools Lower Barrier to Entry for Sophisticated Cyber Attacks
Cloudflare's 2026 Threat Report highlights how AI tools, particularly large language models (LLMs), have significantly lowered the barrier to entry for cybercriminals, enabling them to conduct more effective and impactful attacks rapidly and at scale. The report details how various threat actors, including state-sponsored groups, financially motivated cybercriminals, and hacktivists, are leveraging AI to enhance phishing emails, write malware, and map networks in real-time. Additionally, AI-generated deepfakes are being used to bypass hiring filters and embed malicious insiders within organizations. The report warns of the 'total industrialization of cyber threats' and emphasizes the need for organizations to adopt real-time, actionable intelligence to stay ahead of evolving attack tactics.
AI-Driven Cyberattacks Exploit Network Vulnerabilities
Adversarial AI-based attacks, such as those by Scattered Spider, are accelerating and leveraging living-off-the-land methods to spread and evade detection. These attacks use AI orchestration to perform network reconnaissance, discover vulnerabilities, move laterally, and harvest data at speeds that overwhelm manual detection methods. The Cloud Security Alliance report highlights over 70 ways autonomous AI-based agents can attack enterprise systems, expanding the attack surface beyond traditional security practices. Network Detection and Response (NDR) systems are increasingly being adopted to counter these AI-driven threats by providing real-time monitoring, analyzing network data, and identifying abnormal traffic patterns. NDR solutions can detect fast-moving, polymorphic attacks, summarize network activities, and render verdicts on potential threats, reducing the pressure on SOC analysts. Recent reports from Google's Threat Intelligence Group and Anthropic have revealed new AI-fueled attack methods, including the use of LLMs to generate malicious scripts and AI-orchestrated cyber espionage campaigns. Adversaries are also exploiting AV exclusion rules and using steganography techniques to evade detection. The combined use of NDR and EDR is essential for detecting and mitigating these sophisticated attacks.
State-Backed Hackers Abuse AI Models for Advanced Cyber Attacks
Google's Threat Intelligence Group (GTIG) has identified new malware families that leverage artificial intelligence (AI) and large language models (LLMs) for dynamic self-modification during execution. These malware families, including PromptFlux, PromptSteal, FruitShell, QuietVault, and PromptLock, demonstrate advanced capabilities for evading detection and maintaining persistence. PromptFlux, an experimental VBScript dropper, uses Google's LLM Gemini to generate obfuscated VBScript variants and evade antivirus software. It attempts persistence via Startup folder entries and spreads laterally on removable drives and mapped network shares. The malware is in a development or testing phase and is assessed to be financially motivated. PromptSteal is a data miner written in Python that queries the LLM Qwen2.5-Coder-32B-Instruct to generate one-line Windows commands to collect information and documents in specific folders and send the data to a command-and-control (C2) server. It is used by the Russian state-sponsored actor APT28 in attacks targeting Ukraine. State-backed hackers from China (APT31, Temp.HEX), Iran (APT42), North Korea (UNC2970), and Russia have used Gemini AI across all stages of an attack, including reconnaissance, phishing lure creation, C2 development, and data exfiltration. Chinese threat actors used Gemini to automate vulnerability analysis and provide targeted testing plans against specific US-based targets. Iranian adversary APT42 leveraged Gemini for social engineering campaigns and to speed up the creation of tailored malicious tools. The use of AI in malware enables adversaries to create more versatile and adaptive threats, posing significant challenges for cybersecurity defenses. The underground market for AI-powered cybercrime tools is also growing, with offerings ranging from deepfake generation to malware development and vulnerability exploitation.
Microsoft reports surge in AI-driven cyber threats and defenses
Microsoft's Digital Defense Report 2025 highlights a dramatic escalation in AI-driven cyber attacks. Microsoft systems analyze over 100 trillion security signals daily, indicating the growing sophistication and volume of cyber threats. Adversaries are leveraging generative AI to automate phishing, scale social engineering, and discover vulnerabilities faster than humans can patch them. Autonomous malware adapts tactics in real-time to bypass security systems, and AI tools themselves are becoming high-value targets. Microsoft's AI-powered defenses have reduced response times from hours to seconds, but defenders must remain vigilant as AI increases the speed and impact of cyber operations. Identity compromise remains a dominant attack vector, with phishing and social engineering accounting for 28% of breaches. Multi-factor authentication (MFA) prevents over 99% of unauthorized access attempts, but adoption rates are uneven. The rise of infostealers has fueled credential-based intrusions. The United States accounted for 24.8% of all observed attacks between January and June 2025, followed by the United Kingdom, Israel, and Germany. Government agencies, IT providers, and research institutions were among the most frequently targeted sectors. Ransomware remains a primary threat, with over 40% of recent cases involving hybrid cloud components.
ChatGPT Misuse by Nation-State Actors for Malware Development and Influence Operations
OpenAI has disrupted multiple activity clusters misusing its ChatGPT AI tool for cyberattacks, including nation-state actors from Russia, North Korea, and China. These actors have used ChatGPT to develop malware, conduct phishing campaigns, and engage in influence operations. The Russian threat actor developed a remote access trojan (RAT) and credential stealer, while the North Korean group created malware and command-and-control (C2) infrastructure. The Chinese group, UNK_DropPitch, generated phishing content and tooling for routine tasks. Additionally, Chinese law enforcement used ChatGPT to draft and edit reports on smear campaigns against Chinese dissidents and Japanese Prime Minister Sanae Takaichi. OpenAI also blocked accounts used for scams, influence operations, and surveillance, including networks from Cambodia, Myanmar, Nigeria, and individuals linked to Chinese government entities.