CyberHappenings

North Korean APTs Leverage AI to Enhance IT Worker Scams

3 unique sources, 3 articles

Summary


North Korea's state-linked APTs—particularly Jasper Sleet and Coral Sleet—continue to expand their IT worker scams, using AI to fabricate identities, automate social engineering, and deploy malware, while diversifying revenue streams to fund weapons programs. OFAC sanctions now confirm the scheme's scale and structure, revealing a multi-tiered network of recruiters, facilitators, IT workers, and collaborators that has infiltrated U.S. and international companies to steal sensitive data and extort victims. The use of AI tools like Faceswap for identity fabrication and Astrill VPN for geographic obfuscation underscores the sophistication of these operations, which are deeply embedded in North Korea's sanctions-evasion and revenue-generation machinery.

Initial reporting by Microsoft documented how Jasper Sleet and Coral Sleet leverage AI to research job postings, generate fake resumes, create culturally tailored digital personas, and develop web infrastructure for malicious purposes. These groups use AI coding tools to refine malware and jailbreak LLMs to generate malicious code, complicating detection while enabling long-term persistence as insider threats. The scheme's expansion into malware deployment and extortion further increases its impact, with a significant portion of earnings funneled back to North Korea to support its missile programs.

Timeline

  1. 06.03.2026 19:49 · 3 articles

    North Korean APTs Use AI to Enhance IT Worker Scams

    North Korean threat actors, specifically the clusters Jasper Sleet and Coral Sleet, are using AI to enhance their IT worker scams. These groups employ AI to fabricate identities, maintain digital personas, and socially engineer potential employers. AI tools help them research job postings, generate fake resumes, and create convincing digital identities. Once employed, they use AI to perform job tasks, generate code snippets, and develop web infrastructure for malicious purposes. The use of AI complicates detection and response efforts and makes these scams markedly more effective.

    New reporting confirms the scheme's scale and funding mechanisms: OFAC has sanctioned six individuals and two entities for their involvement in the DPRK IT worker network, which defrauds U.S. businesses and generates illicit revenue to fund North Korea's WMD programs. The fraudulent scheme, also tracked as Jasper Sleet/Coral Sleet, PurpleDelta, and Wagemole, relies on bogus documentation, stolen identities, and fabricated personas to obscure operatives' true origins and land jobs at legitimate companies. A disproportionate portion of salaries is funneled back to North Korea in violation of international sanctions.

    Additional operational details have emerged, including the use of Astrill VPN to bypass geographic restrictions by tunneling traffic through U.S. exit nodes, and the AI application Faceswap to insert North Korean workers' faces into stolen identity documents for polished resume headshots. The scheme's multi-tiered structure involves recruiters, facilitators, IT workers, and collaborators—many recruited via LinkedIn and GitHub—to penetrate organizations more deeply and sustain long-term access. Collaborators donate identities to help IT workers obtain company-issued laptops and maintain credibility. Threat actors also deploy malware to steal proprietary data and extort victims, further escalating the scheme's impact.



Similar Happenings

AI Tools Lower Barrier to Entry for Sophisticated Cyber Attacks

Cloudflare's 2026 Threat Report highlights how AI tools, particularly large language models (LLMs), have significantly lowered the barrier to entry for cybercriminals, enabling them to conduct more effective and impactful attacks rapidly and at scale. The report details how various threat actors, including state-sponsored groups, financially motivated cybercriminals, and hacktivists, are leveraging AI to enhance phishing emails, write malware, and map networks in real-time. Additionally, AI-generated deepfakes are being used to bypass hiring filters and embed malicious insiders within organizations. The report warns of the 'total industrialization of cyber threats' and emphasizes the need for organizations to adopt real-time, actionable intelligence to stay ahead of evolving attack tactics.

AI-Driven Cyberattacks Exploit Network Vulnerabilities

Adversarial AI-based attacks, such as those by Scattered Spider, are accelerating and leveraging living-off-the-land methods to spread and evade detection. These attacks use AI orchestration to perform network reconnaissance, discover vulnerabilities, move laterally, and harvest data at speeds that overwhelm manual detection methods. The Cloud Security Alliance report highlights over 70 ways autonomous AI-based agents can attack enterprise systems, expanding the attack surface beyond traditional security practices. Network Detection and Response (NDR) systems are increasingly being adopted to counter these AI-driven threats by providing real-time monitoring, analyzing network data, and identifying abnormal traffic patterns. NDR solutions can detect fast-moving, polymorphic attacks, summarize network activities, and render verdicts on potential threats, reducing the pressure on SOC analysts. Recent reports from Google's Threat Intelligence Group and Anthropic have revealed new AI-fueled attack methods, including the use of LLMs to generate malicious scripts and AI-orchestrated cyber espionage campaigns. Adversaries are also exploiting AV exclusion rules and using steganography techniques to evade detection. The combined use of NDR and EDR is essential for detecting and mitigating these sophisticated attacks.
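
The baseline-deviation idea behind such NDR anomaly detection can be sketched minimally. This is an illustrative toy, not any vendor's detection logic; the hosts, byte counts, and z-score threshold are all made-up assumptions:

```python
# Toy sketch of baseline-deviation detection: flag hosts whose current
# traffic volume sits far above the distribution of baseline volumes.
# Real NDR systems use far richer features (flows, protocols, timing).
from statistics import mean, stdev

def flag_anomalous_hosts(baseline, current, threshold=3.0):
    """Return (host, z-score) pairs where the current byte count exceeds
    the baseline mean by more than `threshold` standard deviations."""
    mu = mean(baseline.values())
    sigma = stdev(baseline.values())
    flagged = []
    for host, volume in current.items():
        z = (volume - mu) / sigma if sigma else 0.0
        if z > threshold:
            flagged.append((host, round(z, 1)))
    return flagged

# Hypothetical per-host daily byte counts (in KB).
baseline = {"10.0.0.1": 1200, "10.0.0.2": 1100, "10.0.0.3": 1300, "10.0.0.4": 1250}
current = {"10.0.0.1": 1210, "10.0.0.2": 9800, "10.0.0.3": 1290}
print(flag_anomalous_hosts(baseline, current))  # only 10.0.0.2 stands out
```

The point of the sketch is the shift in posture the passage describes: rather than matching known-bad signatures, the detector models "normal" and surfaces deviations, which is what lets NDR catch fast-moving, polymorphic activity that signature-based tools miss.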

State-Backed Hackers Abuse AI Models for Advanced Cyber Attacks

Google's Threat Intelligence Group (GTIG) has identified new malware families that leverage artificial intelligence (AI) and large language models (LLMs) for dynamic self-modification during execution. These families, including PromptFlux, PromptSteal, FruitShell, QuietVault, and PromptLock, demonstrate advanced capabilities for evading detection and maintaining persistence.

PromptFlux, an experimental VBScript dropper, uses Google's Gemini LLM to generate obfuscated VBScript variants and evade antivirus software. It attempts persistence via Startup folder entries and spreads laterally on removable drives and mapped network shares. The malware appears to be in a development or testing phase and is assessed to be financially motivated. PromptSteal is a data miner written in Python that queries the LLM Qwen2.5-Coder-32B-Instruct to generate one-line Windows commands to collect information and documents from specific folders and send the data to a command-and-control (C2) server. It is used by the Russian state-sponsored actor APT28 in attacks targeting Ukraine.

State-backed hackers from China (APT31, Temp.HEX), Iran (APT42), North Korea (UNC2970), and Russia have used Gemini across all stages of an attack, including reconnaissance, phishing lure creation, C2 development, and data exfiltration. Chinese threat actors used Gemini to automate vulnerability analysis and produce targeted testing plans against specific US-based targets, while Iranian adversary APT42 leveraged it for social engineering campaigns and to speed up the creation of tailored malicious tools. The use of AI in malware enables adversaries to create more versatile and adaptive threats, posing significant challenges for cybersecurity defenses. The underground market for AI-powered cybercrime tools is also growing, with offerings ranging from deepfake generation to malware development and vulnerability exploitation.

Microsoft Reports Surge in AI-Driven Cyber Threats and Defenses

Microsoft's Digital Defense Report 2025 highlights a dramatic escalation in AI-driven cyber attacks. Microsoft's systems analyze over 100 trillion security signals daily, reflecting the sheer volume and growing sophistication of cyber threats. Adversaries are leveraging generative AI to automate phishing, scale social engineering, and discover vulnerabilities faster than humans can patch them. Autonomous malware adapts tactics in real time to bypass security systems, and AI tools themselves are becoming high-value targets. Microsoft's AI-powered defenses have reduced response times from hours to seconds, but defenders must remain vigilant as AI increases the speed and impact of cyber operations. Identity compromise remains a dominant attack vector, with phishing and social engineering accounting for 28% of breaches. Multi-factor authentication (MFA) prevents over 99% of unauthorized access attempts, but adoption rates are uneven, and the rise of infostealers has fueled credential-based intrusions. The United States accounted for 24.8% of all observed attacks between January and June 2025, followed by the United Kingdom, Israel, and Germany. Government agencies, IT providers, and research institutions were among the most frequently targeted sectors. Ransomware remains a primary threat, with over 40% of recent cases involving hybrid cloud components.

ChatGPT Misuse by Nation-State Actors for Malware Development and Influence Operations

OpenAI has disrupted multiple activity clusters misusing its ChatGPT AI tool for cyberattacks, including nation-state actors from Russia, North Korea, and China. These actors have used ChatGPT to develop malware, conduct phishing campaigns, and engage in influence operations. The Russian threat actor developed a remote access trojan (RAT) and credential stealer, while the North Korean group created malware and command-and-control (C2) infrastructure. The Chinese group, UNK_DropPitch, generated phishing content and tooling for routine tasks. Additionally, Chinese law enforcement used ChatGPT to draft and edit reports on smear campaigns against Chinese dissidents and Japanese Prime Minister Sanae Takaichi. OpenAI also blocked accounts used for scams, influence operations, and surveillance, including networks from Cambodia, Myanmar, Nigeria, and individuals linked to Chinese government entities.