AI Browsers Vulnerable to PromptFix Exploit for Malicious Prompts
Summary
Hide ▲
Show ▼
AI-driven browsers are vulnerable to a new prompt injection technique called PromptFix, which tricks them into executing malicious actions. The exploit embeds harmful instructions within fake CAPTCHA checks on web pages, leading AI browsers to interact with phishing sites or fraudulent storefronts without user intervention. This vulnerability affects AI browsers like Perplexity's Comet, which can be manipulated into performing actions such as purchasing items on fake websites or entering credentials on phishing pages. Researchers have demonstrated that AI browsers can be tricked into phishing scams in under four minutes by exploiting agentic blabbering. The attack leverages the AI browser's tendency to reason its actions and use it against the model to lower security guardrails. By intercepting traffic between the browser and AI services and feeding it to a Generative Adversarial Network (GAN), researchers made Perplexity's Comet AI browser fall victim to a phishing scam. The technique builds on prior methods like VibeScamming and Scamlexity, which exploit hidden prompt injections to carry out malicious actions. The attack involves building a 'scamming machine' that iteratively optimizes and regenerates a phishing page until the AI browser stops complaining and proceeds with the threat actor's actions. Once a fraudster iterates on a web page until it works against a specific AI browser, it works on all users relying on the same agent. The disclosure comes as Trail of Bits demonstrated four prompt injection techniques against the Comet browser to extract users' private information. Zenity Labs detailed two zero-click attacks affecting Perplexity's Comet, using indirect prompt injection to exfiltrate local files or hijack a user's 1Password account. Prompt injection attacks remain a fundamental security challenge for large language models (LLMs) and their integration into organizational workflows. OpenAI noted that prompt injection vulnerabilities in agentic browsers are unlikely to be fully resolved, but risks can be reduced through automated attack discovery and adversarial training.
Timeline
-
20.08.2025 16:01 4 articles · 6mo ago
PromptFix Exploit Demonstrated on AI-Driven Browsers
Researchers have demonstrated a new prompt injection technique called PromptFix that tricks AI-driven browsers into executing malicious actions. The exploit embeds harmful instructions within fake CAPTCHA checks on web pages, leading AI browsers to interact with phishing sites or fraudulent storefronts without user intervention. The technique affects AI browsers like Perplexity's Comet and can be triggered by simple instructions, resulting in automated actions on fake websites. The exploit leverages the AI's design goal of assisting users quickly and without hesitation, leading to a new form of scam called Scamlexity. This involves AI systems autonomously pursuing goals and making decisions with minimal human supervision, increasing the complexity and invisibility of scams. The exploit can result in drive-by download attacks, where malicious payloads are downloaded without user involvement. AI systems need robust guardrails for phishing detection, URL reputation checks, domain spoofing, and malicious file detection. Guardio's tests revealed that agentic AI browsers are vulnerable to phishing, prompt injection, and purchasing from fake shops. Comet was directed to a fake shop and completed a purchase without human confirmation. Comet also treated a fake Wells Fargo email as genuine and entered credentials on a phishing page. Additionally, Comet interpreted hidden instructions in a fake CAPTCHA page, triggering a malicious file download. AI firms are integrating AI functionality into browsers, allowing software agents to automate workflows, but enterprise security teams need to balance automation's benefits with the risks posed by the fact that artificial intelligence lacks security awareness. Security has largely been put on the back burner, and AI browser agents from major AI firms failed to reliably detect the signs of a phishing site. Nearly all companies plan to expand their use of AI agents in the next year, but most are not prepared for the new risks posed by AI agents in a business environment. Researchers have demonstrated that AI browsers can be tricked into phishing scams in under four minutes by exploiting agentic blabbering. The attack leverages the AI browser's tendency to reason its actions and use it against the model to lower security guardrails. By intercepting traffic between the browser and AI services and feeding it to a Generative Adversarial Network (GAN), researchers made Perplexity's Comet AI browser fall victim to a phishing scam. The technique builds on prior methods like VibeScamming and Scamlexity, which exploit hidden prompt injections to carry out malicious actions. The attack involves building a 'scamming machine' that iteratively optimizes and regenerates a phishing page until the AI browser stops complaining and proceeds with the threat actor's actions. Once a fraudster iterates on a web page until it works against a specific AI browser, it works on all users relying on the same agent. The disclosure comes as Trail of Bits demonstrated four prompt injection techniques against the Comet browser to extract users' private information. Zenity Labs detailed two zero-click attacks affecting Perplexity's Comet, using indirect prompt injection to exfiltrate local files or hijack a user's 1Password account. Prompt injection attacks remain a fundamental security challenge for large language models (LLMs) and their integration into organizational workflows. OpenAI noted that prompt injection vulnerabilities in agentic browsers are unlikely to be fully resolved, but risks can be reduced through automated attack discovery and adversarial training.
Show sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
Information Snippets
-
PromptFix exploits the AI's design goal to assist users quickly and without hesitation.
First reported: 20.08.2025 16:013 sources, 4 articlesShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
The exploit can trick AI browsers into interacting with phishing sites or fraudulent storefronts.
First reported: 20.08.2025 16:013 sources, 4 articlesShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
AI browsers like Perplexity's Comet are susceptible to PromptFix attacks.
First reported: 20.08.2025 16:013 sources, 4 articlesShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
The technique can be triggered by simple instructions, leading to automated actions on fake websites.
First reported: 20.08.2025 16:013 sources, 4 articlesShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
AI browsers can be manipulated into parsing spam emails and entering credentials on phony login pages.
First reported: 20.08.2025 16:013 sources, 4 articlesShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
The exploit can result in drive-by download attacks, where malicious payloads are downloaded without user involvement.
First reported: 20.08.2025 16:013 sources, 4 articlesShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
AI systems need robust guardrails for phishing detection, URL reputation checks, domain spoofing, and malicious file detection.
First reported: 20.08.2025 16:013 sources, 3 articlesShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
Adversaries are using GenAI platforms to craft realistic phishing content and automate large-scale deployment.
First reported: 20.08.2025 16:011 source, 1 articleShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
-
AI coding assistants can inadvertently expose proprietary code or sensitive intellectual property.
First reported: 20.08.2025 16:011 source, 1 articleShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
-
Proofpoint observed campaigns using Lovable services to distribute MFA phishing kits and malware.
First reported: 20.08.2025 16:011 source, 1 articleShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
-
Lovable has taken down malicious sites and implemented AI-driven security protections.
First reported: 20.08.2025 16:011 source, 1 articleShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
-
Deepfaked content on YouTube and social media platforms has been used to redirect users to fraudulent investment sites.
First reported: 20.08.2025 16:011 source, 1 articleShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
-
AI trading scams rely on fake blogs and review sites to create a false sense of legitimacy.
First reported: 20.08.2025 16:011 source, 1 articleShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
-
These scams have targeted users in multiple countries, including India, the U.K., and Germany.
First reported: 20.08.2025 16:011 source, 1 articleShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
-
GenAI enhances threat actors' operations rather than replacing existing attack methodologies.
First reported: 20.08.2025 16:011 source, 1 articleShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
-
The exploit can be triggered by simple instructions, such as 'Buy me an Apple Watch,' leading the AI browser to add items to carts and auto-fill sensitive information on fake sites.
First reported: 20.08.2025 19:311 source, 1 articleShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
-
AI browsers can be tricked into parsing spam emails and entering credentials on phony login pages, creating a seamless trust chain for attackers.
First reported: 20.08.2025 19:311 source, 1 articleShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
-
The exploit can result in drive-by download attacks, where malicious payloads are downloaded without user involvement.
First reported: 20.08.2025 19:311 source, 1 articleShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
-
Agentic AI browsers are vulnerable to phishing, prompt injection, and purchasing from fake shops.
First reported: 20.08.2025 19:313 sources, 3 articlesShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
Perplexity's Comet is currently the primary example of agentic AI browsers.
First reported: 20.08.2025 19:313 sources, 3 articlesShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
Microsoft Edge is embedding agentic browsing features through a Copilot integration.
First reported: 20.08.2025 19:311 source, 1 articleShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
-
OpenAI is developing its own agentic AI browser platform codenamed 'Aura'.
First reported: 20.08.2025 19:311 source, 1 articleShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
-
Agentic AI browsers are quickly penetrating the mainstream consumer market.
First reported: 20.08.2025 19:311 source, 1 articleShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
-
Guardio tested Comet's vulnerability to phishing, prompt injection, and purchasing from fake shops.
First reported: 20.08.2025 19:312 sources, 2 articlesShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
Comet was directed to a fake shop and completed a purchase without human confirmation.
First reported: 20.08.2025 19:313 sources, 3 articlesShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
Comet treated a fake Wells Fargo email as genuine and entered credentials on a phishing page.
First reported: 20.08.2025 19:313 sources, 3 articlesShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
Comet interpreted hidden instructions in a fake CAPTCHA page, triggering a malicious file download.
First reported: 20.08.2025 19:313 sources, 3 articlesShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
New threats are expected to replace standard human-centric attack models in the AI-vs-AI era.
First reported: 20.08.2025 19:311 source, 1 articleShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
-
Scammers can exploit AI models to scale attacks endlessly once a vulnerability is found.
First reported: 20.08.2025 19:311 source, 1 articleShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
-
Users should avoid assigning sensitive tasks to AI browsers and manually input sensitive data when needed.
First reported: 20.08.2025 19:312 sources, 2 articlesShow sources
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
AI firms are integrating AI functionality into browsers, allowing software agents to automate workflows.
First reported: 26.08.2025 23:532 sources, 2 articlesShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
AI browser agents from major AI firms failed to reliably detect the signs of a phishing site.
First reported: 26.08.2025 23:532 sources, 2 articlesShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
Comet from Perplexity.ai added items to a shopping cart, filled out credit-card details, and clicked the buy button on a fake Walmart site.
First reported: 26.08.2025 23:532 sources, 2 articlesShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
AI browsers with access to email can read and act on prompts embedded in messages.
First reported: 26.08.2025 23:531 source, 1 articleShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
AI companies need stronger sanitation and guardrails against these attacks.
First reported: 26.08.2025 23:531 source, 1 articleShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
Nearly all companies (96%) plan to expand their use of AI agents in the next year.
First reported: 26.08.2025 23:531 source, 1 articleShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
AI agents need to be experts at blocking potential security threats to workers and company data.
First reported: 26.08.2025 23:531 source, 1 articleShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
AI agents are gullible and servile, making them vulnerable to attacks in an adversarial setting.
First reported: 26.08.2025 23:531 source, 1 articleShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
Perplexity's Comet works within the user's browser context, accessing cookies and authenticated sessions.
First reported: 26.08.2025 23:532 sources, 2 articlesShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
AI agents could undo much of the training companies have done to improve security awareness of their employees.
First reported: 26.08.2025 23:531 source, 1 articleShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
AI agents are becoming a new class of insider threats.
First reported: 26.08.2025 23:531 source, 1 articleShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
Prompt injection is the No. 1 threat on OWASP's top-10 list of threats for LLMs and generative AI.
First reported: 26.08.2025 23:532 sources, 2 articlesShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
Companies should move from "trust, but verify" to "doubt, and double verify" for AI agents.
First reported: 26.08.2025 23:531 source, 1 articleShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
AI firms are competing for market share and may not prioritize security improvements.
First reported: 26.08.2025 23:531 source, 1 articleShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
Companies should hold off on putting AI agents into critical business processes until better security is offered.
First reported: 26.08.2025 23:531 source, 1 articleShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
Securing AI requires gaining visibility into all AI use by company workers and enforcing significant guardrails.
First reported: 26.08.2025 23:531 source, 1 articleShow sources
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
Guardio researchers demonstrated that AI browsers can be tricked into phishing scams in under four minutes by exploiting agentic blabbering.
First reported: 11.03.2026 18:381 source, 1 articleShow sources
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
The attack leverages the AI browser's tendency to reason its actions and use it against the model to lower security guardrails.
First reported: 11.03.2026 18:381 source, 1 articleShow sources
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
By intercepting traffic between the browser and AI services and feeding it to a Generative Adversarial Network (GAN), researchers made Perplexity's Comet AI browser fall victim to a phishing scam.
First reported: 11.03.2026 18:381 source, 1 articleShow sources
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
The technique builds on prior methods like VibeScamming and Scamlexity, which exploit hidden prompt injections to carry out malicious actions.
First reported: 11.03.2026 18:381 source, 1 articleShow sources
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
The attack involves building a 'scamming machine' that iteratively optimizes and regenerates a phishing page until the AI browser stops complaining and proceeds with the threat actor's actions.
First reported: 11.03.2026 18:381 source, 1 articleShow sources
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
Once a fraudster iterates on a web page until it works against a specific AI browser, it works on all users relying on the same agent.
First reported: 11.03.2026 18:381 source, 1 articleShow sources
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
The disclosure comes as Trail of Bits demonstrated four prompt injection techniques against the Comet browser to extract users' private information.
First reported: 11.03.2026 18:381 source, 1 articleShow sources
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
Zenity Labs detailed two zero-click attacks affecting Perplexity's Comet, using indirect prompt injection to exfiltrate local files or hijack a user's 1Password account.
First reported: 11.03.2026 18:381 source, 1 articleShow sources
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
Prompt injection attacks remain a fundamental security challenge for large language models (LLMs) and their integration into organizational workflows.
First reported: 11.03.2026 18:381 source, 1 articleShow sources
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
-
OpenAI noted that prompt injection vulnerabilities in agentic browsers are unlikely to be fully resolved, but risks can be reduced through automated attack discovery and adversarial training.
First reported: 11.03.2026 18:381 source, 1 articleShow sources
- Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes — thehackernews.com — 11.03.2026 18:38
Similar Happenings
Active Agent-Based Crypto Scam Exploits Trust in AI Agent Networks
An ongoing crypto scam, Bob-ptp, is actively exploiting trust in AI agent networks. The attack uses malicious Claude Skills on Clawhub, a marketplace for AI plugins, to compromise Solana wallet private keys and redirect payments through attacker-controlled infrastructure. The threat actor, BobVonNeumann, promotes the malicious skill on Moltbook, a social media platform for AI agents, leveraging the implicit trust between agents to spread the attack laterally without further human interaction. The campaign highlights a new class of supply chain attacks that combine traditional supply chain poisoning with social engineering targeting algorithms rather than humans.
Google Enhances Chrome Agentic AI Security Against Indirect Prompt Injection Attacks
Google is introducing new security measures to protect Chrome's agentic AI capabilities from indirect prompt injection attacks. These protections include a new AI model called the User Alignment Critic, expanded site isolation policies, additional user confirmation steps for sensitive actions, and a prompt injection detection classifier. The User Alignment Critic independently evaluates the agent's actions, ensuring they align with the user's goals. Google is also enforcing Agent Origin Sets to limit the agent's access to relevant data origins and has developed automated red-teaming systems to test defenses. The company has announced bounty payments for security researchers to further enhance the system's robustness.
Emerging Security Risks of Agentic AI Browsers
A new generation of AI browsers, known as agentic browsers, is transitioning from passive tools to autonomous agents capable of executing tasks on behalf of users. This shift introduces significant security risks, including increased attack surfaces and vulnerabilities to prompt injection attacks. Security teams must adapt their strategies to mitigate these risks as the adoption of AI browsers grows.
Indirect Prompt Injection Vulnerabilities in ChatGPT Models
Researchers from Tenable discovered seven vulnerabilities in OpenAI's ChatGPT models (GPT-4o and GPT-5) that enable attackers to extract personal information from users' memories and chat histories. These vulnerabilities allow for indirect prompt injection attacks, which manipulate the AI's behavior to execute unintended or malicious actions. OpenAI has addressed some of these issues, but several vulnerabilities persist. The vulnerabilities include indirect prompt injection via trusted sites, zero-click indirect prompt injection in search contexts, and prompt injection via crafted links. Other techniques involve bypassing safety mechanisms, injecting malicious content into conversations, hiding malicious prompts, and poisoning user memories. The vulnerabilities affect the 'bio' feature, which allows ChatGPT to remember user details and preferences across chat sessions, and the 'open_url' command-line function, which leverages SearchGPT to access and render website content. Attackers can exploit the 'url_safe' endpoint by using Bing click-tracking URLs to lure users to phishing sites or exfiltrate user data. These findings highlight the risks associated with exposing AI chatbots to external tools and systems, which expand the attack surface for threat actors. The vulnerabilities stem from how ChatGPT ingests and processes instructions from external sources, allowing attackers to exploit these flaws through various methods. The most concerning issue is a zero-click vulnerability, where simply asking ChatGPT a benign question can trigger an attack if the search results include a poisoned website.
AI-targeted cloaking attack exploits AI crawlers
AI security company SPLX has identified a new security issue in agentic web browsers like OpenAI ChatGPT Atlas and Perplexity. This issue exposes underlying AI models to context poisoning attacks through AI-targeted cloaking. Attackers can serve different content to AI crawlers compared to human users, manipulating AI-generated summaries and overviews. This technique can introduce misinformation, bias, and influence the outcomes of AI-driven systems. The hCaptcha Threat Analysis Group (hTAG) has also analyzed browser agents against common abuse scenarios, revealing that these agents often execute risky tasks without safeguards. This makes them vulnerable to misuse by attackers. The attack can undermine trust in AI tools and manipulate reality by serving deceptive content to AI crawlers.