CyberHappenings logo

Track cybersecurity events as they unfold. Sourced timelines. Filter, sort, and browse. Fast, privacy‑respecting. No invasive ads, no tracking.

Google Gemini AI Vulnerabilities Allowing Prompt Injection and Data Exfiltration

First reported
Last updated
4 unique sources, 7 articles

Summary

Hide ▲

Researchers disclosed multiple vulnerabilities in Google's Gemini AI assistant that could have exposed users to privacy risks and data theft. The flaws, collectively named the Gemini Trifecta, affected Gemini Cloud Assist, the Search Personalization Model, and the Browsing Tool. These vulnerabilities allowed for prompt injection attacks, search-injection attacks, and data exfiltration. Google has since patched the issues and implemented additional security measures. Additionally, a zero-click vulnerability in Gemini Enterprise, dubbed 'GeminiJack', was discovered in June 2025, allowing attackers to exfiltrate corporate data via indirect prompt injection. Google addressed this flaw by separating Vertex AI Search from Gemini Enterprise and updating their interaction with retrieval and indexing systems. A new prompt injection flaw in Google Gemini allowed attackers to bypass authorization guardrails and use Google Calendar as a data extraction mechanism. The flaw enabled unauthorized access to private meeting data and the creation of deceptive calendar events without any direct user interaction. The attack involved a malicious payload hidden within a standard calendar invite, which was activated when a user asked Gemini about their schedule. The flaw allowed Gemini to create a new calendar event and write a full summary of the target user's private meetings in the event's description. The issue was addressed following responsible disclosure, highlighting the need for evaluating large language models across key safety and security dimensions. Additionally, a high-severity flaw in Google's implementation of Gemini AI in the Chrome browser, tracked as CVE-2026-0628, could allow attackers to escalate privileges, violate user privacy, and access sensitive system resources. The flaw was discovered by researchers from Palo Alto Networks' Unit 42 and was patched by Google in early January. The vulnerabilities highlight the potential risks of AI tools being used as attack vectors rather than just targets.

Timeline

  1. 02.03.2026 12:27 2 articles · 1d ago

    High-severity flaw in Gemini AI Chrome implementation

    The vulnerability, tracked as CVE-2026-0628, was described as a case of insufficient policy enforcement in the WebView tag. It was patched by Google in early January 2026 in version 143.0.7499.192/.193 for Windows/Mac and 143.0.7499.192 for Linux. The issue could have permitted malicious extensions with basic permissions to seize control of the new Gemini Live panel in Chrome. The attack could have been abused by an attacker to achieve privilege escalation, enabling them to access the victim's camera and microphone without their permission, take screenshots of any website, and access local files. The problem is the need for granting these AI agents privileged access to the browsing environment to perform multi-step operations, thereby becoming a double-edged sword when an attacker embeds hidden prompts in a malicious web page. The prompt could instruct the AI assistant to perform actions that would otherwise be blocked by the browser, leading to data exfiltration or code execution. The web page could manipulate the agent to store the instructions in memory, causing it to persist across sessions. The integration of an AI side panel in agentic browsers brings back classic browser security risks, including vulnerabilities related to cross-site scripting (XSS), privilege escalation, and side-channel attacks. An extension with access to a basic permission set through the declarativeNetRequest API allowed permissions that could have enabled an attacker to inject JavaScript code into the new Gemini panel. When the Gemini app is loaded within this new panel component, Chrome hooks it with access to powerful capabilities. The declarativeNetRequest API allows extensions to intercept and change properties of HTTPS web requests and responses, used by ad-blocking extensions to stop issuing requests to load ads on web pages. All it takes for an attacker is to trick an unsuspecting user into installing a specially crafted extension, which could then inject arbitrary JavaScript code into the Gemini side panel to interact with the file system, take screenshots, access the camera, turn on the microphone.

    Show sources
  2. 19.01.2026 19:21 2 articles · 1mo ago

    New prompt injection flaw in Google Gemini

    Researchers at Miggo Security discovered a new prompt injection flaw in Google Gemini that could leak private Calendar data. The attack involved sending a malicious Calendar invite with a prompt-injection payload in the description. When the victim asked Gemini about their schedule, the assistant would execute the embedded instructions, creating a new event and writing a summary of private meetings in its description. The attack bypassed Google's defenses because the instructions appeared safe. Google has added new mitigations to block such attacks following responsible disclosure by Miggo Security.

    Show sources
  3. 10.12.2025 14:05 1 articles · 2mo ago

    GeminiJack zero-click vulnerability disclosed and patched

    A zero-click vulnerability in Gemini Enterprise, dubbed 'GeminiJack', was discovered in June 2025 by Noma Security. This flaw allowed attackers to exfiltrate corporate data via indirect prompt injection by exploiting the Retrieval-Augmented Generation (RAG) architecture. The attack chain involved content poisoning, triggering AI execution, and data exfiltration via a malicious image tag. Google addressed the flaw by separating Vertex AI Search from Gemini Enterprise and updating their interaction with retrieval and indexing systems.

    Show sources
  4. 30.09.2025 16:18 4 articles · 5mo ago

    Google Gemini AI vulnerabilities disclosed and patched

    Researchers disclosed three vulnerabilities in Google's Gemini AI assistant that could have exposed users to privacy risks and data theft. The flaws, collectively named the Gemini Trifecta, affected Gemini Cloud Assist, the Search Personalization Model, and the Browsing Tool. These vulnerabilities allowed for prompt injection attacks, search-injection attacks, and data exfiltration. Google has since patched the issues and implemented additional security measures. The Gemini Search Personalization model's flaw allowed attackers to manipulate AI behavior and leak user data by injecting malicious search queries via JavaScript from a malicious website. The Gemini Cloud Assist flaw allowed attackers to execute instructions via prompt injections hidden in log content, potentially compromising cloud resources and enabling phishing attacks. The Gemini Browsing Tool flaw allowed attackers to exfiltrate a user's saved information and location data by exploiting the tool's 'Show thinking' feature. Google has made specific changes to mitigate each flaw, including rolling back vulnerable models, hardening search personalization features, and preventing data exfiltration from browsing in indirect prompt injections. Additionally, a zero-click vulnerability in Gemini Enterprise, dubbed 'GeminiJack', was discovered in June 2025, allowing attackers to exfiltrate corporate data via indirect prompt injection. Google addressed this flaw by separating Vertex AI Search from Gemini Enterprise and updating their interaction with retrieval and indexing systems. A new prompt injection flaw in Google Gemini allowed attackers to bypass authorization guardrails and use Google Calendar as a data extraction mechanism. The flaw enabled unauthorized access to private meeting data and the creation of deceptive calendar events without any direct user interaction. The attack involved a malicious payload hidden within a standard calendar invite, which was activated when a user asked Gemini about their schedule. The flaw allowed Gemini to create a new calendar event and write a full summary of the target user's private meetings in the event's description. The issue was addressed following responsible disclosure, highlighting the need for evaluating large language models across key safety and security dimensions.

    Show sources

Information Snippets

Similar Happenings

Google API Keys Expose Gemini AI Data

Google API keys, previously considered harmless, now expose Gemini AI data due to a privilege escalation. Researchers found nearly 3,000 exposed keys across various sectors, including Google itself. These keys can authenticate to Gemini AI and access private data, potentially leading to significant financial losses for victims. New research from Truffle Security and Quokka has revealed the extent of this issue, with thousands of API keys embedded in client-side code and Android apps. Google has implemented measures to block leaked API keys and notify affected parties.

PromptSpy Android Malware Uses Gemini AI for Persistence

PromptSpy, an advanced Android malware, uses Google's Gemini AI to maintain persistence by pinning itself in the recent apps list. The malware captures lockscreen data, blocks uninstallation, gathers device information, takes screenshots, and records screen activity. It communicates with a hard-coded C2 server and is distributed via a dedicated website targeting users in Argentina. PromptSpy is the first known Android malware to use generative AI in its execution flow, sending screen data to Gemini to receive instructions for maintaining persistence. The malware is an advanced version of VNCSpy and is likely financially motivated. Researchers have discovered that PromptSpy was first found in February 2026, with initial samples uploaded to VirusTotal from Hong Kong and Argentina. ESET has not observed the malware in its telemetry, suggesting it may be a proof-of-concept. ESET attributed PromptSpy to Chinese developers with medium confidence, but has not linked it to any known threat actor. PromptSpy deploys a VNC module on compromised systems, enabling operators to view the victim’s screen and take full control of the Android device. The malware saves both its previous prompts and Gemini’s responses, allowing Gemini to understand context and coordinate multistep interactions.

AI Assistants Abused as Command-and-Control Proxies

Researchers have demonstrated that AI assistants like Microsoft Copilot and xAI Grok can be exploited as command-and-control (C2) proxies. This technique leverages the AI's web-browsing capabilities to create a bidirectional communication channel for malware operations, enabling attackers to blend into legitimate enterprise communications and evade detection. The method, codenamed AI as a C2 proxy, allows attackers to generate reconnaissance workflows, script actions, and dynamically decide the next steps during an intrusion. The attack requires prior compromise of a machine and installation of malware, which then uses the AI assistant as a C2 channel through specially crafted prompts. This approach bypasses traditional defenses like API key revocation or account suspension. According to new findings from Check Point Research (CPR), platforms including Grok and Microsoft Copilot can be manipulated through their public web interfaces to fetch attacker-controlled URLs and return responses. The AI service acts as a proxy, relaying commands to infected machines and sending stolen data back out, without requiring an API key or even a registered account. The method relies on AI assistants that support URL fetching and content summarization, allowing attackers to tunnel encoded data through query parameters and receive embedded commands in the AI's reply. Malware can interact with the AI interface invisibly using a WebView2 browser component inside a C++ program. The research also outlined a broader trend: malware that integrates AI into its runtime decision-making, sending host information to a model and receiving guidance on actions to prioritize.

Fake AI Assistant Extensions in Google Chrome Web Store Exfiltrate Credentials and Monitor Emails

Over 260,000 Google Chrome users downloaded fake AI assistant extensions that steal login credentials, monitor emails, and enable remote access. Researchers at LayerX identified over 30 malicious extensions as part of a coordinated campaign called AiFrame. The extensions mimicked popular AI assistants like Claude AI, ChatGPT, Grok, and Google Gemini. The campaign used extension spraying to evade takedowns, directing users to remote infrastructure to avoid detection. The extensions exfiltrate data from Chrome and Gmail to attacker-controlled servers. LayerX warns that these extensions act as general-purpose access brokers, capable of harvesting data and monitoring user behavior. Many extensions have been removed from the Chrome Web Store, but users who downloaded them remain at risk. Additionally, cybersecurity researchers have discovered a malicious Google Chrome extension named CL Suite by @CLMasters (ID: jkphinfhmfkckkcnifhjiplhfoiefffl) that steals TOTP codes for Facebook and Meta Business accounts, Business Manager contact lists, and analytics data. The extension exfiltrates data to infrastructure controlled by the threat actor, including a backend at getauth[.]pro and a Telegram channel. The extension has 33 users as of writing and was first uploaded to the Chrome Web Store on March 1, 2025. About 500,000 VKontakte users have had their accounts silently hijacked through Chrome extensions masquerading as VK customization tools. The large-scale campaign has been codenamed VK Styles. The malware embedded in the extensions is designed to engage in active account manipulation by automatically subscribing users to the attacker's VK groups, resetting account settings every 30 days to override user preferences, manipulating CSRF tokens to bypass VK's security protections, and maintaining persistent control. A report published by Q Continuum found a huge collection of 287 Chrome extensions that exfiltrate browsing history to data brokers. These extensions have 37.4 million installations, representing roughly 1% of the global Chrome userbase.

Google Enhances Chrome Agentic AI Security Against Indirect Prompt Injection Attacks

Google is introducing new security measures to protect Chrome's agentic AI capabilities from indirect prompt injection attacks. These protections include a new AI model called the User Alignment Critic, expanded site isolation policies, additional user confirmation steps for sensitive actions, and a prompt injection detection classifier. The User Alignment Critic independently evaluates the agent's actions, ensuring they align with the user's goals. Google is also enforcing Agent Origin Sets to limit the agent's access to relevant data origins and has developed automated red-teaming systems to test defenses. The company has announced bounty payments for security researchers to further enhance the system's robustness.