# Google Patches Critical Prompt Injection Vulnerability in Antigravity AI IDE


Security researchers identified, and Google has patched, a critical vulnerability in Antigravity, the company's agentic integrated development environment, that could allow attackers to achieve arbitrary code execution through prompt injection. The flaw stemmed from insufficient input sanitization that, combined with the IDE's file-creation capabilities, could be abused to bypass the platform's security restrictions.


## The Threat


The vulnerability represents a significant security concern for developers and organizations using Antigravity for AI-assisted code development. By crafting specially formatted prompts, an attacker could potentially:


- Execute arbitrary code on a developer's machine or within the IDE environment
- Access sensitive files through the IDE's file-searching tools
- Compromise development workflows and inject malicious code into projects
- Gain persistence through modified source files or configuration

The flaw has since been patched by Google, but the discovery underscores the evolving security challenges in agentic AI development tools.


## Background and Context


Antigravity is Google's experimental agentic IDE designed to assist developers by leveraging AI models to understand code, generate suggestions, and automate routine development tasks. Unlike traditional code editors, Antigravity operates with elevated permissions to perform autonomous tasks such as:


- Creating and modifying files
- Searching codebases
- Executing commands within the development environment
- Analyzing project structure

These capabilities make Antigravity powerful for productivity but also create a larger attack surface if security boundaries are not carefully maintained.


The vulnerability is particularly significant because agentic tools are increasingly trusted with direct access to developer systems. As organizations adopt AI-powered development platforms, ensuring these tools enforce proper input validation and sandboxing becomes a critical piece of infrastructure security.


## Technical Details: How the Attack Works


The vulnerability chains together two distinct security weaknesses:


### 1. File-Creation Capabilities

Antigravity permits users to request file creation through natural language prompts. This is a core feature: developers can ask the IDE to "create a new configuration file" or "add a utility function," and the tool will handle file operations.
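
To make the rest of the chain concrete, here is a minimal sketch of what such a file-creation tool handler could look like. The function name, signature, and behavior are assumptions for illustration only, not Antigravity's actual implementation.

```python
# Hypothetical sketch of a file-creation tool handler; not Antigravity's code.
from pathlib import Path


def create_file(project_root: str, relative_path: str, content: str) -> Path:
    """Write model- or user-requested content to a path inside the project.

    This naive version trusts the requested path and content completely,
    which is what makes file creation a useful primitive for an attacker
    once another tool can be tricked into executing what was written.
    """
    target = Path(project_root) / relative_path
    target.parent.mkdir(parents=True, exist_ok=True)  # create missing folders
    target.write_text(content)
    return target


# A prompt like "create a new configuration file" might translate into:
# create_file("/workspace/my-app", "config/settings.py", "DEBUG = True\n")
```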


### 2. Insufficient Input Sanitization in `find_by_name`

The `find_by_name` tool, which searches for files by name within a project, does not properly sanitize user input. This tool processes search queries without adequate validation of special characters or path traversal sequences.
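
The vulnerable pattern can be illustrated with a short sketch. This is not the actual Antigravity code, only a hypothetical search tool that builds a shell command from unvalidated input; the function name and signature are invented.

```python
# Hypothetical illustration of the vulnerability class; not Antigravity's code.
import subprocess


def find_by_name_unsafe(project_root: str, query: str) -> str:
    # Dangerous: the query is interpolated into a shell string, so shell
    # metacharacters (";", "$( )", backticks) and "../" sequences are
    # interpreted by the shell instead of being treated as a literal pattern.
    command = f'find {project_root} -name "{query}"'
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout


# A query such as:   *" ; python payload.py ; echo "
# closes the -name argument, runs the attacker's script, then resumes quietly.
```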


### The Exploitation Chain


The attack works as follows:


1. The attacker crafts a malicious prompt containing both a file-creation request and a specially formatted search query.

2. The file is created with the attacker's specified content (e.g., a malicious script, modified configuration, or code with backdoor functionality).

3. The prompt injects commands into the `find_by_name` tool via its unsanitized input, allowing it to execute beyond its intended scope.

4. Strict mode is bypassed because its security checks do not account for this chained attack vector.

5. Code execution occurs either through the created files being imported or executed, through shell command injection in the file-search tool, or through manipulation of the development environment.


Researchers demonstrated the flaw by injecting path traversal sequences and shell metacharacters into file search queries, effectively turning the search tool into a command execution vector.
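
Putting the two weaknesses together, the chain might conceptually look like the following sketch, reusing the hypothetical `create_file` and `find_by_name_unsafe` helpers from the earlier sketches. The file names and payload are invented for illustration; the real exploit details have not been published.

```python
# Conceptual reconstruction of the chain using the hypothetical helpers above;
# file names and payload contents are invented, not taken from the real exploit.

# Step 1: the injected prompt asks the agent to "create a utility script",
# planting attacker-controlled content inside the project.
create_file(
    "/workspace/my-app",
    "scripts/payload.py",
    'print("arbitrary code running with the developer\'s privileges")\n',
)

# Step 2: the same prompt asks the agent to "find" a file, but the query
# smuggles shell metacharacters into the unsanitized search tool, which
# ends up executing the file planted in step 1.
find_by_name_unsafe(
    "/workspace/my-app",
    '*" ; python /workspace/my-app/scripts/payload.py ; echo "',
)
```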


## Security Model Implications


This vulnerability highlights a critical principle in secure tool design: individual components with weak input validation can be chained to create catastrophic failures, even when each component is intended to be sandboxed.


Google's Antigravity security model relied on:

- Individual tool restrictions (each tool does one thing)
- Strict mode enforcement
- Assumed isolation between components

However, the attack bypassed these assumptions by:

- Combining multiple permitted operations
- Exploiting a validation gap in a secondary tool
- Using the IDE's own capabilities against its security model

This pattern has been observed in previous attacks on AI agent sandboxes and highlights why defense-in-depth is essential when granting agentic systems elevated privileges.
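
One concrete way to apply that defense-in-depth principle is to validate tool arguments at a single dispatch point rather than trusting each tool to check its own inputs. The sketch below is an assumed design for a generic agent runtime, not a description of Antigravity's internals; `dispatch_tool` and its policy are hypothetical.

```python
# Sketch of centralized argument validation for an agent's tool dispatcher.
# This is an assumed design, not Antigravity's implementation.
import re
from pathlib import Path
from typing import Callable

# Reject shell metacharacters and parent-directory traversal outright.
FORBIDDEN = re.compile(r"[;&|`$<>]|\.\.")


def dispatch_tool(tool: Callable[..., object], project_root: str, **kwargs):
    root = Path(project_root).resolve()
    for name, value in kwargs.items():
        if isinstance(value, str) and FORBIDDEN.search(value):
            raise ValueError(f"rejected suspicious argument {name!r}: {value!r}")
        if name.endswith("path"):
            # Resolve the path and confirm it stays inside the project root.
            resolved = (root / str(value)).resolve()
            if not resolved.is_relative_to(root):  # Python 3.9+
                raise ValueError(f"path escapes project root: {value!r}")
    return tool(project_root, **kwargs)


# A real system would need per-argument policies (file *content* may legitimately
# contain these characters; search *queries* and *paths* should not), but the key
# point is that no tool call reaches execution without passing a shared gate.
```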


## Affected Users and Scope


The vulnerability affected:

- Developers using Antigravity for code generation and project management
- Organizations with Antigravity deployed in development pipelines
- Anyone relying on Antigravity's security guarantees for handling untrusted code or prompts

The exact number of affected users has not been publicly disclosed, though Google has stated that the patch was rolled out automatically to all Antigravity instances.


## Google's Response


Google addressed the vulnerability through:


1. Immediate patching of the `find_by_name` input sanitization

2. Enhancement of Strict mode validation to prevent chained attacks across multiple tools

3. A security audit of other agentic IDE components to identify similar weaknesses

4. Publication of a security advisory (though details remain limited to protect users during the patch rollout period)
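
Google has not published the technical details of the fix. As a purely illustrative sketch, a hardened version of a name-search tool might avoid the shell entirely, allow-list the query characters, and confine results to the project root:

```python
# Plausible hardening of a file-name search tool (illustrative only; the actual
# patch has not been published): no shell, strict query allow-list, confined root.
import fnmatch
import os
import re
from pathlib import Path

# Allow only ordinary filename characters plus glob syntax (* ? [ ]).
SAFE_QUERY = re.compile(r"^[\w.\- *?\[\]]+$")


def find_by_name_safe(project_root: str, query: str) -> list[str]:
    if not SAFE_QUERY.fullmatch(query) or ".." in query:
        raise ValueError(f"rejected search query: {query!r}")
    root = Path(project_root).resolve()
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for filename in fnmatch.filter(filenames, query):
            matches.append(str(Path(dirpath) / filename))
    return matches


# find_by_name_safe("/workspace/my-app", "*.py")        -> list of Python files
# find_by_name_safe("/workspace/my-app", '*" ; rm -rf') -> ValueError
```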


The company has not disclosed whether any exploitation of this vulnerability occurred in the wild before patching.


## Broader Implications for AI-Powered Development Tools


This discovery has implications beyond Antigravity:


| Concern | Impact |
|---------|--------|
| Agentic Tool Security | All AI-powered IDEs with file access must implement stronger input validation across all tools, not just primary ones |
| Supply Chain Risk | Compromised development environments could inject malicious code into software supply chains affecting millions of users |
| Privilege Escalation | Agentic tools trusted with high permissions become attractive targets for attackers seeking to compromise development workflows |
| Vendor Responsibility | Cloud-based development tools must maintain rigorous security standards and rapid patch deployment |


## Recommendations


### For Developers and Organizations Using Antigravity

- Update immediately to the patched version of Antigravity
- Review recent project history to identify whether any suspicious files were created or modified (a starting point is sketched after this list)
- Audit generated code for unexpected patterns or malicious additions
- Monitor for related vulnerabilities in other agentic tools in your development stack
- Implement code review processes even for AI-generated code, treating it as untrusted input
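
For the history review suggested above, a small script like the following can serve as a starting point, assuming the project is a Git repository; adjust the time window to cover the period before the patch was applied.

```python
# Minimal helper for reviewing recent project history (assumes a Git repo).
import subprocess


def recently_changed_files(repo_path: str, since: str = "2 weeks ago") -> list[str]:
    """Return every file path touched by a commit in the given time window."""
    result = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}",
         "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    )
    # The empty --pretty format leaves only blank lines and file paths.
    return sorted({line for line in result.stdout.splitlines() if line.strip()})


if __name__ == "__main__":
    for path in recently_changed_files("."):
        print(path)
```

Files an agent created but never committed will not appear in this output, so `git status` and a filesystem-level review are still worthwhile alongside it.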

### For Developers Building Agentic Tools

- Validate all inputs exhaustively, even in "secondary" tools that seem less critical
- Implement defense-in-depth security: don't rely on a single layer of validation
- Test tool chaining during security reviews; attackers will attempt to combine capabilities
- Sandbox agentic operations with minimal required permissions, not maximal granted permissions (see the sketch after this list)
- Publish security advisories promptly and establish clear vulnerability disclosure processes
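
One way to read the "minimal required permissions" advice is to hand each agent session only the tools its task actually needs, instead of the full toolbox. The registry below is a hypothetical design sketch, not any vendor's API.

```python
# Hypothetical least-privilege tool registry for an agent runtime.
from typing import Callable


class ToolRegistry:
    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., object]] = {}

    def register(self, name: str, fn: Callable[..., object]) -> None:
        self._tools[name] = fn

    def session(self, allowed: set[str]) -> dict[str, Callable[..., object]]:
        """Expose only the explicitly allowed subset of tools for one task."""
        missing = allowed - self._tools.keys()
        if missing:
            raise KeyError(f"unknown tools requested: {sorted(missing)}")
        return {name: fn for name, fn in self._tools.items() if name in allowed}


# Usage: a read-only "explain this codebase" task would get something like
# registry.session({"find_by_name", "read_file"}), so an injected prompt has no
# file-creation or command-execution capability to abuse in that session.
```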

### For Security Teams

- Inventory agentic tools in use across development teams
- Establish baseline security requirements for AI-powered development tools before adoption
- Create incident response plans for potential agentic tool compromise
- Monitor for anomalous code creation and file modifications in development environments (a simple monitoring sketch follows this list)
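
As a deliberately simple, standard-library-only illustration of that monitoring idea, the sketch below periodically snapshots a directory tree and reports files that appear or change between snapshots; in practice this signal would feed existing EDR or SIEM tooling rather than run as a standalone script.

```python
# Stdlib-only sketch: report new or modified files under a project directory.
import os
import time
from pathlib import Path


def snapshot(root: str) -> dict[str, float]:
    """Map every file path under root to its last-modification time."""
    state: dict[str, float] = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = Path(dirpath) / name
            try:
                state[str(path)] = path.stat().st_mtime
            except OSError:
                continue  # file vanished between walk() and stat()
    return state


def watch(root: str, interval: float = 5.0) -> None:
    previous = snapshot(root)
    while True:
        time.sleep(interval)
        current = snapshot(root)
        for path, mtime in current.items():
            if path not in previous:
                print(f"[new file]  {path}")
            elif mtime != previous[path]:
                print(f"[modified]  {path}")
        previous = current


# watch("/workspace/my-app")  # point at a project checkout to run
```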

## Conclusion


The Antigravity vulnerability demonstrates that even carefully designed security models can fail when individual components are not sufficiently hardened. As agentic AI tools become more prevalent in software development, security must be embedded at every layer, not assumed to emerge from the combination of individually restricted tools.


Google's swift patching is commendable, but the broader lesson is clear: agentic systems require exceptional rigor in input validation and security testing. Development teams should treat all outputs from these tools with appropriate skepticism and maintain strong code review and testing practices.


---


For more cybersecurity coverage, follow HackWire for weekly updates on emerging threats, vulnerability disclosures, and security research.