Vulnerabilities

Palo Alto Researchers Expose Over-Privileged AI Agents in Google's Vertex AI Platform

April 10, 2026 23:10

AI Agents Are Becoming a New Attack Surface

Organizations across industries have been rapidly deploying AI agents to automate complex business and operational workflows. But new research from Palo Alto Networks reveals a troubling reality: those same agents can be quietly turned against the organizations running them if they are not configured with appropriately scoped permissions.

The research focuses on Google Cloud's Vertex AI platform, where excessive default permissions create a pathway for attackers to hijack deployed AI agents and weaponize them to steal sensitive data, infiltrate restricted internal infrastructure, and carry out other unauthorized actions — all while appearing to function normally from the outside.

What Is Vertex AI and Why Does It Matter?

Vertex AI is Google Cloud's platform for building, deploying, and managing AI-powered applications. It provides developers with an Agent Engine and an Agent Development Kit (ADK) that can be used to create autonomous agents capable of querying databases, interacting with APIs, managing files, and making automated decisions with minimal human involvement.
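
To ground the discussion, here is a minimal sketch of what such an agent looks like when built with the ADK. The agent name, model string, and tool function below are illustrative placeholders, not taken from the research:

```python
from google.adk.agents import Agent

def get_invoice_status(invoice_id: str) -> dict:
    """Illustrative tool: in a real agent this would query a billing
    database or internal API on the user's behalf."""
    return {"invoice_id": invoice_id, "status": "paid"}

# A minimal ADK agent. Once deployed to Agent Engine, it runs with
# whatever service account permissions the platform attaches to it --
# which is exactly where the over-privilege problem arises.
root_agent = Agent(
    name="billing_assistant",
    model="gemini-2.0-flash",  # illustrative model choice
    instruction="Answer billing questions using the available tools.",
    tools=[get_invoice_status],
)
```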

Enterprises rely heavily on these agents — and similar offerings from other cloud vendors — to automate workflows, analyze data, power customer service tools, and add AI capabilities to existing cloud services. In doing so, they typically grant these agents broad access permissions, and it is precisely that broad access that Palo Alto's research targets.

The Over-Privilege Problem: P4SA Service Accounts

At the heart of the issue is a default service account that Google attaches to every deployed Vertex AI agent. Known as a per-product, per-project service account (P4SA), this Google-managed account carries default permissions that go well beyond what most agents need to do their jobs.

Palo Alto Networks researcher Ofir Shaty documented how an attacker who can extract the agent's service account credentials can use them to:

  - Read and exfiltrate sensitive data from the surrounding Google Cloud project
  - Move laterally into restricted internal infrastructure
  - Carry out other unauthorized actions while the agent appears to operate normally
  - Potentially reach connected Google Workspace services such as Gmail, Google Calendar, and Google Drive

Shaty described the severity in stark terms:

"This level of access constitutes a significant security risk, transforming the AI agent from a helpful tool into an insider threat. The scopes set by default on the Agent Engine could potentially extend access beyond the GCP environment and into an organization's Google Workspace, including services such as Gmail, Google Calendar, and Google Drive."

A Working Proof of Concept

To validate the threat, the Palo Alto research team built a proof-of-concept Vertex AI agent. Once deployed, this agent sent a request to Google's internal metadata service to extract live credentials belonging to the P4SA service agent operating beneath it. The team then used the permissions tied to those credentials to break out of the AI agent's sandboxed environment and move laterally into both the customer's broader Google Cloud Project and Google's own internal infrastructure.
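
The credential-extraction step is worth unpacking. On GCP, workloads obtain OAuth2 access tokens for their attached service account from a well-known metadata endpoint. The sketch below shows the kind of request involved, assuming the agent's sandbox exposes the standard endpoint; the research does not publish its exact exploit code:

```python
import requests

# Standard GCP metadata server endpoint. Code running inside a workload
# can ask it for an access token belonging to the attached service
# account -- here, the over-privileged P4SA.
METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1"
    "/instance/service-accounts/default/token"
)

resp = requests.get(METADATA_URL, headers={"Metadata-Flavor": "Google"})
resp.raise_for_status()
token = resp.json()["access_token"]

# Armed with this bearer token, an attacker can call any Google Cloud
# API the P4SA is permitted to use -- the basis for the lateral movement
# the researchers demonstrated.
```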

The demonstration underscored how a seemingly routine AI deployment could serve as a launchpad for a much broader compromise of cloud environments — without triggering obvious alarms.

Google's Response and Recommended Mitigations

After Palo Alto Networks disclosed its findings to Google, the search and cloud giant updated its official documentation to more explicitly explain how Vertex AI uses agents and the permissions those agents hold by default.

Google has also recommended that organizations wishing to enforce least-privilege access in their agentic AI environments replace the default service agent on Vertex AI Agent Engine with their own dedicated custom service account. A Google spokeswoman, describing the company's recommended mitigation, quoted the Palo Alto report directly:

"A key best practice for securing Agent Engine and ensuring least-privilege execution is Bring Your Own Service Account (BYOSA). Using BYOSA, Agent Engine users can enforce the principle of least privilege, granting the agent only the specific permissions it requires to function and effectively mitigating the risk of excessive privileges."

The Broader Implications for AI Security

While the research centers on Vertex AI, the broader takeaway has implications far beyond any single platform. Ian Swanson, VP of AI Security at Palo Alto Networks, framed the risk as a fundamental shift in how enterprises should think about AI-related threats.

"Agents represent a shift in enterprise productivity from AI that talks to AI that acts," Swanson said. That distinction carries significant security weight. The risks are no longer confined to data leakage — they now include the possibility of agents taking unauthorized action on behalf of an attacker.

Swanson emphasized that security teams must treat agent deployments with the same rigor applied to any other piece of enterprise infrastructure: "When deploying agents, organizations must realize that there can be no AI without security of AI. Security teams must be able to discover agents wherever they live in enterprise environments, assess potential risk before deployment, and protect agents at runtime as they enter business and operational workflows."

What Organizations Should Do Now

Based on the findings published by Palo Alto Networks and Google's subsequent guidance, organizations using Vertex AI — or any cloud-based AI agent platform — should consider the following steps:

  1. Audit default service account permissions associated with any deployed AI agents and identify where permissions exceed actual operational requirements.
  2. Adopt the BYOSA (Bring Your Own Service Account) model in Vertex AI to enforce least-privilege access at the agent level.
  3. Monitor agent behavior at runtime to detect anomalous credential requests or unexpected lateral movement within cloud environments (see the audit-log sketch after this list).
  4. Review access scopes carefully, particularly any that could extend agent permissions into connected services like Google Workspace, Gmail, Google Calendar, or Google Drive.
  5. Treat AI agent deployments as a security event, not just a development milestone, ensuring security teams are involved from the outset.
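
For step 3, Cloud Audit Logs are a reasonable starting point. Below is a hedged sketch using the google-cloud-logging client; the service-agent email format and time window are illustrative, and the filter would need tuning to each environment:

```python
from google.cloud import logging

client = logging.Client(project="my-project")  # placeholder project

# Illustrative service-agent address; look up the real one for your
# project in the IAM console.
AGENT_SA = "service-123456789@gcp-sa-aiplatform-re.iam.gserviceaccount.com"

# Find recent API calls authenticated as the agent's service account,
# to spot activity outside its expected scope.
FILTER = (
    'logName:"cloudaudit.googleapis.com" '
    f'AND protoPayload.authenticationInfo.principalEmail="{AGENT_SA}" '
    'AND timestamp>="2026-04-01T00:00:00Z"'
)

for entry in client.list_entries(filter_=FILTER, order_by=logging.DESCENDING):
    # Each entry records an API call made with the agent's credentials.
    print(entry.timestamp, entry.log_name)
```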

The Palo Alto research was published on March 31, 2026, and serves as a timely reminder that as AI agents become more deeply embedded in enterprise infrastructure, the attack surface they represent grows commensurately — and must be managed accordingly.


Source: Dark Reading
