San Francisco Sets the Stage for an AI-Defined Security Landscape
In March 2026, Moscone Center in San Francisco once again hosted the RSA Conference, drawing thousands of security practitioners, vendors, and investors from across the globe. While past conferences have centered on cloud security, ransomware, and zero trust, this year a single concept commanded every keynote, panel discussion, and vendor booth: agentic AI. Not AI as a passive assistant, but AI as an independent actor capable of initiating complex operations without human direction.
The emergence of Mythos, a next-generation AI framework designed to orchestrate multi-step cyber operations, crystallized both the extraordinary potential and the profound danger this technological shift represents. The conversations in San Francisco made one thing unmistakably clear: cybersecurity must fundamentally rethink how it approaches defense when the threat landscape includes entities that can reason, adapt, and act on their own.
Market Forces Are Accelerating the Transition
The scale of investment flowing into AI underscores how rapidly this transition is occurring. Gartner forecasts AI spending to grow by 44 percent in 2026 and to climb into the trillions of dollars by 2029, sums that dwarf its projected $238 billion for information security and risk management solutions in 2026 alone. The implication is stark: AI is not a feature of cybersecurity; it is becoming the dominant force shaping the entire digital economy.
In response to growing demand from defenders, OpenAI has scaled its Trusted Access for Cyber program to support thousands of verified defenders and hundreds of security teams. Meanwhile, the Cloud Security Alliance has issued an urgent call for organizations to fight AI-powered attacks with AI-powered defenses, predicting a surge in simultaneous, coordinated AI-driven campaigns.
The Dual-Use Reality of Autonomous Capabilities
Technologies like Mythos lay bare a fundamental truth that security professionals cannot afford to ignore: the same capabilities that make agentic AI valuable for defenders make it equally powerful — and dangerous — in the hands of adversaries.
Attackers are already leveraging AI to accomplish the following without meaningful human involvement:
- Autonomous reconnaissance and lateral movement across target environments
- Real-time adaptation to defensive measures as they are encountered
- Scalable, low-cost attacks that require minimal human oversight or expertise
This is not a theoretical or future-tense concern. Early rogue AI agents have already been observed probing environments, exploiting misconfigurations, and mimicking the behavior of legitimate users. Critically, attackers no longer need to control every step of an intrusion. They can deploy agents that behave like authenticated identities, operating within normal-looking parameters while carrying out malicious objectives.
The Familiar Trap: Point Solutions and Tool Sprawl
Every significant shift in the cybersecurity landscape has historically triggered a wave of narrowly focused point solutions. The pattern is well established: vendors rush to market with specialized tools, organizations adopt multiple overlapping platforms, and the result is tool sprawl, siloed visibility, and mounting operational complexity. Ironically, these gaps in coherence often benefit attackers more than defenders.
Agentic AI is already following the same trajectory. Early signs of fragmentation are visible in the emergence of:
- AI security posture management tools
- AI runtime protection platforms
- AI-specific anomaly detection engines
- AI governance solutions
Each of these categories may offer discrete value, but collectively they increase friction and add yet more dashboards for already overwhelmed security teams. The real need is not more tools but better context and centralized control over every entity operating within an environment, whether that entity is human or machine.
A Pragmatic Framework: AI as an Identity
At the parallel AGC Cybersecurity Investor Conference held alongside RSA, AI experts and industry leaders converged on a more grounded conclusion. Rather than treating agentic AI as an entirely new tool category demanding separate security stacks, organizations should treat AI the same way they treat any other actor in their environment: as an identity.
This framing cuts through considerable hype by placing AI squarely within the established and well-understood domain of identity security. The logic is compelling because agentic AI behaves in all the ways that identities behave:
- It authenticates via APIs, tokens, or credentials
- It accesses systems and sensitive data
- It performs actions within an environment on behalf of itself or others
- It can be compromised, misused, or go rogue
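The parallel above can be made concrete. A minimal sketch, assuming a hypothetical identity record (all names, fields, and thresholds here are illustrative, not drawn from any specific product), shows an AI agent tracked with exactly the attributes an identity system already keeps for a human user: a credential, least-privilege scopes, an expiry, and a running risk score.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class AgentIdentity:
    """Hypothetical record: an AI agent modeled as just another identity."""
    agent_id: str
    owner: str                          # accountable human or team
    credential_type: str                # e.g. "api_token", "oauth_client"
    scopes: list = field(default_factory=list)  # least-privilege grants
    token_expiry: datetime = None
    risk_score: float = 0.0             # updated by behavioral analytics

    def is_expired(self) -> bool:
        return self.token_expiry is not None and datetime.utcnow() >= self.token_expiry

    def authorize(self, requested_scope: str, risk_threshold: float = 0.7) -> bool:
        # The same three checks applied to a human identity:
        # valid credential, in-scope request, acceptable current risk.
        if self.is_expired() or self.risk_score >= risk_threshold:
            return False
        return requested_scope in self.scopes

agent = AgentIdentity(
    agent_id="agent-042",
    owner="sec-ops",
    credential_type="api_token",
    scopes=["read:logs"],
    token_expiry=datetime.utcnow() + timedelta(hours=1),
)
print(agent.authorize("read:logs"))    # in-scope request, low risk -> True
print(agent.authorize("write:prod"))   # out-of-scope request -> False
```

Nothing in this sketch is AI-specific; that is the point of the framing, since the controls governing the agent are the ones the identity stack already enforces.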
Once this parallel is accepted, the path toward a coherent and less fragmented defense becomes considerably clearer.
Identity Threat Detection as the Logical Control Plane
If agentic AI is treated as an identity, then identity threat detection and risk mitigation solutions naturally become the primary control plane for managing and securing it. These frameworks are already built around analyzing behavior across credentials and systems, combining adaptive verification, behavioral analytics, device intelligence, and risk scoring within a unified platform.
Applied specifically to AI agents, this approach enables several critical defensive capabilities:
- Behavioral visibility to detect anomalies such as unusual access patterns, privilege escalation attempts, or unexpected data exfiltration
- Risk-based access controls that adjust permissions dynamically, enforce additional verification steps, or isolate suspicious agents in real time
- Unified policy enforcement across both human and machine identities, eliminating the need for parallel governance structures
- Lifecycle management to identify and remediate orphaned or unmanaged AI agents before they can be exploited
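The capabilities above compose naturally: behavioral signals feed a risk score, and the score drives a graduated response. A minimal sketch, with signal names, weights, and thresholds that are purely illustrative assumptions rather than any vendor's actual model:

```python
# Illustrative behavioral signals and weights; real platforms derive these
# from learned baselines, not a hand-written table.
RISK_WEIGHTS = {
    "unusual_access_pattern": 0.3,
    "privilege_escalation_attempt": 0.5,
    "unexpected_data_egress": 0.4,
    "orphaned_credential": 0.6,   # lifecycle gap: agent with no accountable owner
}

def score_agent(signals: list[str]) -> float:
    """Combine observed behavioral signals into a capped risk score."""
    return min(1.0, sum(RISK_WEIGHTS.get(s, 0.0) for s in signals))

def respond(risk: float) -> str:
    """Map risk to a graduated response under one policy for all identities."""
    if risk >= 0.8:
        return "isolate"          # quarantine the suspicious agent in real time
    if risk >= 0.4:
        return "step_up_verify"   # enforce additional verification
    return "allow"

print(respond(score_agent([])))                                  # allow
print(respond(score_agent(["unusual_access_pattern",
                           "unexpected_data_egress"])))          # step_up_verify
print(respond(score_agent(["privilege_escalation_attempt",
                           "orphaned_credential"])))             # isolate
```

The design point is that the same scoring and response pipeline serves human and machine identities alike, which is what removes the need for a parallel AI-only governance structure.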
As rogue AI agents proliferate — whether compromised by external adversaries or deployed with malicious intent from the outset — identity-driven security provides a practical and extensible defense. It enforces least-privilege principles, continuously validates access rights, detects abnormal behavioral patterns, and automates response actions. Crucially, these capabilities already exist within modern identity security frameworks and can be extended to cover AI agents without introducing new organizational silos or requiring wholesale platform replacement.
Why This Approach Matters Now
The urgency of this strategic reorientation cannot be overstated. The capabilities demonstrated by frameworks like Mythos are not years away — they are being actively developed and, in some cases, already deployed. The Cloud Security Alliance's warning about accelerating AI threats reflects a real and present danger, not a distant hypothetical.
Organizations that wait for the market to consolidate around dedicated AI security tools risk repeating the mistakes of previous technology cycles. Each new silo adds latency, complexity, and potential blind spots. By contrast, anchoring AI security within identity threat detection frameworks leverages existing investments, established expertise, and proven methodologies.
The broader message from the security community gathered in San Francisco this past March is deceptively simple but strategically profound: if it can act, it should be treated like an identity. In a world where autonomous agents are becoming both the most powerful defensive asset and the most dangerous offensive weapon, that principle may prove to be the most durable foundation available to defenders navigating an increasingly autonomous threat landscape.
Source: SecurityWeek