Analysis

RSAC 2026: AI Takes the Spotlight While Human Community Proves Irreplaceable

April 10, 2026 23:50 · 7 min read

AI and Community Collide at RSAC 2026

The RSAC 2026 Conference convened cybersecurity professionals from across the globe in San Francisco to wrestle with the most consequential questions shaping digital security today. Artificial intelligence dominated nearly every conversation on the expo floor and in breakout sessions, yet the official conference theme — "The Power of Community" — was a deliberate counterweight, stressing that human collaboration and oversight remain indispensable even as automation accelerates.

Dark Reading News Director Rob Wright, TechTarget SearchSecurity Senior Site Editor Alissa Irei, and Cybersecurity Dive Senior Reporter Eric Geller convened after the event to share their on-the-ground impressions. Their discussion, part of the ongoing Reporters' Notebook collaboration series from Informa TechTarget's network of cybersecurity sister sites, surfaced several defining narratives: the transformative potential of AI in security operations, the dangers of deploying it without guardrails, and the unsettling gap left by the US government's withdrawal from the event.

The Theme That Pushed Back Against the Hype

Irei noted the deliberate tension baked into this year's conference design. "The theme of the conference was community, which was an interesting and pointed choice because the acronym on everyone's lips at the conference and in general is AI," she explained. Organizers were making a clear argument: artificial intelligence, however capable, is not truly intelligent without human operators guiding and auditing its work. In a moment of widespread anxiety about job displacement — felt not just in cybersecurity but across virtually every industry — the decision to center the theme on human collaboration carried real weight.

More than two-thirds of RSAC 2026 sessions included some AI component or were devoted entirely to the technology. Sessions ranged from understanding the evolving AI-driven threat landscape to evaluating new defensive tools powered by machine learning. Even panels that were not explicitly billed as AI discussions found themselves drawn into the conversation.

A Striking Absence: The US Federal Government

Perhaps the single most discussed development at RSAC 2026 was not a product launch or a research disclosure — it was an empty chair. The US federal government, a perennial presence at the conference where public-private dialogue traditionally flourishes, withdrew its participation several weeks before the event began.

Geller described the impact on attendees: "Every year, government representatives attend to listen to the community and discuss their own plans. This is one of the places where those conversations are the most fruitful." The absence raised immediate questions about whether federal agencies remain as committed to engaging with the security research and business communities as they once were, particularly amid reported budget cuts at agencies that work closely with those groups.

The void was made more conspicuous by the fact that many attendees had anticipated RSAC would be the ideal venue for the government to elaborate on its recently released cybersecurity strategy. That rollout never materialized, leaving practitioners without the strategic clarity they were hoping for.

Wright pointed out that the EU and other foreign governments did send cybersecurity delegations, further highlighting the gap left by Washington. His colleague Becky Bracken at Dark Reading reported on those international contributions. Wright also referenced a story he wrote weeks before the conference about spyware policy and a potential shift in US government posture, noting that civil society organizations, cybersecurity researchers, and vendors in the anti-spyware space felt they were "flying blind, with no clear strategy or direction" when it came to government engagement.

Irei framed the moment starkly: "Ideally, this would be a time for public-private partnerships, cooperation, and input from the private sector on public regulations and legislation. The absence of the federal government is notable and unlikely to ease anyone's anxieties about AI, which are already plentiful."

Agentic AI: Promise and Peril in the SOC

Nowhere was the debate over AI more concrete than in discussions about security operations centers. SOC managers face relentless alert fatigue, talent shortages, and an ever-expanding attack surface. Agentic AI — systems capable of taking autonomous actions rather than simply surfacing recommendations — has emerged as one proposed solution.

Irei highlighted a case study presented by the CISO of Exabeam, in which an agentic AI system deployed within a SOC autonomously identified a North Korean malicious insider on his first day at the organization. According to the CISO, the AI flagged the suspicious activity within hours, if not minutes, of the individual first logging into his account. The example illustrated the genuine upside of autonomous threat detection: catching threats that fatigued human analysts might miss, and doing so at machine speed.

Yet the enthusiasm for automation came with complications. Wright observed a clear divide at the conference between C-level executives pushing for rapid AI adoption and security researchers urging caution. Executives argued that human oversight slows down the very processes AI is meant to accelerate. Researchers countered that removing human supervision entirely opens the door to miscategorizations, cascading errors, and exploitable blind spots.

The Risks of Moving Fast Without Guardrails

Geller connected the push for automation to a less altruistic motivation: cost reduction. "This hunger for automation" is, he argued, "also a hunger for, frankly, profit margins. The fewer people you can pay to do this work, the more money you're going to make, the better you're going to look to shareholders, the more venture funding you can raise. This is really only partly about security. It's largely about looking profitable by shedding some of that labor cost."

The consequences of unsupervised AI were a recurring subject in conference sessions. Geller noted that AI systems left to run without human check-ins can miscategorize threats, and those errors can be costly. He also pointed to growing concern about vibe coding — a practice in which developers rely heavily on AI coding assistants with minimal human review — and the security vulnerabilities such approaches introduce.

On the business side, Irei described an "ask for forgiveness, not permission" attitude toward AI experimentation that she encountered repeatedly, warning that this posture creates real opportunities for bad outcomes.

Finding the Balance: Human Supervisors for AI Workers

Despite the friction, a constructive consensus did emerge from many sessions. Rather than treating AI autonomy and human oversight as mutually exclusive, speakers argued for a model in which both operate simultaneously. Agentic systems handle high-volume, repetitive tasks — freeing specialized human analysts to focus on complex judgment calls — while human operators periodically audit the AI's outputs and course-correct as needed.

Geller articulated the principle clearly: "Just as you need human supervisors for human workers, you're going to need human supervisors for AI workers because nothing human or machine is infallible." He suggested that governance frameworks requiring regular human review of AI activity would surface signs of misbehavior before they compound into serious incidents.

This framing recast the role of the SOC analyst — not as someone replaced by AI, but as someone whose function evolves toward oversight, validation, and strategic interpretation of AI-generated findings.

Looking Ahead

RSAC 2026 captured a cybersecurity community at an inflection point. AI is no longer a future prospect; it is already embedded in threat detection, incident response, and software development pipelines. The pressure to adopt it faster than competitors — and faster than regulators can keep up — is intense and, in many cases, overrides sound security practice.

At the same time, the conference underscored that technology alone cannot secure complex systems. The US government's absence was a reminder that community — real, sustained collaboration between government, private industry, and independent researchers — is not a soft concept. It is a structural requirement for a coherent national and global cybersecurity posture. Whether that community can hold together in an era defined by rapid automation and political uncertainty remains, as of RSAC 2026, an open question.


Source: Dark Reading