Security Leaders Are Embracing AI — and Seeing Results
For years, artificial intelligence in cybersecurity was met with a mixture of skepticism and cautious optimism. That sentiment has shifted dramatically. In a recent conversation hosted by Dark Reading, Reddit CISO Fredrick Lee (widely known by his nickname "Flee") and Dave Gruber, principal analyst for cybersecurity at Omdia, discussed where AI is genuinely delivering value in security operations, what the next phase of adoption looks like, and where the real risks lie.
The discussion, which took place as part of the Dark Reading Confidential podcast, reflected a broader industry inflection point: after two to three years of experimentation, security teams are moving past fear and into confident, targeted deployment of AI and large language model (LLM) tools.
From Hype to Hands-On Value
Lee was direct about tempering expectations while acknowledging tangible gains. "It's not just smoke and mirrors with regards to some of the new technology," he said, adding that security teams — particularly those in Silicon Valley — are extracting meaningful utility from LLMs, especially around automation.
One prominent example he cited was the workflow automation platform Tines. According to Lee, AI integration has made Tines significantly more accessible, enabling analysts to interact with it conversationally rather than requiring deep technical familiarity. "People can now actually talk to Tines effectively like you would another human," he explained.
Another concrete use case Lee highlighted involves feeding existing security runbooks into LLMs and converting them into autonomous agents capable of sustaining operational continuity. This approach, he noted, extends not just an organization's coverage of its attack surface but also the hours during which analysts can receive responsive support.
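Neither speaker walked through an implementation, but the pattern is easy to sketch. Below is a minimal, hypothetical Python example assuming an OpenAI-style chat completions API; the runbook text, model name, and alert format are illustrative placeholders, not anything Lee described:

```python
# Hypothetical sketch: turning a written runbook into an LLM-driven triage
# assistant. Assumes the OpenAI Python client; the runbook content, model
# name, and alert format are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUNBOOK = """\
Phishing triage runbook:
1. Extract the sender domain and any URLs from the reported email.
2. If a URL matches a known-bad list, quarantine the message and open a ticket.
3. Otherwise, request full headers from the reporter and escalate to Tier 2.
"""

def triage_alert(alert_text: str) -> str:
    """Ask the model to apply the runbook to one alert and propose a next step."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Follow this runbook exactly "
                        "and reply with the single next action to take:\n" + RUNBOOK},
            {"role": "user", "content": alert_text},
        ],
    )
    return response.choices[0].message.content

print(triage_alert("User reported email from billing@examp1e-payments.com "
                   "containing the link hxxp://examp1e-payments.com/invoice"))
```

In practice, the model's proposed action would typically flow into an orchestration layer with human approval gates rather than executing directly.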
Lee also pointed to a capability that is quietly transforming analyst workflows: natural language interfaces for query languages. Tools like BigQuery and Splunk have traditionally required analysts to learn proprietary query syntax. With LLM integration, an analyst can now simply type a plain-language request — such as asking for information about a specific IP address — and the model translates that into the appropriate query automatically.
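Lee did not detail how these integrations are built, but the basic translation step looks something like the following hypothetical sketch, again assuming an OpenAI-style API; the index name, field names, and model are invented for illustration:

```python
# Hypothetical sketch of the natural-language-to-query pattern Lee described:
# an LLM translates a plain-English request into Splunk SPL. The schema hint
# and model name are illustrative assumptions, not a documented integration.
from openai import OpenAI

client = OpenAI()

SCHEMA_HINT = (
    "Logs live in index=firewall with fields src_ip, dest_ip, action, _time."
)

def to_spl(question: str) -> str:
    """Translate a plain-language question into an SPL query string."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[
            {"role": "system",
             "content": "Translate the user's question into a single Splunk "
                        "SPL query. Output only the query. " + SCHEMA_HINT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

# A request like the one below might yield:
#   search index=firewall src_ip=203.0.113.7 earliest=-24h
print(to_spl("Show me everything from 203.0.113.7 in the last 24 hours"))
```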
Two Phases of AI Adoption in Security
Gruber offered a broader industry framing, describing the past 18 to 24 months of AI adoption in cybersecurity as unfolding in two distinct phases.
The first phase involved horizontal use cases — capabilities that cut across many security functions regardless of specialization. These include:
- Task-specific automation, such as data enrichment (a minimal sketch of this pattern follows the list)
- More dynamic handling of malware sandboxing
- Summarization of security incidents, enabling analysts to quickly produce and share case write-ups that previously required significant manual effort
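To make the first of those use cases concrete, here is a minimal, hypothetical sketch of enrichment automation; the lookup functions are stubs standing in for real reputation and geolocation services:

```python
# Hypothetical sketch of task-specific enrichment automation: given an alert,
# fan out to lookup sources and attach the results to the case. The lookup
# functions are stubs, not real service integrations.
from dataclasses import dataclass, field

@dataclass
class Case:
    alert_ip: str
    enrichment: dict = field(default_factory=dict)

def reputation_lookup(ip: str) -> dict:
    # Stub: in practice this would call a threat-reputation service.
    return {"verdict": "suspicious", "score": 71}

def geo_lookup(ip: str) -> dict:
    # Stub: in practice this would call a geolocation service.
    return {"country": "NL", "asn": "AS64500"}

def enrich(case: Case) -> Case:
    """Run every enrichment source and merge the results onto the case."""
    for name, fn in {"reputation": reputation_lookup, "geo": geo_lookup}.items():
        case.enrichment[name] = fn(case.alert_ip)
    return case

print(enrich(Case(alert_ip="203.0.113.7")))
```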
The second phase centers on vertical use cases — applications tailored to specific, high-complexity security disciplines. Gruber zeroed in on threat intelligence analysis as a prime example. Operationalizing threat intelligence has historically been one of the more challenging aspects of security work. Any delay between gaining access to threat intelligence and being able to act on it adds organizational risk. AI, Gruber argued, is compressing that gap.
"As we put AI to work to help us speed up that process, do more analysis faster, understand what's relevant specifically to my organization contextually, and then get that into the cycle — now we're more threat aware and we can respond faster and more accurately to threats as they happen," he said.
Gruber noted that he has been conducting quarterly research in this space over the past year and described the pace of change as moving "through the traditional net new tech adoption cycle, except on steroids."
A Rare Moment: Security Practitioners Actually Excited About New Tech
Lee made an observation that stood out: the arrival of AI tools may represent one of the first times in recent memory that security practitioners have greeted a new technology category with genuine enthusiasm rather than dread.
"This is probably one of the first times or maybe one of the rarer cases where a new technology came out that security practitioners were excited about," Lee said, "where they were actually seeing, 'Hey, there's actually something here that might be promising to me, not just another bit of technology you have to actually figure out how to secure.'"
He acknowledged that securing LLMs themselves remains an open challenge, but emphasized that the immediate reaction from practitioners — seeing AI as something that expands their own capabilities — has been a key driver of the unusually rapid adoption cycle in security.
Gruber corroborated this shift in attitude, noting that roughly a year ago, security leaders were more enthusiastic than practitioners. Front-line analysts exhibited notable nervousness and caution. But after three research cycles, that dynamic has flipped. "Now there's — I'll call it what it is — it's excitement about what's possible," Gruber said, adding that practitioners are now motivated not just by operational improvements but by how AI might enhance their career prospects as well.
The Risks That Still Demand Attention
Despite the optimism, neither Lee nor Gruber dismissed the risks. Gruber proposed a framework for thinking about them by distinguishing between hard risks and soft risks.
Hard risks are concrete and technical. The most prominent example is AI-generated code introducing security vulnerabilities — a well-documented issue in the context of vibe coding and AI-assisted development tools, where the absence of an experienced human engineer reviewing output before production deployment can quietly introduce exploitable flaws.
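As a hypothetical illustration of that class of flaw (not a sample from any real tool), consider AI-suggested code that builds a SQL query through string interpolation, alongside the parameterized version a reviewing engineer would insist on:

```python
# Hypothetical illustration of the hard risk: code of the kind an AI assistant
# might generate that looks correct but is vulnerable to SQL injection.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # VULNERABLE: user input is interpolated directly into the SQL string,
    # so a value like "x' OR '1'='1" returns every row in the table.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The fix a human reviewer would insist on: a parameterized query, where
    # the driver treats the input as data rather than executable SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```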
Soft risks, by contrast, are subtler but equally dangerous. Gruber described these as stemming from a lack of understanding about the boundaries of what AI systems are actually capable of. When practitioners don't yet understand the decision-making logic of an AI tool, they risk either over-trusting its outputs or failing to establish appropriate monitoring and review processes.
"If I don't pay attention, and I don't understand what's happening, and how the tech is being applied and how to configure it properly," Gruber cautioned, risks accumulate quietly — not through dramatic failure but through a gradual erosion of oversight.
What This Means for Security Teams Going Forward
The picture that emerges from this conversation is one of cautious but accelerating momentum. AI is not yet delivering on every promise attached to it, particularly the more ambitious visions for autonomous security operations. But it is solving real problems today — reducing analyst toil, broadening operational coverage, making automation more accessible, and accelerating the operationalization of threat intelligence.
The organizations seeing the most benefit appear to be those that approached deployment methodically: understanding what specific problems they wanted to solve, learning the boundaries of the tools they adopted, and building appropriate human oversight into the process.
As both Gruber and Lee made clear, the security community's embrace of AI is not blind enthusiasm. It is grounded in operational experience — and that, perhaps more than any vendor promise, is what is driving adoption forward.
Source: Dark Reading