From Shadow IT to Shadow AI: A Familiar Problem Gets Worse
Shadow IT has been a persistent thorn in the side of security teams for years. The root cause is straightforward: employees want to work more efficiently and reach for whatever tools help them do that, regardless of whether IT has sanctioned those tools. The downstream consequence is unmanaged, unmonitored risk entering the enterprise environment.
That same dynamic is now playing out with artificial intelligence. The tools workers quietly adopt to boost their productivity are increasingly AI-powered, giving rise to what security practitioners now call shadow AI. By definition, shadow AI is invisible to both IT and security departments, and the risks it introduces are arguably more severe than those posed by traditional shadow IT — particularly because of the autonomous, agentic capabilities that modern AI systems can wield.
What CoChat Is and Why It Was Built
CoChat launched during the first week of April 2026 with a stated mission to bring visibility and governance to shadow AI in the enterprise. Rather than simply prohibiting employees from using AI tools, the platform takes a different approach: give workers sanctioned, centralized access to the major foundational large language models (LLMs) so they no longer need to build disconnected, unmanaged silos of generative and agentic AI on their own.
Marcel Folaron, CEO of CoChat, explained the problem the platform is designed to solve:
"People feel the pain of needing to get the most out of AI, wanting to increase their performance productivity. So, they turn to automated AI tooling, such as OpenClaw and other locally installed tools, but not necessarily with IT's knowledge. This can be very dangerous. These tools have access to everything on your system, and without the proper control mechanisms, they can run amok."
One specific example cited is OpenClaw, an autonomous personal assistant that has accumulated an estimated 3 million active users. Tools like OpenClaw are purpose-built to maximize personal performance, but they operate autonomously on a user's system with broad access and minimal built-in restraint.
The Dual Risks: Inaccurate LLMs and Runaway Agents
The LLM Accuracy Problem
One of the less-discussed dangers of shadow AI is the inconsistency of LLM outputs. Different employees may rely on different models, and those models may return conflicting answers to identical questions. Worse still, the broader organization may have no awareness that its employees are not exercising independent professional judgment — they are instead deferring to an external, unvetted AI system whose responses are not guaranteed to be accurate. LLMs are also known for their tendency to tell users what they want to hear, a form of sycophancy that can mislead individuals who lack a benchmark for comparison.
Agentic AI: When Autonomy Becomes Liability
The risk escalates significantly when employees begin deploying agentic AI systems. These agents are dynamic, adaptive, and stateful — they take actions based on the reasoning output of an underlying LLM, often without any further input from the user. The LLM reasons; the agent acts. If that reasoning is flawed or leads toward harmful outcomes — such as exposing sensitive data to third parties or deleting enterprise files — there may be no human checkpoint to catch it.
CoChat addresses this by inserting a control layer between the LLM and the agent. When that layer detects that an instruction could result in a dangerous action, it pauses the autonomous process and requires explicit user approval or rejection before proceeding.
As Folaron described it:
"If we identify an action we deem to be dangerous, we delay that action. We ask the user to approve or reject that action, and the next action is directed by the user rather than automatically enacted by the agentic system."
In other words, CoChat enforces a human-in-the-loop requirement even when the underlying agentic system was designed to operate without one.
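Conceptually, that checkpoint can be expressed in a few lines of code. The sketch below is a minimal illustration of the pattern only, not CoChat's implementation, which has not been published; the `Action` class, the `DANGEROUS_ACTIONS` set, and `is_dangerous` are hypothetical names chosen for clarity.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A single step an agent proposes to take, derived from LLM reasoning."""
    name: str    # e.g. "delete_file", "send_email" (hypothetical action names)
    target: str  # the resource the action touches

    def execute(self) -> None:
        print(f"executing {self.name} on {self.target}")

# Hypothetical policy: action types that must never run unattended.
DANGEROUS_ACTIONS = {"delete_file", "send_email", "upload_data"}

def is_dangerous(action: Action) -> bool:
    """Flag actions that could destroy data or expose it to third parties."""
    return action.name in DANGEROUS_ACTIONS

def run_with_human_in_the_loop(plan: list[Action]) -> None:
    """Execute an agent's plan, pausing on any flagged action.

    Safe actions run autonomously; dangerous ones are held until a human
    explicitly approves them.
    """
    for action in plan:
        if is_dangerous(action):
            answer = input(f"Agent wants to {action.name} {action.target}. Approve? [y/N] ")
            if answer.strip().lower() != "y":
                print(f"Rejected: {action.name} on {action.target}; stopping plan.")
                return  # the next step is directed by the user, not the agent
        action.execute()

if __name__ == "__main__":
    run_with_human_in_the_loop([
        Action("read_file", "report.docx"),
        Action("delete_file", "report.docx"),
    ])
```

Note that a rejection hands control back to the user rather than letting the agent continue on its own, which matches the behavior Folaron describes above.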
A Slack-Like Model for AI Transparency
The platform's collaborative design draws a useful analogy to how teams use Slack. Just as Slack channels bring individuals together around shared work and allow colleagues to surface concerns, CoChat creates a shared workspace where the outputs and behaviors of different LLMs and agentic systems can be observed, compared, and questioned by multiple team members.
This multi-user visibility is significant. Where a single user might be misled by an LLM's tendency to please — providing the answer it assumes the user wants rather than the most accurate response — colleagues on the same platform can review AI outputs and raise red flags. The platform doesn't just ensure one human in the loop; it enables multiple humans in each loop.
Users retain the freedom to run whichever LLM or agentic system they prefer. CoChat actively encourages the use of multiple LLMs in parallel to cross-check responses and detect potential hallucinations or misdirection before they influence downstream actions.
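In practice, that cross-checking can be as simple as fanning the same question out to several models in parallel and flagging any disagreement for human review. The following sketch assumes a stand-in `ask_model` helper in place of real provider SDKs; the model names and canned answers are illustrative only.

```python
from concurrent.futures import ThreadPoolExecutor

def ask_model(model: str, question: str) -> str:
    """Stand-in for real provider SDK calls; returns canned answers for the demo."""
    canned = {
        "model-a": "Paris",
        "model-b": "Paris",
        "model-c": "Lyon",  # deliberate outlier to show disagreement handling
    }
    return canned[model]

def cross_check(question: str, models: list[str]) -> dict[str, str]:
    """Query several LLMs in parallel and flag conflicting answers."""
    with ThreadPoolExecutor() as pool:
        answers = dict(zip(models, pool.map(lambda m: ask_model(m, question), models)))
    normalized = {a.strip().lower() for a in answers.values()}
    if len(normalized) > 1:
        print(f"Models disagree on {question!r}: {answers} -- flag for human review")
    return answers

if __name__ == "__main__":
    cross_check("What is the capital of France?", ["model-a", "model-b", "model-c"])
```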
Governance Without Prohibition
Folaron has described CoChat as a platform that "brings the top AI solutions seamlessly into a secure workspace so teams can collaborate more effectively and use these tools with greater transparency and confidence." The AI accessed through the platform may technically still qualify as shadow AI from a definitional standpoint, but a layer of visibility, transparency, and governance is applied on top of it — transforming an invisible liability into a manageable one.
The platform is designed to connect AI workflows to tools teams already use, while retaining the ability to interrupt potentially dangerous autonomous actions before they cause harm. Its core value proposition rests on three pillars:
- Visibility: Surfacing AI usage that would otherwise remain hidden from IT and security teams
- Governance: Imposing policy-based controls over agentic actions that could cause harm (one way such rules might be expressed is sketched after this list)
- Collaboration: Replacing isolated individual AI silos with shared, team-based AI workspaces
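To make the governance pillar concrete, here is one way policy-based controls over agentic actions might be expressed. The `PolicyRule` structure, the verdicts, and the rule set are all hypothetical, sketched only to show the shape of such a policy layer.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"           # run autonomously
    REQUIRE_APPROVAL = "ask"  # pause for human sign-off
    BLOCK = "block"           # never execute

@dataclass(frozen=True)
class PolicyRule:
    """One governance rule: match an agent action by name, return a verdict."""
    action_name: str
    verdict: Verdict

# Illustrative rule set an administrator might maintain centrally.
POLICY = [
    PolicyRule("read_file", Verdict.ALLOW),
    PolicyRule("send_email", Verdict.REQUIRE_APPROVAL),
    PolicyRule("upload_data", Verdict.REQUIRE_APPROVAL),
    PolicyRule("delete_file", Verdict.BLOCK),
]

def evaluate(action_name: str) -> Verdict:
    """Return the first matching rule's verdict; default to requiring approval."""
    for rule in POLICY:
        if rule.action_name == action_name:
            return rule.verdict
    return Verdict.REQUIRE_APPROVAL  # fail safe for unknown actions

if __name__ == "__main__":
    for name in ("read_file", "delete_file", "spawn_agent"):
        print(name, "->", evaluate(name).value)
```

Defaulting unknown actions to requiring approval rather than allowing them is the fail-safe choice: new agent capabilities stay paused until an administrator explicitly sanctions them.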
The Broader Context: Shadow AI Is Already Widespread
The problem CoChat is targeting is not hypothetical. Research has found that 50% of workers use unapproved AI tools, and the SaaS applications they quietly adopt are creating significant security exposure as a result. The emergence of powerful agentic systems like OpenClaw, with its estimated 3 million active users, illustrates how quickly unsanctioned AI adoption can scale before organizations even become aware of the risk.
Enterprise governance frameworks for agentic AI are still nascent, and the speed at which these tools proliferate is outpacing most organizations' ability to respond. CoChat's April 2026 launch positions it in a market where the need is real but the solutions are still emerging, offering one concrete answer to the question of how enterprises can regain control of an AI landscape that is increasingly operating outside their line of sight.
Source: SecurityWeek