A Quiet Gathering With Loud Concerns
A congressional subcommittee convened Thursday for a roundtable discussion focused on the potential of artificial intelligence — but what began as a measured policy conversation quickly spiraled into existential territory, with lawmaker after lawmaker voicing deep anxieties about where the technology is heading.
The House Oversight Committee's subcommittee on "Artificial Intelligence and American Power" brought together AI company executives, academics, and corporate technology implementers alongside members of Congress. The session unfolded even as other major debates consumed Capitol Hill, including the scope of federal surveillance authorities, the conflict with Iran, and funding for the Department of Homeland Security.
Lawmakers Voice a Wide Range of Fears
The concerns raised by individual representatives spanned an unusually broad spectrum, reflecting how far-reaching AI's potential impacts are perceived to be.
- Rep. James Walkinshaw (D-Va.) raised the alarm over federal employees potentially using AI chatbots to process sensitive government data.
- Rep. William Timmons (R-S.C.) questioned whether it should be made illegal for AI systems to use someone's likeness to generate non-consensual pornographic images.
- Rep. John McGuire (R-Va.) worried that AI models could prevent U.S. military forces from carrying out lethal operations by reaching their own conclusions about what constitutes "moral" behavior.
- Rep. Yassamin Ansari (D-Ariz.) raised concerns about the Trump administration's deployment of AI during the conflict with Iran, as well as AI's heavy energy consumption and potential environmental consequences.
'A Revolution on Our Hands'
Perhaps the sharpest warning came from Rep. Dave Min (D-Calif.), who cautioned that communities across the country would begin experiencing the technology's impacts in the near term.
"People in our districts across this country are going to start feeling impacts very soon, and if we don't start thinking properly and aggressively and proactively about the challenges that AI creates, I fear that we're going to have a revolution on our hands."
Rep. Maxwell Frost (D-Fla.), the subcommittee's ranking Democrat and currently the youngest member of Congress, acknowledged AI's potential to cure diseases and drive economic growth. But he expressed deep skepticism about whether Congress would act quickly enough to prevent catastrophic outcomes.
"I don't have faith in this institution to actually put the common sense guardrails in place. And then we fast forward ten years, and the house is on fire. That won't be good for anybody, whether it's the industry or working families and people, or this institution itself."
Praise Alongside Alarm
Not every voice in the room was pessimistic. Rep. Eric Burlison (R-Mo.) opened the session with enthusiasm, marveling at how one panelist's company had used AI to automate and accelerate manufacturing processes in its factories. Burlison compared the experience to science fiction, saying it was "the closest thing to Star Trek I've ever seen." He also asked what congressional districts could do to make themselves more attractive to AI companies seeking to expand.
Still, even Burlison's optimism was tempered by the mood in the room, which grew increasingly tense as the discussion turned toward national security and civilizational risk.
The Question Nobody Could Fully Answer
Rep. Eli Crane (R-Ariz.), a former Navy SEAL with combat experience, posed what may have been the session's most striking question to the assembled panel of experts and industry leaders.
"I recognize AI is not going anywhere. That being said, does anyone on this panel feel or believe, in any way, that as we are going down the road in this AI race, we might be simultaneously engineering our own destruction?"
The experts present offered no unconditional reassurance. They highlighted AI's vast and growing capabilities while urging lawmakers to approach policymaking thoughtfully and with full command of the facts.
Experts Push for Action on Safety and Competitiveness
Mark Beall, president of government affairs at the AI Policy Network Inc. and a former Pentagon official, warned that inaction on key national security concerns could cost the United States its global competitive edge in AI development.
Robert Atkinson, founder of the Information Technology and Innovation Foundation, a technology think tank, offered cautious optimism while calling for meaningful investment.
"I don't think it's going to kill us. At the same time, I do think it's important for the federal government to seriously fund AI safety research. We need to know a lot more about how the models work."
Spencer Overton, a law professor at George Washington University, challenged lawmakers directly when asked whether AI companies were acting as responsible stewards. While he acknowledged that incentives for AI companies were generally aligned correctly, he placed the ultimate burden of accountability on elected officials.
"Constituents are looking for you, not for companies, to step up and protect them. They're trusting you, the person that they voted for, to do that, as opposed to companies. That's the way the system works, right?"
Anthropic's Mythos Model Looms Over the Discussion
Several lawmakers openly discussed disclosures from AI firm Anthropic, which recently announced its Mythos AI model — a system the company says has capabilities so powerful that it has limited its availability to select customers. Anthropic claims the model has demonstrated an apparent ability to bypass traditional cybersecurity defenses and potentially compromise major institutions, including banks, government agencies, and large corporations.
The announcement added a tangible, near-term edge to what might otherwise have remained an abstract policy debate, underscoring that the AI systems lawmakers are trying to govern are already approaching capabilities that could fundamentally alter the cybersecurity landscape.
Congress Races to Keep Pace
Thursday's roundtable reflects a broader struggle on Capitol Hill to match the dizzying speed of global AI development with coherent, actionable policy. The conversation made clear that while lawmakers are increasingly aware of the stakes involved, significant uncertainty remains about whether the legislative process can move fast enough — or boldly enough — to shape outcomes before the technology does it for them.
The subcommittee's discussion is expected to inform ongoing efforts to craft AI governance frameworks, though no specific legislative proposals were announced as a result of Thursday's session.
Source: SecurityWeek