FTC Moves to Expand AI Enforcement: Deepfakes, Voice Cloning, and the Take It Down Act

April 21, 2026 · 7 min read

FTC Prepares to Tackle AI-Enabled Harm Head-On

The Federal Trade Commission is positioning itself to play a significantly larger role in combating the misuse of artificial intelligence, with particular attention to nonconsensual sexualized deepfakes and voice cloning fraud. The agency's expanding mandate stems in large part from the Take It Down Act, legislation passed by Congress last year that opened the door to criminal prosecution of anyone who shares or distributes nonconsensual intimate images and digital forgeries — including those created using AI tools.

At a Senate oversight hearing last week, FTC Chair Andrew Ferguson described the Take It Down Act as one of the "greatest legislative achievements" of the current Congress and President Donald Trump's administration. Ferguson indicated the agency was actively preparing for "robust enforcement" of the new law.

First Criminal Conviction Under the New Law

Earlier this month, the Department of Justice secured its first successful conviction under the Take It Down Act. James Strahler, a 37-year-old resident of Columbus, Ohio, pleaded guilty to deploying AI-generated deepfake nude images as part of a targeted harassment campaign against at least six women. Strahler, who has yet to be sentenced, also admitted to using photographs of children in his neighborhood to generate deepfake pornography — underscoring the law's relevance to child safety as well as adult victims.

The Take-Down Provision: What Activates in May

A critical section of the Take It Down Act is set to come into force in May. Once active, it will allow individuals to submit formal "take down" notices to websites that publish or host sexual deepfakes. Platforms will then have just 48 hours to remove the flagged content or face investigation and enforcement action by the FTC.

At a March 30 conference in Washington, D.C., Commissioner Mark Meador acknowledged that while he hopes the FTC will "never have to enforce it," the commission is treating Take It Down enforcement as a top priority and is "actively spinning everything up that we need" to implement the take-down provision.

xAI and Grok Under Scrutiny

The forthcoming enforcement authority could set the stage for one of the first major confrontations between the FTC and the technology sector, especially companies like xAI. Its AI chatbot Grok continues to be used to create and host nonconsensual deepfake images of real individuals, even after the controversy it faced earlier this year over what critics described as a mass "nudification" feature.

Following Meador's remarks at the March conference, CyberScoop asked the commissioner how the take-down provisions might apply to Grok's behavior. Meador clarified that the law prevents the FTC from taking action against any company until formal complaints are received — a process that cannot begin until May.

"This is coming into place, and then if they don't [remove the content] we would get the complaints and then we would go after them at that point. So, we kind of have to wait and see how…companies respond to complaints and requests being made, and my hope would be that every company that gets a request to take something down would immediately take it down." — Commissioner Mark Meador

xAI's press office did not respond to CyberScoop's request for comment on its preparations to comply with the Take It Down Act.

Child Online Safety as a Core Priority

The FTC's recently published strategic plan flagged protecting children online as a "key concern" for the commission, calling for expanded consumer tools and resources. The plan states the commission is "dedicated to exploring other ways the FTC can protect children and support families, including through its new authority under the Take It Down Act."

Casey Waughn, a privacy lawyer and senior associate at Armstrong Teasdale, told CyberScoop that the current commission's emphasis on child online safety creates considerable room for the Take It Down Act to be applied creatively.

"We've seen [that] enforcing technology and privacy violations related to youth [and] children is a priority, so I think it's relatively easy to parlay that into some Take It Down Act enforcement." — Casey Waughn, Armstrong Teasdale

Waughn noted that the one-year delay before the take-down provision's enforcement was intended to give platforms time to prepare. However, she also argued the FTC could do more to publicly communicate what lawful compliance actually looks like — similar to the guidance resources the agency provides around major privacy statutes.

"I think what would be helpful for all organizations…would be guidance explaining what constitutes a good faith effort, for example, to attempt to address a take down request." — Casey Waughn

AI-Powered Scams: A Growing Threat Landscape

Beyond deepfakes, the FTC is also confronting the rapid expansion of AI-enabled fraud targeting American consumers. Chair Ferguson told lawmakers that AI is "increasing both the sophistication of the actual mechanisms by which the scams are accomplished, but it's also making it easier for scammers to choose their targets."

The agency's enforcement powers, however, remain constrained. The Federal Communications Commission holds regulatory authority over the telephone and internet providers that carry most scam traffic. Ferguson also pointed out that many call center operations are based overseas "where they don't bat an eye at the risk of civil enforcement from the FTC," and he indicated the commission would welcome additional legislative authority to address the problem.

Voice Cloning Fraud: Nearly $900 Million in Losses

At the March conference, Commissioner Meador described AI-fueled deception as something the commission thinks about "daily," warning it is lowering the barrier to entry for a wide range of criminal schemes. According to the FBI, voice cloning scams that impersonate distressed family members bilked Americans out of nearly $900 million last year. The same technology has been weaponized to impersonate senior Trump administration officials in conversations with businesses and political leaders.

In response to the growing threat, Senator Maggie Hassan wrote to four AI voice cloning companies — ElevenLabs, LOVO, Speechify, and VEED — demanding information on what policies and programs each had in place to prevent or deter fraud enabled by their platforms.

Defining Deception in the Age of AI

One of the more complex challenges facing the FTC is determining when AI-generated content crosses the line into actionable deception. Meador acknowledged the difficulty, noting that many deepfakes are consumed by large online audiences with the same "willing suspension of disbelief" people bring to computer-generated visual effects in films. As a result, the FTC will likely have to evaluate cases individually rather than through "broad brush strokes."

"I think we'll see a lot of that in the AI context, where if you know something wasn't meant to be real or authentic, that's not a concern. The question is then, what are those situations where there is an expectation that you're being shown something authentic and quote, unquote 'real' as opposed to being AI generated and was there misrepresentation or material omission to disclose that?" — Commissioner Mark Meador

The FTC's expanding AI portfolio reflects broader recognition that existing legal frameworks are struggling to keep pace with the speed at which AI-enabled harms are emerging. With the Take It Down Act's take-down provisions activating in May, the coming months are likely to test both the commission's enforcement resolve and the willingness of major technology platforms to comply.


Source: CyberScoop
