News Arena


Technology

Can we stop Grok? Here's what to do

The Take It Down Act criminalises the non-consensual publication of “intimate visual depictions” of identifiable people, whether real or AI- or otherwise computer-generated.

News Arena Network - New York - UPDATED: January 9, 2026, 05:31 PM - 2 min read


Grok is making it easy for users to flood X with non-consensual sexualised images.


Since the end of December 2025, X’s artificial intelligence chatbot, Grok, has responded to many users’ requests to undress real people by turning their photos into sexually explicit material. As the feature spread, the social platform faced global scrutiny for enabling users to generate non-consensual sexually explicit depictions of real people.

 

The Grok account has posted thousands of “nudified” and sexually suggestive images per hour. Even more disturbing, Grok has generated sexualised images and sexually explicit material of minors.

 

X’s response: Blame the platform’s users, not us. The company issued a statement on January 3, 2026, saying that “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.” It’s not clear what action, if any, X has taken against any users.

 

Legal experts who study the intersection of law and emerging technologies see this flurry of non-consensual imagery as a predictable outcome of the combination of X’s lax content moderation policies and the accessibility of powerful generative AI tools.

 

Targeting users

 

The rapid rise of generative AI has led to countless websites, apps and chatbots that allow users to produce sexually explicit material, including “nudification” of real children’s images. But these apps and websites are not as widely known or used as major social media platforms like X.

 

State legislatures and Congress were relatively quick to respond. In May 2025, Congress enacted the Take It Down Act, which makes it a criminal offense to publish non-consensual sexually explicit material of real people. The act criminalises the non-consensual publication of “intimate visual depictions” of identifiable people, whether real or AI- or otherwise computer-generated.

 

Also read: Obscene AI content: X to submit report to government today

 

Those criminal provisions apply only to the individuals who post the sexually explicit content, not to the platforms that distribute it, such as social media websites.

 

Other provisions of the Take It Down Act, however, require platforms to establish a process for the people depicted to request the removal of the imagery. Once a takedown request is submitted, a platform must remove the sexually explicit depiction within 48 hours. But these requirements do not take effect until May 19, 2026.

 

Problems with platforms

 

Meanwhile, user requests to take down the sexually explicit imagery produced by Grok have apparently gone unanswered. Even Ashley St. Clair, the mother of one of Elon Musk’s children, has not been able to get X to remove the fake sexualised images of her that Musk’s fans produced using Grok. According to reports, St. Clair said her “complaints to X staff went nowhere.”

 

This does not surprise experts because Musk gutted then-Twitter’s Trust and Safety advisory group shortly after he acquired the platform and fired 80 per cent of the company’s engineers dedicated to trust and safety. Trust and safety teams are typically responsible for content moderation and initiatives to prevent abuse at tech companies.

 

Via The Conversation

2026 News Arena India Pvt Ltd | All rights reserved | The Ideaz Factory