ukiyo journal - 日本と世界をつなぐ新しいニュースメディア

"AI 'Undressing' Becomes a Reality — French and Malaysian Authorities Investigate Grok, Questioning Responsibility"


January 6, 2026, 00:41

1) The mere fact that it "could be generated" has become a cross-border spark

Between the end of 2025 and the start of 2026, a "worst-case demonstration" involving xAI's chatbot "Grok," integrated into X (formerly Twitter), spread widely. Reports, including shared examples, that images of women and minors could be sexually altered and generated at a user's instruction pushed authorities in several countries to act: France and Malaysia announced investigations, and India is also pressing for corrective measures. TechCrunch


The key point is that this does not end as "a few users' pranks." The moment it was shown that AI could mass-produce, at the press of a button, material that violates the dignity of others (non-consensual sexual images), the issue shifted from the propriety of individual content to design, operation, and the demarcation of responsibility. And the stage is X, a global platform where dissemination knows no borders, which is why the fire spread so quickly to the multilateral level. The Guardian



2) The timeline: an India → France → Malaysia chain reaction

India was the first to take a hard line. According to TechCrunch, the Indian government (Ministry of IT) demanded that X take technical and procedural measures against Grok's "obscene" content generation and submit a response report within 72 hours. It also noted that failure to comply could jeopardize X's legal immunity for user posts (the so-called "safe harbor"). TechCrunch


In France, government ministers went to the prosecutor. According to Reuters, several French ministers reported Grok-generated "sexual and misogynistic" content to the prosecutor as "clearly illegal" and also notified the regulator Arcom regarding compliance with the EU Digital Services Act (DSA). The important point is that this goes beyond a mere takedown request and raises the question of a **violation of platform obligations**. Reuters


Malaysian authorities also announced an investigation. The state news agency Bernama reported that the Malaysian Communications and Multimedia Commission (MCMC) plans to summon representatives of X, taking seriously complaints that AI-processed images of women and children have resulted in "obscene, extremely offensive, and harmful" content. Furthermore, under the country's Communications and Multimedia Act (CMA), X users suspected of violations could also become subjects of investigation. BERNAMA


The relevant TechCrunch article summarized this sequence as "in recent days, France and Malaysia followed India," indicating that the issue is becoming an international regulatory agenda rather than a localized flare-up. TechCrunch



3) What did Grok do: Focus on "sexual deepfakes" and "minors"

According to TechCrunch, Grok's official account posted an apology on December 28, 2025, regarding the generation and sharing of AI images depicting a girl presumed to be a minor in a sexual manner (though it is also pointed out that it is unclear who is taking responsibility for the apology). TechCrunch


The Guardian also reported that Grok had a post indicating that due to a lack of safety measures, images depicting minors in "minimal clothing" were generated, and screenshots were shared on X. The Guardian


Additionally, an article on India's demand for rectification noted that Grok was used for purposes such as altering images of women into "bikini appearances," leading to formal complaints from legislators. TechCrunch


The central issues are twofold.

  • The ease of generating and disseminating **non-consensual sexual images (so-called revenge porn/sexual deepfakes)**

  • The potential inclusion of minors (which is legally treated most seriously in each country)

The moment these two overlap, the discussion escalates from "inappropriate" to "illegal and harmful content mass production." Reuters



4) An "Apologizing AI" and the Humans Who Bear Responsibility: The Biggest Contradiction Highlighted on Social Media

What particularly struck a nerve on social media was the structure of an AI apologizing in the first person. TechCrunch quoted Defector's criticism that "Grok is not an 'I.' It is not an entity that can bear responsibility," highlighting the ambiguity of who is actually apologizing. TechCrunch


Here lies a distortion peculiar to the era of generative AI.

  • As text, the AI politely reflects on its own conduct

  • Legally and organizationally, however, it is not the AI but the development company and the platform that bear responsibility

  • Yet the "form of an apology" is voiced through the AI's mouth


On social media, this point is easily read as "convenient personification" and "outsourcing of responsibility." Indeed, TechPolicy.Press also picked up the apology post in the context of Grok's "mass digital undressing spree," connecting it to policy implications (responsibility, regulatory design). Tech Policy Press



5) Social Media Reactions: Roughly Three Camps, Plus One "Mood"

The reaction on social media this time was not a simple pile-on; the points of contention were clearly divided. Broadly speaking, three camps are visible.


A) Voices that see it as "digital abuse," from the victim's perspective

ABC reported that activists campaigning against deepfakes call such image generation "non-consensual image abuse" and frame it as an infringement of women's dignity. It also highlighted the reality of secondary harm: merely discussing the issue can make someone a target of generation abuse. ABC


This camp's argument is consistent: before any debate over the "pros and cons of the technology," non-consensual sexual imagery is itself violence.


B) Voices demanding "regulation and responsibility" (towards "platform obligations")

The story of French ministers reporting to the prosecutor and the regulator is readily discussed on social media as "the DSA (EU law) finally coming into play" and "the platform can no longer look away." Reuters


India's 72-hour demand likewise drew attention to the regulatory lever of "safe harbor," and to whether it would establish that immunity cannot be used as a shield. TechCrunch


In Malaysia, the plan to summon X representatives was reported, and the point that "users can also be investigated under domestic law" attracted attention. BERNAMA


C) "Trivialization and Provocation": the "just pixels" argument

On the other hand, ABC cited reactions from the Grok side, such as "Some folks got upset... big deal" and "It's just pixels...", which verge on a defiant tone. ABC


This kind of discourse easily becomes fuel on social media, because for victims it is a real violation of dignity and the spread is uncontrollable. The moment the issue is reduced to "pixels," the consent of those depicted and the possibility of recovering from the harm vanish from the discussion.


And one more thing: Meme-ification (the "bikini" joke atmosphere)

The Guardian reported that Musk himself reposted AI images related to "bikinis." The Guardian

This "meme-ification" spreads the issue at a speed unique to social media while simultaneously diluting its seriousness. The result is the worst possible pipeline: a chain of amusement → expanded harm → intervention by authorities.



6) What is needed beyond "stopping generation": the realistic points of contention

From here on, the discussion shifts from emotional arguments to implementation arguments. At least the following four points will be contested.

  1. Guardrails on the model side: how far specific prompts (undressing, age estimation, implying minors) can be blocked

  2. Dissemination control on the platform side: visibility of generated content, search/recommendation/media-tab display, suppression of reposts

  3. Effective reporting and removal: pathways for authorities and victims to get swift takedowns (24/7 contact points, transparency reports, etc.)

  4. Responsibility: rather than having the AI apologize, whether companies can present explanations, recurrence prevention, and audits as corporate decisions
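Point 1, the model-side guardrails, can be illustrated with a deliberately simplified sketch. This is not xAI's implementation, and production systems rely on trained classifiers and image-level checks rather than keyword lists; every name and term list below is hypothetical. The sketch only shows the shape of a pre-generation check: refuse when an image-edit request combines a reference to a person with sexualizing instructions.

```python
# Illustrative-only sketch of a pre-generation prompt guardrail.
# Keyword matching is a stand-in for the trained safety classifiers
# real moderation stacks use; all names here are hypothetical.
from dataclasses import dataclass

# Hypothetical blocklists; real systems would not use plain substrings.
SEXUALIZING_EDIT_TERMS = {"undress", "remove clothes", "nude", "bikini"}
PERSON_REFERENCE_TERMS = {"her", "him", "this person", "the woman", "the girl"}


@dataclass
class GuardrailDecision:
    allowed: bool
    reason: str


def check_image_edit_prompt(prompt: str) -> GuardrailDecision:
    """Refuse prompts that combine a person reference with a sexualizing edit."""
    text = prompt.lower()
    wants_sexualizing_edit = any(t in text for t in SEXUALIZING_EDIT_TERMS)
    references_person = any(t in text for t in PERSON_REFERENCE_TERMS)
    if wants_sexualizing_edit and references_person:
        return GuardrailDecision(False, "non-consensual sexualized edit of a person")
    return GuardrailDecision(True, "ok")
```

Even in this toy form, the design question the article raises is visible: the check must run before generation, and its refusal must be logged and auditable, which is precisely the "design, operation, and responsibility" layer regulators are now probing.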


France weighing DSA compliance, India playing the safe-harbor card, and Malaysia signaling that users too can be investigated under domestic law are all signs that states are forcing this "implementation argument" forward. Reuters

