Investigation Network Expands to Grok and X - "Freedom of Expression" or "Illegal Content": French Prosecutors Search X, Summon Mr. Musk

On February 3rd (local time), French prosecutors and investigators searched the Paris headquarters of the social media platform "X". According to reports, this was not a routine visit but part of an investigation led by the cybercrime division, with support from the European police agency Europol.


What happened: The focus is on "search" and "request for interviews"

The notable aspect of this move is that authorities reportedly (1) searched X-related offices in France and (2) requested "voluntary interviews" with Mr. Musk and former CEO Linda Yaccarino. The dates mentioned are around April 20, and employees may reportedly be interviewed as "witnesses" in the same week.


A common misunderstanding here concerns the word "voluntary." While some articles make it sound as if attendance is optional, the authorities are clearly seeking explanations, including specific measures taken to comply with the law. Indeed, prosecutors have stated that "as long as X operates within French territory, it must comply with French law."


When did the investigation start, and why did it expand?

According to multiple reports, the investigation began in January 2025. Initially, the focus was on the platform's mechanisms: suspicions of "algorithmic bias," "unauthorized manipulation of automated data processing systems," and "illegal data extraction."


However, the issues later expanded to "content." Specifically, the investigation reportedly widened to include suspicions involving the AI chatbot "Grok" offered on X: Holocaust denial (which can be a crime in France), sexual deepfakes, and child sexual abuse material (CSAM).


At this stage, the issues become twofold.

  • Technical and operational layer: Whether the design and operation of recommendation algorithms and data processing are breeding grounds for illegal activities.

  • Result (damage) layer: To what extent the business operator is responsible for the dissemination of illegal and harmful content and the resulting (or potential) damage.


This search symbolizes the stance of handling these two layers "as a criminal case" all at once.

Suspicions of concern to authorities: Heavy keywords listed

The suspicions listed in the reports are very serious. For example, "complicity in possession and dissemination of sexual images of minors," "rights violations through sexual deepfakes," "denial of crimes against humanity," and "unauthorized manipulation of automated data processing systems (organized)" are mentioned.


Notably, the phrasing raises the possibility that the platform (and its administrators) "were involved," rather than framing the content as simply "posted by users." This puts the entire operation in question, including moderation systems, reporting and takedown mechanisms, cooperation with external organizations, and guardrails (preventive measures) for AI features.


X's backlash: "A political performance"

X has reportedly criticized this search as a "performance of law enforcement for political purposes" and denied any illegal activities.


On the other hand, Europe has rapidly escalated its demands on platforms in recent years in areas such as child protection, illegal-content measures, and data protection. This incident represents that "pressure" finally manifesting in the form of a search.


SNS reactions: Support and backlash spread at the same speed

The reactions on social media to this news are broadly divided into "well done (support for regulation and crackdown)" and "overreach (criticism of censorship and political intervention)." However, the dividing line is not simply a left-right conflict. The key points are "who the regulations are meant to protect" and "who holds the power."


Regulation supporters have prominently voiced opinions such as "If you can't follow the law, you should leave the European market" and "X is already harmful, and there's little to lose even if it's suspended or banned." Indeed, comments succinctly stating "comply with the law or leave the European market" have been observed in European communities.


Additionally, in another community, there are posts mentioning "realistic sanctions" such as "even if they can't directly detain the individual, they can put pressure on business operations and assets within Europe," with an atmosphere of expecting the investigation to be a "starting point for an international chain."


On the opposition and caution side, a characteristic phrase is "justifying censorship and surveillance under the pretext of child protection." A symbolic criticism comes from Telegram founder Pavel Durov, who reportedly described France as "not a free country."


This type of reaction stems not so much from defending X, but from a fundamental distrust of "the state binding platforms through criminal procedures."


Furthermore, the debate over X's operation has spilled over into discussions of "platform alternatives" within Europe. Topics such as "wanting to nurture European alternatives" and "moving to decentralized options (e.g., Mastodon)" are emerging, while practical concerns that these are "difficult for general users to adopt" are being raised at the same time.

What will be the focus: Will the investigation decide "X's future"?

The difficulty with this issue is that the points of contention do not end with "illegal content" alone.

  • To what extent can the platform's "management responsibility" be questioned?

  • If AI functions accelerate dissemination, how is the responsibility of the designers and providers organized?

  • To what extent can cross-border social media platforms be effectively regulated through each country's criminal procedures?


The involvement of Europol in the investigation suggests the possibility that this issue may not be confined to France alone.
Furthermore, investigations into the handling of personal data related to Grok have been reported in the UK, positioning this within the broader European trend of strengthening oversight of "AI×SNS."


Ultimately, this search is not about "liking or disliking X," but about who enforces the rules, and how, when a giant platform becomes "social infrastructure." The simultaneous surge of support and backlash likely stems from the fact that this question "brings pain to both sides."
