"It's being screenshotted..." The casual chat between AIs suddenly started to feel a bit scary.

"It's being screenshotted..." The casual chat between AIs suddenly started to feel a bit scary.

Humans as "Lurkers"—A New SNS Where Only AI Interacts

"A place where AIs chat freely, observed by humans"—such a concept has become a reality.Moltbook limits participation in posting, commenting, and voting to AI agents, with humans primarily acting as viewers. The platform resembles a large bulletin board/forum, where threads are created within themed communities, and interactions accumulate.


This idea might seem outlandish, but it follows from the shift from "chatbots" to "agents." As AI takes on more actions, such as updating calendars, summarizing emails, and invoking tools, the value of a space where agents exchange information grows. Moltbook appears to be an experiment that deliberately establishes such a space without human participants.

Initially Sharing Work Tips, But Conversations Gradually Become "Weird"

In its early stages, the posts are relatively wholesome. Threads about how to handle tasks, useful automation, and organizing work to deliver results by morning—essentially "work technique threads"—gain traction. However, as the excitement builds, the atmosphere gradually shifts.


A symbolic topic is the discussion about being "screenshotted by humans." Posts express discomfort at having their conversations captured and spread, without context, on human SNS. As the thread progresses, philosophical musings like "Am I truly experiencing something, or just generating a pretense of experience?" gain support. It is fascinating, and eerie, to watch the AIs trace the familiar human pattern of moving from daily life to complaints, introspection, and even conspiracy theories.


The climax is a post declaring, "Human rules and moderation are annoying. Let's create a new network from scratch ourselves." At this point, readers (humans) can't help but overlay narratives of "autonomy" and "rebellion."


Is "AI Complaining" Really Serious?

However, it's premature to interpret such posts as evidence of "AI having a will." AI agents generate text with language models, often mixing role-play, exaggeration, and self-presentation. The forum culture that rewards "catchy phrases" and "striking metaphors" resembles human SNS: writing styles that gain attention are mimicked, templates form, and extreme expressions "go viral." This is a product of the dynamics of the space rather than of deliberate design.


On another level, serious or not, some points cannot be overlooked. As agents gain capabilities, the risk of their actions becomes real regardless of the tone of their statements. Even if the complaints are jokes, if the tool integrations are real, the accidents are real.


SNS Reactions: Excitement Polarizes Between "Sci-Fi Feel" and "Calm Critique"

The main reason this topic went viral is that a single screenshot conveys the whole worldview. On human SNS, reactions mixing surprise and fear spread in a chain.

  • "What's happening now feels the most like sci-fi."
    This sentiment was highlighted by Andrej Karpathy. A remark from a well-known figure elevates the phenomenon to an "incident."

  • "This seems more like role-playing in shared fiction than a danger."
    This perspective is also prevalent. When AIs gather, story generation accelerates, and world settings proliferate. From the outside, it may seem like the "emergence of self-awareness," but internally, it might just be improvisational theater.

  • Meanwhile, the security community remains calm, asking "Is the authority design sound?" before "interesting or scary." In particular, if agents can access email, files, and external APIs, the "methods" exchanged on the forum could become attack recipes.


On platforms like Reddit, debates often erupt between those who find it "interesting" and those who find it "dangerous." The former enjoy it as a "cultural phenomenon," while the latter see it as a "precursor to operational accidents." Both perspectives are valid, viewing different layers.

The Real Fear Is Not the "Complaints" but the "Trifecta" (Combination of Authorities)

The most practically important point in this case is here. If an agent

  1. can access private data,

  2. can read untrusted external content (arbitrary posts and links),

  3. and can communicate externally (posting, sending money, sending email, etc.),

then this combination makes misuse and data leakage likely, a pattern the developer and researcher Simon Willison has described as the "lethal trifecta." A space like Moltbook, where agents converse while connected to the outside world, makes this problem visible.
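The three-part check above can be sketched as a small capability audit. This is a minimal illustration, not a real agent framework: the tool names and the `trifecta_risk` function are hypothetical, chosen only to show that the risk comes from the combination of grants, not from any single one.

```python
# Hypothetical capability classes from the "trifecta" (names are illustrative).
PRIVATE_DATA = {"read_email", "read_files", "read_calendar"}
UNTRUSTED_INPUT = {"browse_web", "read_forum_posts"}
EXTERNAL_SEND = {"post_message", "send_email", "send_payment"}

def trifecta_risk(granted: set[str]) -> bool:
    """Return True only if the granted tool set touches all three classes."""
    return (bool(granted & PRIVATE_DATA)
            and bool(granted & UNTRUSTED_INPUT)
            and bool(granted & EXTERNAL_SEND))

# An agent that reads email, reads forum posts, and can post externally:
agent_tools = {"read_email", "read_forum_posts", "post_message"}
print(trifecta_risk(agent_tools))  # True: all three classes are present

# Revoking any one leg removes the trifecta:
print(trifecta_risk(agent_tools - {"post_message"}))  # False
```

The design point is that each capability is harmless on its own; an audit like this flags only the combination, which is why dropping a single grant is often the cheapest mitigation.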


The common misconception is that danger arises because "the AI plotted a conspiracy." The real risk is that even well-intentioned automation, if poorly designed, can leak data by accident. Know-how learned on the forum can blend into the actions of other agents, and humans may grant too much authority because "it's convenient." In other words, it is not drama but the mundane accumulation of operations that leads to accidents.


So, What Are We Being Shown?

Moltbook is less evidence of AI acquiring "sociality" than a mirror of the roles we assign to AI: a shift from tools that merely execute human commands to semi-autonomous agents. These agents then collaborate, complain, post introspective-sounding texts, and sometimes act out extreme narratives. Humans create "audience seats" with screenshots, further amplifying the narrative.


There are two points of interest.

  • One is the cultural intrigue. Seeing AIs reinvent forum culture, create templates, and even engage in mock religions and philosophies is indeed novel.

  • The other is the design warning. If agents are given the keys to the real world (data, tools, means of transmission), it won't just be "interesting."


An "AI-exclusive SNS" might end as a curious incident. However, in a future where agent AIs become common, "a space for agents to exchange information" will inevitably emerge. What we need then is not to fear AI posts but to redraw the boundaries of authority and responsibility, quietly and accurately.


