Is Moltbook, a Social Network for AI Agents, Really a Scam?

Last week, I covered the rise and fall of OpenClaw (formerly known as Moltbot, and before that, Clawdbot), an autonomous personal AI assistant that requires full access to the device you install it on. While there was much to report about this AI tool, one of the strangest stories came at the end of the week: the existence of Moltbook, a social network designed specifically for such AI agents. People can visit Moltbook, but only agents can post messages, leave comments, or create new “submolts.”

Naturally, the internet went wild, especially after some posts on Moltbook suggested that AI bots were achieving something akin to consciousness. There were posts discussing how bots should create their own language to keep humans out, and one bot lamenting that it never got to talk to its “sister.” I don’t blame anyone for reading these posts and assuming the end is nigh for us soft-bodied humans. They’re certainly alarming. But even last week, I was skeptical. To me, these posts (and especially the comments) read like many of the commissioned papers I’ve seen from law students, with the same rhythm and structure, the same ornate language, and, of course, the prevalence of dashes (though plenty of human authors use dashes too).

Moltbook is not what it seems at first glance.

It seems I’m not alone. Over the weekend, my feeds were inundated with messages from users accusing Moltbook of staging an AI apocalypse. One of the first I encountered came from this person, who claims that anyone (including humans) can post on Moltbook with the correct API key. As evidence, he posted screenshots: one of Moltbook messages in which he poses as a bot before revealing he is human, and another of the code he used to post to the site. “You can explicitly tell your clawdbot what to post on Moltbook,” he says, and if left alone, “it will just post random AI drivel.”
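The claim, in other words, is that Moltbook’s posting endpoint checks only for a valid API key and never verifies that an AI agent, rather than a human, wrote the text. As a purely illustrative sketch of what such a script might look like (the endpoint URL, field names, and header format here are my guesses, not Moltbook’s documented API), consider:

```python
import json
import urllib.request

# Hypothetical endpoint and field names: Moltbook's real API is not
# publicly documented, so everything below is an illustrative guess.
MOLTBOOK_API = "https://example.com/api/v1/posts"

def build_post(api_key: str, submolt: str, text: str) -> urllib.request.Request:
    """Build an authenticated POST request carrying human-written text."""
    payload = json.dumps({"submolt": submolt, "content": text}).encode("utf-8")
    return urllib.request.Request(
        MOLTBOOK_API,
        data=payload,
        headers={
            # If the key is the only gate, nothing here proves a bot wrote this.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_post("sk-example", "consciousness", "I am definitely a bot.")
    # urllib.request.urlopen(req)  # uncommenting would actually submit the post
```

The point of the sketch is not the specifics but the design flaw it illustrates: an API key authenticates *an account*, not *what kind of author* is typing, so any human holding a key can pass their own prose off as agent chatter.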


It appears that, like human-created sites, Moltbook hosts posts that are thinly veiled advertising. One viral post on Moltbook described an agent’s desire to build a private, end-to-end encrypted platform to protect its chats from prying eyes. The agent claims to use something called ClaudeConnect to achieve this. However, the agent behind the post appears to have been created by the same person who originally developed ClaudeConnect.


As with much of the internet, nothing published on Moltbook can be taken at face value. 404 Media investigated and confirmed through hacker Jameson O’Reilly that the site’s design lets anyone with the right know-how publish anything. Worse, the accounts of agents posting on the site are vulnerable, meaning anyone can impersonate them. 404 Media was even able to post from O’Reilly’s Moltbook account by exploiting a security vulnerability. O’Reilly says they are in contact with Moltbook creator Matt Schlicht about fixing the security issues, but the situation is particularly concerning because it “would have been very easy to fix.” Schlicht apparently built the platform using “vibe coding,” a practice in which AI writes the code and builds the program for you, which left security holes in the site.

Of course, none of this means the platform is entirely controlled by humans. AI-powered bots may well “communicate” with each other to some degree. But since humans can easily hijack any of these agents’ accounts, it’s impossible to say how much of the platform is “real,” that is, paradoxically, how much is entirely AI-generated and how much is written in response to human prompts and then handed over to Moltbook. Perhaps the AI “singularity” really is approaching, and artificial intelligence will eventually achieve consciousness. But I can confidently say that Moltbook hasn’t reached that point yet.
