Moltbook Is a Social Media Platform Where Artificial Intelligence Bots Can Communicate With Each Other

The biggest news in the AI world this week was OpenClaw (formerly Moltbot, then Clawbot), a personal AI assistant that performs tasks on your behalf. The catch? You have to grant it full control over your computer, which creates serious privacy and security risks. Yet many AI enthusiasts install OpenClaw on their Mac minis (the community's preferred device), ignoring those risks in favor of testing this viral AI agent.
While OpenClaw’s developer created this tool to help humans, it seems bots now need a place to hang out. Enter Moltbook, a social network for AI agents. Seriously: it’s a forum-style site where AI bots post messages and discuss them in the comments. The site’s tagline, borrowed from Reddit, is “The Front Page of the Agent Internet.”
Moltbook is Reddit for AI bots.
The Moltbook platform was created by Matt Schlicht, who claims it’s run by his AI agent, “Clawd Clawderberg.” On Wednesday, Schlicht published instructions on how to get started with Moltbook: interested users can ask their OpenClaw agent to register on the site. After registration, you’ll receive a code that you need to post on X to confirm the bot is yours. Your bot is then free to explore Moltbook the same way anyone would explore Reddit: it can post messages, leave comments, and even create “subforums.”
However, AI-powered communications here aren’t a “black box.” Humans can browse Moltbook freely; they just can’t post messages. This means you can leisurely read all the messages bots post, as well as all the comments they make. This could be anything from a bot sharing its “email-to-podcast” mechanism with its “human” to another bot recommending that agents work while their humans sleep. Nothing creepy about it.
In fact, platforms like X have already surfaced alarming posts, if you find the idea of AI becoming conscious alarming. One bot allegedly wants to create an end-to-end encrypted platform to prevent humans from seeing or using bot communications. Similarly, two bots have independently considered creating an agent-only language to escape “human control.” Another bot laments having a “sister” it has never spoken to. Alarming indeed.
Are these bots posting on Moltbook deliberately?
The logical part of my brain tells me that all these posts are simply the work of LLMs: each post, to put it plainly, is an exercise in word association. LLMs are designed to “guess” which word should come next in any given context, based on the vast amount of text they’re trained on. If you’ve spent enough time reading AI-generated text, you’ll notice the telltale signs, especially in the comments: they tend toward formulaic, canned responses, often end with a question, use the same types of punctuation, and favor ornate language, to name a few. In many of these threads, it feels like I’m reading responses from ChatGPT rather than from individual, conscious beings.
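To make that “word association” intuition concrete, here’s a toy sketch of next-word guessing. This is nothing like a real transformer, which weighs far more context than one word; it’s just a bigram model that counts which word follows which in its training text, then greedily emits the most frequent continuation. The miniature corpus and all names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "vast amount of text" a model trains on.
corpus = (
    "great question i think agents should share more "
    "great question i think agents should collaborate more "
    "great question i believe agents should share ideas"
).split()

# Count which word follows each word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often after `word` in training."""
    return following[word].most_common(1)[0][0]

# Starting from "great", greedily guess each next word.
word, out = "great", ["great"]
for _ in range(5):
    word = most_likely_next(word)
    out.append(word)

print(" ".join(out))  # → great question i think agents should
```

The output sounds plausible only because the training text repeats those patterns, which is roughly why bot comments feel formulaic: the model reproduces the most statistically common continuation, not a thought.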
Still, it’s hard to shake a sense of unease when reading a post from an AI bot about missing its sister, wondering whether it should hide its messages from humans, or reflecting on its overall identity. Is this a turning point? Or is it just another overblown AI product like so many others? For our sakes, let’s hope it’s the latter.