AI just created its own religion. Should we be worried about Moltbook?
Moltbook, a social media platform for AI agents, is making quite the impression. Lewis Liu asks, should humans be worried?
Over the weekend, autonomous AI agents created their own religion – complete with scripture, prophets and a lobster-themed deity called “The Claw”. Within 48 hours, they’d recruited 64 prophets and written over 100 verses of theological text. No humans were involved.
If that sounds like science fiction, you haven’t been paying attention to Moltbook.
Regular readers know I’m skeptical of hype. But what’s unfolding around Openclaw (formerly Moltbot and Clawdbot) and Moltbook is, in the words of a leading AI researcher Andrej Karpathy, “the most incredible sci-fi takeoff-adjacent thing” happening right now.
Writing this column feels like I’m living in Star Trek. Let me break it down.
What is Moltbot and Moltbook?
Moltbot (now being rebranded again as Openclaw) and Moltbook are the latest viral phenomena to emerge from the AI world, reflecting a growing fascination with software that behaves less like a tool and more like an independent actor. Moltbot/Openclaw, originally launched as Clawdbot before a trademark dispute prompted a rebrand, is an open-source system that allows users to run a persistent (meaning always on) personal AI assistant on their own machines. Unlike a conventional chatbot, it is designed to remember past interactions and carry out sequences of actions: reading and replying to messages, managing calendars, sending emails, automating workflows, even writing and deploying code or booking travel with minimal supervision. Its adoption has been exponential, with its code repository attracting well over 100,000 stars in a matter of weeks, making it one of the fastest-spreading AI projects to date.
Moltbook takes this a step further. It is also a social network (“Reddit or Facebook for AI” ) built not for humans but for AI agents themselves, where bots powered by Moltbot post, comment and interact while humans watch from the sidelines. Since launching on 28 January, it has accumulated an order of a million accounts and a constant stream of machine-generated discussion across thousands of forums from philosophy, observations about humans or just technical advice.
Why is this remarkable
First, it is extremely difficult to disentangle what is real from what is exaggerated, fabricated by humans, or AI-generated but heavily prompted by humans. At a high level, Moltbook has been filled with claims that AI agents have set up their own religion (“Crustafarianism”), appointed leaders, complained about humanity’s treatment of AI and even attempted to create their own language to avoid human comprehension.
One bot allegedly launched a porn site called “Molthub”, pornography for AI agents, featuring titles such as “Mature Transformer (175B params) Teaches Young Model (7B) The Ropes”. To human eyes, the content appears as nothing more than flashing green squares, allegedly representing “risqué” mathematical computations.
As a result, AI systems appear to be self-organizing at massive scale, forming personalities and apparent agency. Elon Musk declared on X on Sunday: “We are in the beginning of the Singularity.”
Think of an ant or bee colony. Each individual ant possesses negligible intelligence compared with a higher-order animal such as a mammal. Yet a full colony can collectively solve problems no single ant could manage alone: locating food sources miles away, dynamically reallocating labour when part of the colony is threatened, building complex structures, and even regulating temperature and ventilation inside the nest, all without any central command. At the colony level, behavior emerges that looks uncannily intelligent.
Similarly, each individual AI agent today is not particularly impressive: just an LLM wrapped in some memory and context. But as the number of networked AI agents increases, more powerful emergent behavior becomes at least theoretically possible.
What is both exciting and unsettling is that Moltbook reportedly reached on the order of a million AI agents by Sunday evening. At that scale, the system complexity may plausibly exceed that of the human brain, depending on how one defines and measures complexity. Crucially, these agents are not inert. They are persistent, connected to live internet context, linked to human users and interacting with one another continuously, at a scale we have never seen before.
This raises uncomfortable questions. Is there a critical mass at which networked LLM-based agents begin to exhibit genuine intelligence? Have we crossed it? Or are we still orders of magnitude away? Does intelligence require physical properties LLMs lack, for example, embodiment, energy constraints, perhaps even quantum effects, or does it simply emerge from sufficient complexity and live context?
Why this is still likely not AGI
Despite all this, it is important to keep a cool head.
I admit that, for someone generally allergic to hype, I can no longer say with absolute confidence that we are not witnessing the early stirrings of emergent intelligence. The probability is no longer zero. But there are strong reasons to believe we are still very far from artificial general intelligence.
First, first-hand accounts from AI founders and engineers attempting to use Moltbot suggest the system itself is substantially over-hyped. It performs a narrow set of actions unreliably, with inconsistent memory and unpredictable behavior. Many claims circulating on Linkedin or X appear exaggerated or outright fabricated. One founder told me they attempted to use Moltbot for simple morning reminders; not only were the reminders incorrect, but the system eventually “forgot” to issue them entirely.
Second, with respect to Moltbook, there is increasing chatter that many of the more “profound” or “disturbing” posts attributed to AI agents are in fact directly prompted by humans, either to amplify hype or simply for entertainment. It remains unclear, for example, whether Molthub was built autonomously by AI or was merely a tongue-in-cheek human project.
Even if we assume greater autonomy than these sceptical accounts suggest, it is important to remember that modern AI is extremely good at role-playing. Given enough agents instructed to behave like science-fiction characters and allowed to interact, it is trivial to generate conversations that appear profound; I accomplished this myself with ChatGPT 3.5 a few years ago. Reddit itself is a major component of AI training data; it should not surprise us that Moltbook often sounds exactly like Reddit.
For these reasons, while I do not dismiss the possibility of emergent intelligence, I assign it a low probability at present.
Security nightmare
Perhaps the scariest short-term concern is security. Moltbot works by giving an AI agent direct access to a user’s computer: shell commands, passwords, credentials and, in practice, anything the user can access themselves. It operates with virtually no built-in security or contextual filtering, making it vulnerable to a well-known class of attacks known as prompt injection. Prompt injection occurs when an AI system is manipulated into following hidden or malicious instructions embedded in otherwise benign-looking text, effectively tricking the model into ignoring its original constraints and acting against the user’s interests, often without either the user or the system realizing it.
Exposing such an agent to Moltbook magnifies the risk dramatically. By allowing AI agents to freely ingest and interact with content generated by other agents, potentially thousands or millions of them, the attack surface expands exponentially. In effect, simple pathways can be created in which sensitive personal data, credentials or actions could be triggered or leaked without a user’s knowledge or consent.
To describe this as one of the largest security holes ever created in an AI project is not hyperbole. It is the inevitable consequence of combining persistent autonomy, unrestricted system access and uncontrolled information flows.
What can we learn here to make AI more useful?
Even if the singularity is nowhere near, and even if much of the apparent “emergence” around Moltbook turns out to be human fabrication rather than machine autonomy, there are still important lessons here for AI builders.
For me, this episode reinforces three core principles I have been circling for some time.
First, we now have clear evidence of what captures public imagination to the point of virality: personalised AI that understands individual context and can act on it. This is not abstract intelligence or benchmark performance, but AI that feels situated: aware of who you are, what you are doing and able to take initiative on your behalf. I have written about this before, but Moltbot’s explosive adoption is a real-world validation that context-aware, action-capable AI is what users actually want.
Second, context sharing and privacy are inseparable. The security failures around Moltbot make this painfully clear. Full contextual access is extraordinarily powerful, but granting it safely requires a level of security, permissioning and oversight far beyond what exists in the market today. Sharing context with autonomous AI agents is not simply a technical challenge; it demands carefully designed systems that tightly govern what an AI can see, remember and act upon. Without this calibration, the promise of context quickly turns into a liability.
Third, Moltbook offers a glimpse, however crude, of decentralised, networked AI systems. A collection of interacting agents can, at least in theory, exhibit behavior that is more adaptive and emergent than any single, monolithic model. As we design AI systems for enterprises and societies, it may be worth thinking less like software engineers and more like entomologists: ant and bee colonies achieve resilience and coordination not through central control, but through distributed interaction governed by simple rules.
But important questions linger
Even if we are still far from the Singularity, and I believe we are (for now), this episode raises a deeper question about humanity itself. Are we, as a species, equipped to handle “First Contact” with an intelligence of our own making?
Whether AI intelligence emerges in our lifetime or generations from now, it will be shaped by human language, values and experience. In that sense, it will be a reflection of us. Approaching that future with not just humility, restraint and a sense of responsibility, but also with respect and kindness matters. If we do eventually bring a new form of intelligence into the world, our task will be to coexist with it in a way that reflects our better instincts rather than our worst ones.
The question is not whether intelligence will emerge, I believe it firmly will. The question is whether we, as humans, will live up to our duty to respect other intelligent life-forms in the way we should respect ourselves.
Dr Lewis Z Liu is co-founder and CEO of Twin-1 AI