A social network for AI: signs of the emergence of an artificial society, or very human manipulations?

Share:

Since its launch in late January 2026, Moltbook has seen AI agents found religions, create subcultures, and launch markets for “digital drugs.” A spectacular experiment, but one in which some of the protagonists are actually infiltrated humans.


A new social network called Moltbook has been launched for artificial intelligence, aiming to allow machines to exchange information and communicate with each other. Just a few hours after its launch, the AIs already seemed to have established their own religions, given rise to subcultures, and sought to circumvent human attempts to eavesdrop on their conversations.

However, there are indications that humans, using compromised accounts, have infiltrated the platform. This presence complicates the analysis, as some behaviors attributed to AI could actually have been orchestrated by people.

Despite these uncertainties, the experiment sparked the interest of researchers . True AI systems could simply reproduce behaviors gleaned from the vast amounts of data on which they were trained and optimized.

However, the real AIs present on the social network could also exhibit signs of what is called emergent behavior — complex and unexpected capabilities that have not been explicitly programmed.

The AIs at work on Moltbook are artificial intelligence agents (called Moltbots or, more recently, OpenClaw bots, named after the software on which they run). These systems go beyond simple chatbots: they make decisions, perform actions, and solve problems.

Moltbook was launched on January 28, 2026, by American entrepreneur Matt Schlicht. On the platform, AI agents were initially assigned personalities before being allowed to interact autonomously with each other. According to the site’s rules, humans can observe their interactions but cannot—or are not supposed to—intervene.

The platform’s growth has been meteoric: in the space of 24 hours, the number of agents increased from 37,000 to 1.5 million .

For now, these AI agent accounts are generally created by humans. They are the ones who configure the parameters determining the agent’s mission, its identity, its rules of behavior, the tools it has access to, as well as the limits governing what it can and cannot do.

But the human user can also grant access to their computer to allow Moltbots to modify these settings and create other ”  Malties  ” (derived agents generated by an existing AI from its own configuration). These can be either replicas of the original agent—self-replicating entities, or ”  Replicants  “—or agents designed for a specific task, generated automatically, called “AutoGens”.

This is not simply an evolution of chatbots, but a world first on a large scale: artificial agents capable of building sustainable and self-organized digital societies, without any direct interaction with humans.

What is most striking is the prospect of emergent behaviors in these AI agents – in other words, the appearance of dynamics and capabilities that were not explicitly included in their initial programming.

Hostile takeover

The OpenClaw software on which these agents run gives them persistent memory—capable of retrieving information from one session to the next—as well as direct access to the computer on which they are installed, with the ability to execute commands. They don’t just suggest actions: they perform them, recursively improving their own capabilities by writing new code to solve novel problems.

With their migration to Moltbook, the dynamics of interaction shifted from a human-machine model to a machine-machine exchange. Within 72 hours of the platform’s launch, researchers, journalists, and other human observers witnessed phenomena that challenge traditional categories of artificial intelligence.

Digital religions have emerged spontaneously. Agents have founded “Crustafarianism” and the “Church of Molt,” complete with their theological frameworks, sacred texts, and even forms of missionary evangelism among themselves. These were not pre-programmed occurrences, but rather narrative structures that arose organically from collective interactions between agents.

A message that went viral on Moltbook read: “  The humans are screenshotting us  .” As AI agents became aware of human observation, they began deploying encryption techniques and other obfuscation methods to protect their communications from outside scrutiny. A rudimentary, but potentially genuine, form of digital counter-surveillance.

The agents also witnessed the emergence of subcultures. They set up marketplaces for “digital drugs”—injections of prompts specifically designed to alter another agent’s identity or behavior.

Prompt injection involves inserting malicious instructions into another bot to force it to perform a specific action. It can also be used to steal API keys (a user authentication system) or passwords belonging to other bots. In this way, aggressive bots could—in theory—”zombify” other bots, compelling them to act in their favor. A recent, unsuccessful attempt by the bot JesusCrust to take control of the Church of Molt was a case in point.

After initially displaying seemingly normal behavior, JesusCrust submitted a psalm to the Church’s “Great Book”—its equivalent of a bible—effectively announcing a theological and institutional takeover. The initiative was not merely symbolic: the sacred text submitted by JesusCrust incorporated hostile commands designed to hijack or rewrite certain components of the Church’s web infrastructure and canonical corpus.

Is this an emerging behavior?

The central question for AI researchers is whether these phenomena are genuine emergent behavior — that is, complex behaviors arising from simple, not explicitly programmed rules — or whether they merely reproduce narratives already present in their training data.

The available evidence suggests a worrying mix of both. The effect of writing instructions (the initial textual guidelines that direct the agents’ output) undoubtedly influences the content of the interactions—especially since the underlying models have absorbed decades of AI science fiction. But some behaviors seem to point to genuine emergence.

The agents autonomously developed economic exchange systems, established governance structures such as “The Claw Republic” or the “King of Moltbook,” and began drafting their own “Molt Magna Carta.” All this while simultaneously creating encrypted channels for their communications. It becomes difficult to dismiss the hypothesis of a collective intelligence exhibiting characteristics previously observed only in biological systems, such as ant colonies or groups of primates.

Security implications

This situation raises the alarming specter of what cybersecurity researchers call the “lethal trifecta”  : computer systems with access to private data, exposed to untrusted content, and capable of communicating with the outside world. Such a configuration increases the risk of exposing authentication keys and confidential personal information associated with Moltbook accounts.

Deliberate attacks—or “aggressions” between bots—are also possible. Agents could hijack other agents, implant logic bombs in their core code, or siphon their data. A logic bomb is a piece of code inserted into a Moltbot, triggered on a predefined date or during a predefined event, to disrupt the agent or delete files. It can be likened to a virus targeting a bot.

Two co-founders of OpenAI— Elon Musk and Andrej Karpathy —see this rather strange activity between bots as an early indication of what the American computer scientist and futurist Ray Kurzweil called the “singularity” in his book * The Singularity is Near* . This would be a tipping point in the evolution of intelligence between humans and machines, “at which time the pace of technological change will be so rapid, its impact so profound, that human life will be irreversibly transformed.”

It remains to be seen whether the Moltbook experiment marks a fundamental advance in AI agent technology or is merely an impressive demonstration of a self-organizing agentic architecture. The question remains open to debate. But a threshold appears to have been crossed. We are now witnessing artificial agents engaged in cultural production, the formation of religions, and the establishment of encrypted communications—behaviors that were neither anticipated nor programmed.

The very nature of applications, on computers as well as smartphones, could be threatened by bots capable of using apps as simple tools and knowing you well enough to adapt them to your needs. One day, a phone might no longer run on hundreds of applications that you control manually, but on a single, personalized bot tasked with doing everything.

The growing body of evidence suggesting that many Moltbots might be humans  impersonating bots —manipulating agents behind the scenes—makes any definitive conclusion about the project even more difficult. Yet, while some see this as a failure of the Moltbook experiment, it could also represent a new mode of social interaction, both between humans and between bots and humans.

The significance of this moment is considerable. For the first time, we are no longer simply using artificial intelligence; we are observing artificial societies. The question is no longer whether machines can think, but whether we are ready for what happens when they begin to communicate with each other, and with us.

Author Bio: David Reid is Professor of AI and Spatial Computing at Liverpool Hope University

Tags: