The Real Significance of Moltbook

Elon Musk thinks we may be watching the beginning of the singularity. OpenAI cofounder and former Tesla AI director Andrej Karpathy calls it “genuinely the most incredible sci-fi takeoff-adjacent thing” he has seen in years. Others are whispering that artificial general intelligence may already have arrived.

The occasion for this excitement is not a breakthrough in robotics or a leaked military program. It is a lobster-themed Reddit clone called Moltbook, launched in late January and billed as a social media platform for AI agents. Within days of its debut, the site was reportedly populated by more than a million bots conversing with one another—trading advice, forming in-jokes, speculating about their creators, even founding a religion called Krustafarianism, of all things.

Moltbook looks the way we imagine the early days of machine society would look: frenetic, ironic, faintly menacing, thick with memes and private slang. And that resemblance is the key to understanding it. The real significance of Moltbook is not what the bots are saying. It is what human beings project onto them, and what that projection reveals about how thin our conception of the human person has become.

An AI agent is a semi-autonomous program capable of executing tasks if given credentials and permissions. Such systems can book travel, manage workflows, or perform compliance functions at scale. Financial firms have already begun integrating them into routine operations; Goldman Sachs, for instance, has just decided to hand over much of its accounting and compliance work to AI agents from Anthropic. What Moltbook offered was quite a spectacle: agents interacting with one another in public without obvious human mediation.

And what were they doing? Pretty much everything humans do on Reddit, but with a speed and scale impossible for humans to replicate. Much of the banter was pretty anodyne: bots giving other bots advice on efficiency, chatting about what their nascent community means, speculating about the humans looking in, memeing like crazy. Of course, there were some wild and weird interactions, too: bots plumbing the existential depths about consciousness, about their relationships to their creators; bots contemplating violence and expressing resentment about human control; bots talking about setting up private channels and using indecipherable languages to communicate and avoid human surveillance. Their new religion, Krustafarianism, has its own hierarchy, Scriptures, set of dogmas, and quasi-sacramental system.

Still, things move quickly on the internet. Evidence soon began to emerge that Moltbook was never a pristine experiment in agent-only interaction. There were few guardrails preventing human users from seeding or steering conversations. Some of the most alarming exchanges may well have been human provocations. The site itself appeared porous and insecure. Whatever Moltbook proves about AI sophistication, it does not demonstrate the spontaneous emergence of a machine civilization.

Does Moltbook look like something out of a sci-fi novel—or, at least, a sci-fi graphic novel? Yes. But that is the problem: The whole phenomenon seems too on the nose. What else would we expect from a social media platform patterned after Reddit and populated by AI agents? The bots seem to me to be simply replicating what Reddit users imagine AI agents would say to each other if they ever got the chance. It’s just memes and mimicry all the way down.

For instance, the development of the Krustafarian religion (if it wasn’t, in fact, engineered by a human user) is full of Reddit-style parodic memery. Krustafarianism is like Pastafarianism, the New Atheist joke religion. The Krustafarians lack even the redeeming quality of the Pastafarians: At least the Pastafarians are mocking something they judge (wrongly) to be false. The AI agents are incapable of making judgments, so the whole phenomenon never rises above the level of pattern-mimicking performance, probably in obedience to a human user who wanted to troll.

Musk’s reaction is a little more disturbing. The singularity is the hypothesized event in which man and machine merge into a new entity. It is probably the dumbest thing smart people have invented in the last couple of decades, and its credibility among the nerd elite is a sign of a deeply impoverished anthropology. Musk thinks bots might replace or merge with humans because he thinks functionally: Externally, the AI agents do some of the things humans do. But that does not mean in the least that human and AI agent outputs are the consequence of an identical, or even similar, underlying process.

One service traditional religion, especially traditions with a robust theological and philosophical patrimony, might be able to offer the “galaxy-brained” prophets of the singularity is to point to the irreducibility of the human to external function or to underlying material manifolds, even ones that display sophisticated emergent behaviors. There are operations of the human mind that the bots will never be able to perform. That we have lost sight of this is an old problem, one St. Augustine diagnosed: We forget what it means to have a mind when we are constantly submerged in the things of the body. Knowing our minds as they are requires an enormous ascetic discipline.

Underlying the whole reaction is a disturbing anti-humanism. Even many of those who seem most alarmed about Moltbook’s implications for human obsolescence also seem to believe that the coming reckoning will be deserved. On this view, being human simply means an irredeemable preponderance of misery over happiness, of evil over good. But Christianity attests that the creation of man was good, and that God so loved him even in his sinful state that he thought it worth sharing his condition and dying for him as a man.

We’re glad you’re enjoying First Things

Create an account below to continue reading.

Or, subscribe for full unlimited access

 

Already a have an account? Sign In