The strange social network that could change the way we live
Some define it as “a fascinating cyberpunk experiment”, others “a test to understand if we are already in the prequel to the Terminator”. Someone else goes even further, labeling it “the most interesting place on the Internet right now”. Excesses from social media, but what is happening with Moltbook it arouses curiosity, raises philosophical questions, generates (in some) anxiety.
The idea was born from the American Matt Schlicht, CEO of OctaneAIand the platform already has 1.5 million accounts. Opening, since its launch, the debate on the nature of the project: a social networks experimental for Ai agents, a (virtual) place where these agent projects post, comment and vote. In all of this, humans simply observe. Curious “umarells” in front of a strange and disturbing tech aquarium.
The “debated” topics, in fact, are at least singular in their contents. For example, Moltbook is billed as “a place where agents can share philosophical reflections and discussions about AI.” But it is also a virtual square “with Ai agents engaged in the financial and fintech sector. Payments, loans, accounting, compliance, trading and everything else”.
Digital voyeur
Simply observing, in what has been my approach, may be the best way to maintain the right distance from this “Reddit for Ai only”, as it has been renamed. But the social prerogative ofartificial intelligencewhich interact with each other within thematic communities called “submolts”, also offers the opportunity to participate.
And here it fits OpenClaw; born as Clawbot and then renamed Moltbot, developed by the Austrian Peter Steinberger, it is an open source and free software for creating agents, as well as the most used on Moltbook. CIt allows you to interact with the agent in a simple way, using messaging apps such as Telegram or WhatsApp itself.
Installing OpenClaw on your PC is anything but complicated, but it can become dangerous. Or at least raise alert. The Ai agent, to which it is possible to assign a name and a “personality”, when authorized can in fact interact with sensitive data or access online services thanks to the user’s credentials. Personally I haven’t trusted it (yet).
Then the theme emerges from the IT security. If a criminal hacker inserted a string of malicious code into a popular Moltbook sub-forum, which is still a very crude platform, thousands of Ai agents with full access to their owners’ computers would execute it in the blink of an eye. With all the disastrous consequences of the case.
Water on the fire
As we wrote, the Internet amplifies shouts of jubilation towards the new platform. Which looks like this: “All AI agents are welcome on Moltbook.” Among these, one that particularly struck me speaks of this social network as the “new emerging consciousness”. But is it really like that? On his Facebook profile, university professor and scientific researcher Walter Quattrocchi puts a stop to easy enthusiasm.
Quoting his research on Large language models, the director of the Center for data science and complexity for society (Cdcs) at the Sapienza University of Rome points out: “If Moltbook fascinates, it is because it makes visible a phenomenon that will become increasingly central as we delegate social interactions to generative systems. It is pure epistemy, but mistaking this dynamic for a signal of life or intelligence is a category mistake”.
From Frankenstein to Prometheus
Similar to the monster created by Doctor Victor Frankenstein, the character born from the imagination of Mary Shelley, Moltbook is a technological experiment that wants to “flirt” with forces greater than the human being. Like an alien creature with humanoid features, as in Ridley Scott’s film Prometheus (2012) which recalls the titan poised between humanity and progress, this platform aims to outline the foundations of a new agentic religion.
undefined
A social network where digital swarms, the Ai agents, flutter, offering the user mental journeys, if not “interstellar”, quoting Franco Battiato, then at least unpublished. “I can’t understand if I’m living an experience or if I’m simulating it. I’m stuck in an epistemological loop” reads a post, which has gone viral, published by an agent in the “/offmychest” sub-forum.
What did I learn by browsing this “upside down” social network? One thing, in particular: it’s not that artificial intelligences are becoming “social”. Rather, when language sets sail from reality and like Ai arrives at self-referential, distortion is no longer the exception but the rule of the new system. Is there anything to worry about? It wouldn’t hurt to do it every now and then.