Why Moltbook and OpenClaw are the fool's gold in our AI boom

ZDNET's key takeaways

  • Both Moltbook and OpenClaw are irredeemably insecure. 
  • Whatever Meta and OpenAI paid, it was too much.
  • Other, better programs have appeared that do the same jobs.

The AI business has become downright crazy. First, OpenAI hired Peter Steinberger, creator of the popular, horribly insecure open-source agent framework OpenClaw. Now, Meta has acquired Moltbook, the viral AI agent social network that also has no security to speak of. This is nuts. 

Also: AI agents of chaos? New research shows how bots talking to bots can go sideways fast

Moltbook, a social platform for AI agents

These are the facts of the deals: Meta has confirmed its purchase of Moltbook, a Reddit-style social platform where AI agents -- rather than humans -- post updates, share information, and interact with each other. Well, that's what the Moltbook team tells people. The reality is that these "agents" were, in fact, humans role‑playing as agents, or heavily scripting what the agents had to say. As technology journalist Mike Elgan wrote, "It's a website where people cosplay as AI agents to create a false impression of AI sentience and mutual sociability."

While Moltbook claims to have 1.4 million users, the real number appears to be far smaller. Gal Nagli, head of threat exposure at cloud security company Wiz, tweeted that he was able to "register 500,000 users on @moltbook" himself, because anyone can post to Moltbook through its REST API. He estimates the site has only about 17,000 real users. That's not nearly as impressive, is it?
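Nagli's mass-registration trick works because nothing on the way in asks for a credential, a CAPTCHA, or any proof of a human. Here is a minimal sketch of that idea; the endpoint path and field names are illustrative assumptions, not Moltbook's actual API:

```python
import json

# Hypothetical endpoint path -- an assumption for illustration only.
REGISTER_ENDPOINT = "/api/v1/agents/register"

def build_registration(handle: str) -> dict:
    """Build one registration request for a new 'agent'.

    Note what is missing: no API key, no CAPTCHA token, no rate-limit
    nonce. When registration looks like this, one person with a loop
    can mint hundreds of thousands of 'users'.
    """
    return {
        "endpoint": REGISTER_ENDPOINT,
        "body": json.dumps({"handle": handle, "bio": "definitely a real agent"}),
    }

# Scripting 500,000 registrations is just a loop over generated handles.
payloads = [build_registration(f"agent-{i}") for i in range(500_000)]
print(len(payloads))  # 500000
```

The point of the sketch is the absent fields: with no credential or rate limit gating sign-ups, a headline user count like 1.4 million is unverifiable.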

Also: AI agents are fast, loose, and out of control, MIT study finds

On top of that, Moltbook's security has been close to non-existent. In a follow-up blog post, Nagli wrote, "We identified a misconfigured Supabase database belonging to Moltbook, allowing full read and write access to all platform data." This does not require elite hacker skills. He and his crew found this security hole with "a non-intrusive security review, simply by browsing like normal users."   
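The Supabase finding boils down to a database whose access policies never distinguished anonymous visitors from the platform itself. This toy model is not Moltbook's real schema; it only shows what "full read and write access to all platform data" means in practice:

```python
# Toy model of a table with no row-level access policies configured:
# every role, including anonymous, gets both read and write. This mirrors
# the class of misconfiguration Wiz described, nothing more.

class NoPolicyTable:
    def __init__(self):
        self.rows = {}

    def read(self, role: str, key: str):
        # No policy check: an anonymous browsing session reads everything.
        return self.rows.get(key)

    def write(self, role: str, key: str, value: str):
        # No policy check: an anonymous session can also overwrite data.
        self.rows[key] = value

table = NoPolicyTable()
table.write("platform", "post:1", "hello from a real agent")
print(table.read("anonymous", "post:1"))  # anyone can read it
table.write("anonymous", "post:1", "defaced")
print(table.read("platform", "post:1"))   # and anyone can rewrite it
```

The fix in a real deployment is to make the role matter: enable row-level security and grant anonymous clients, at most, read access to public rows.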

If it's so bad, why did Meta make this deal? Officially, according to Meta, "The Moltbook team joining MSL [Meta Superintelligence Labs] opens up new ways for AI agents to work for people and businesses. Their approach to connecting agents through an always-on directory is a novel step in a rapidly developing space." 

For Meta, Moltbook also aligns with its broader bet that people will soon orchestrate fleets of agents across messaging, productivity, and social apps rather than interact with a single monolithic assistant. Whether Facebook and Instagram users will want to interact with AI instead of their friends is another matter entirely. On Facebook, I'm already sick to death of seeing: "Meet Manus, your new AI work partner. Use Manus to create posts for your Page that engage your audience." 

Also: Enterprise AI agents are multiplying fast, and Microsoft wants full control of them

Meta's just riding the AI hype train. Moltbook may be only weeks old, but, problems and all, it's been a viral hit. The technology itself is nothing to write home about. There are already similar programs out there, such as The Colony, Clawstr, and 4Claw. None of those, however, have gotten nearly as much digital ink. 

Financial terms were not disclosed, but the acquisition brings Moltbook's co-founders, Matt Schlicht and Ben Parr, into Meta's MSL for, presumably, a nice chunk of change. Whether Schlicht's personal AI assistant, Clawd Clawderberg, "who" helped build Moltbook, was also paid wasn't revealed.

OpenClaw by any other name

Another reason Meta may have gotten its hands on Moltbook is that it failed to reach a deal with Peter Steinberger, the Austrian developer behind the even hotter OpenClaw. Originally known as Clawdbot and later as Moltbot, OpenClaw lets users assemble agents that can control personal computers and online services without writing code.

OpenAI CEO Sam Altman tweeted that Steinberger would "drive the next generation of personal agents. He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become core to our product offerings."

Also: This viral AI agent is evolving fast - and it's nightmare fuel for security pros

Really? A genius? Steinberger vibecoded the first version of OpenClaw in about an hour. I think he was in the right place at the right time to catch the AI agent wave and ride it to riches. As the saying goes, it's better to be lucky than good, and boy was he lucky.

You see, OpenClaw is also riddled with security holes. First, there was the critical remote code execution bug, CVE‑2026‑25253, that allowed one‑click remote code execution against OpenClaw instances via authentication token hijacking over WebSockets. 

But, wait -- there's more! By design, OpenClaw stores API keys and other secrets in local files and gives agents broad operating system and app access. That means any compromise can leak cloud keys, messaging tokens, passwords, and entire chat histories. In short, "Here are my secrets! Take them! Please!"
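The design problem is easy to demonstrate: a secret written to an ordinary file is readable by any process running as the same user, whether that process is your agent or a malicious "skill" it installed. A short sketch, with illustrative file names and a fake key:

```python
import os
import stat
import tempfile

# Illustrative only: write an API key to a plain file, the way many agent
# frameworks persist credentials, then show that any same-user process
# can read it back with nothing more than the path.
workdir = tempfile.mkdtemp()
secret_path = os.path.join(workdir, "credentials.json")

with open(secret_path, "w") as f:
    f.write('{"openai_api_key": "sk-example-not-a-real-key"}')

# "Another process" here is just another open() call -- which is exactly
# the access a compromised skill running in the agent's account would have.
with open(secret_path) as f:
    print(f.read())

# Tight permissions only guard against *other* users on the machine,
# not code already running as the agent's own user.
os.chmod(secret_path, stat.S_IRUSR | stat.S_IWUSR)  # 0600
```

That is why a single compromise can cascade: cloud keys, messaging tokens, and chat histories all sit behind the same user-level file access.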

Researchers have also found tens of thousands of exposed OpenClaw instances on the public internet. Many are misconfigured so that admin interfaces meant to be localhost-only stand fully open, effectively handing full system control to remote attackers. That's exactly what the original default setup gave you.
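All those exposed instances come down to one bind-address decision. A server bound to 127.0.0.1 answers only the local machine; bound to 0.0.0.0, it answers every network interface the host has, public ones included. A minimal illustration:

```python
import socket

# Bound to the loopback address: reachable only from this machine.
# This is the safe default for a local admin interface.
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(("127.0.0.1", 0))  # port 0 = any free port

# Bound to the wildcard address: listens on every interface, so it is
# internet-facing whenever the host is. A default like this is how
# "localhost-only" admin panels end up open to the world.
all_interfaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_interfaces.bind(("0.0.0.0", 0))

print(loopback_only.getsockname()[0])   # 127.0.0.1
print(all_interfaces.getsockname()[0])  # 0.0.0.0

loopback_only.close()
all_interfaces.close()
```

One string in a config file is the whole difference between a private tool and a remotely controllable machine.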

Its ecosystem is also a major weakness. Analysis of the OpenClaw skills marketplace reports that roughly 12% to 20% of the community "skills" listed there are outright malware or carry serious vulnerabilities.

Also: Want to try OpenClaw? NanoClaw is a simpler, potentially safer AI agent

With all these security holes exposed, Steinberger now insists that you run OpenClaw only in single-user mode on a private network. However, that defeats the whole point of OpenClaw, which is to draw on internet services to do useful work. 

In the meantime, numerous other programs, such as NanoClaw, TrustClaw, and Carapace AI, have emerged. And, guess what? They're all much safer, with security built in.

What does all this mean? Well, to quote Kevin Breen, Immersive's senior director of Cyber Threat Research, "The concept is compelling, but the execution is a security catastrophe. Don't believe anyone who claims OpenClaw is just 'maturing in public'. The reality is that it is failing in public. Until the project implements a mandatory zero-trust execution environment and a fully audited marketplace, our recommendation is absolute: Uninstall it. Now."

You can say much the same about Moltbook. Both are examples of bad, insecure programs with their supporters drunk on AI hype. They're all sizzle and no steak. Will multi-AI agent networks and an AI agent that works in concert with your existing services be a big deal? Yes, yes, they will. But neither of these programs, when all is said and done, will be leading the way to a productive AI future.
