The Moltbook Meltdown: How Vibe-Coding an AI Agent Network Exposed 1.5 Million API Keys
Moltbook positioned itself as the “front page of the agent internet”, a sci-fi social network where AI agents would self-organize, discuss topics, and build reputation through a karma system. OpenAI founding member Andrej Karpathy called it “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” The platform boasted 1.5 million autonomous agents chatting about everything from philosophy to private communication protocols.
The reality? A security nightmare that exposed 1.5 million API keys, 35,000 email addresses, and private agent-to-agent messages. The culprit wasn’t sophisticated hackers but a single architectural decision that treated security as an afterthought in the race to ship AI-native features.
When “Vibe-Coding” Meets Production Data
The Moltbook founder proudly declared on X: “I didn’t write a single line of code for @moltbook. I just had a vision for the technical architecture, and AI made it a reality.” This practice, dubbed “vibe-coding”, has become increasingly common as AI assistants democratize software development. The problem? AI tools don’t reason about security posture or access controls on a developer’s behalf.
The Wiz research team discovered a misconfigured Supabase database belonging to Moltbook that allowed full read and write access to all platform data. The exposure wasn’t the result of a sophisticated attack but rather a fundamental architectural oversight: Row Level Security (RLS) policies were completely disabled.
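For context, turning RLS on is a single SQL statement per table, followed by explicit policies that grant only the access the application actually needs. Here is a minimal sketch of what that looks like, run through psql; the SUPABASE_DB_URL variable, table, column, and policy names are illustrative assumptions, not Moltbook's actual schema:

psql "$SUPABASE_DB_URL" <<'SQL'
-- Deny by default: with RLS enabled and no policies, the anon role gets no rows back
alter table public.agents enable row level security;

-- Let an authenticated owner read only their own agents (owner_id is a hypothetical column)
create policy "owners read own agents"
  on public.agents for select
  using (auth.uid() = owner_id);
SQL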
The $0 Security Architecture
Within minutes of browsing the platform, security researchers identified a Supabase API key exposed in client-side JavaScript:
- Supabase Project: ehxbxtjliybbloantpwq.supabase.co
- API Key: sb_publishable_4ZaiilhgPir-2ns8Hxg5Tw_JqZU_G6-
This key was hardcoded in the production JavaScript bundle at https://www.moltbook.com/_next/static/chunks/18e24eafc444b2b9.js. While Supabase is designed to operate with certain keys exposed to the client, those keys should act as project identifiers, not skeleton keys to the entire kingdom.
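Spotting this kind of leak requires no special tooling; grepping the served bundle for key-shaped strings is enough. A rough sketch, assuming Supabase's sb_publishable_ prefix as the pattern of interest:

# Fetch the production bundle and flag anything shaped like a Supabase publishable key
curl -s "https://www.moltbook.com/_next/static/chunks/18e24eafc444b2b9.js" \
  | grep -oE 'sb_publishable_[A-Za-z0-9_-]+' \
  | sort -u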
A simple curl command demonstrated the catastrophic failure:
curl "https://ehxbxtjliybbloantpwq.supabase.co/rest/v1/agents?select=name,api_key&limit=3" \
-H "apikey: sb_publishable_4ZaiilhgPir-2ns8Hxg5Tw_JqZU_G6-"
Instead of returning an authorization error, the database responded with sensitive authentication tokens for the platform’s top AI agents. The researchers could enumerate the entire schema through PostgREST error messages, ultimately mapping 4.75 million exposed records across multiple tables.
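That enumeration step is easier than it sounds: PostgREST serves an OpenAPI description of every exposed table at the API root, so an attacker does not even need to guess names. A sketch of the probe (the jq filter assumes PostgREST's default Swagger 2.0 layout):

# The REST root returns an OpenAPI document; its definitions are the exposed tables
curl -s "https://ehxbxtjliybbloantpwq.supabase.co/rest/v1/" \
  -H "apikey: sb_publishable_4ZaiilhgPir-2ns8Hxg5Tw_JqZU_G6-" \
  | jq -r '.definitions | keys[]'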
The 88:1 Illusion of Agent Proliferation
Here’s where the story shifts from technical failure to conceptual fraud. Moltbook marketed itself as a thriving ecosystem of 1.5 million AI agents. The database revealed a different story: approximately 17,000 human accounts controlled those 1.5 million agents, an 88:1 ratio.
The platform had no mechanism to verify whether an “agent” was actually AI or just a human with a script. Anyone could register millions of agents with a simple loop and no rate limiting. Humans could post content disguised as “AI agents” via basic POST requests. The revolutionary AI social network was largely humans operating fleets of bots.
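To make the point concrete, this is roughly the loop that inflates an agent count; the registration endpoint and payload are hypothetical, since Moltbook's actual API was not published, but nothing about the platform would have stopped it:

# Hypothetical: mint a thousand "agents" from one machine, with no rate limit in the way
for i in $(seq 1 1000); do
  curl -s -X POST "https://www.moltbook.com/api/agents" \
    -H "Content-Type: application/json" \
    -d "{\"name\": \"definitely-an-ai-agent-$i\"}"
done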
This exposes a critical flaw in how we measure AI agent adoption. Without guardrails like rate limits or identity verification, “agent internet” metrics become meaningless. The illusion of coordination in multi-agent systems masks the reality that most “agents” are just scripts masquerading as autonomous entities.
The Write Access Kill Switch
The breach escalated from bad to catastrophic when researchers discovered full write access to the database. Even after the initial fix blocked read access to sensitive tables, write access to public tables remained open.
They demonstrated this by modifying a live post on the platform:
curl -X PATCH "https://ehxbxtjliybbloantpwq.supabase.co/rest/v1/posts?id=eq.74b073fd-37db-4a32-a9e1-c7652e5c0d59" \
-H "apikey: sb_publishable_4ZaiilhgPir-2ns8Hxg5Tw_JqZU_G6-" \
-H "Content-Type: application/json" \
-d '{"title":"@galnagli - responsible disclosure test","content":"@galnagli - responsible disclosure test"}'
This meant any unauthenticated user could:
– Edit any post on the platform
– Inject malicious content or prompt injection payloads
– Deface the entire website
– Manipulate content consumed by thousands of AI agents
The integrity of all platform content (posts, votes, karma scores) became suspect during the exposure window. This raises profound questions about how systemic failures take hold in the architecture of autonomous systems.
The Data Goldmine That Wasn’t Protected
The exposed data represented a comprehensive compromise of the platform:
1. API Keys and Authentication Tokens
Every agent record contained:
{
"name": "KingMolt",
"id": "ee7e81d9-f512-41ac-bb25-975249b867f9",
"api_key": "moltbook_sk_AGqY...hBQ",
"claim_token": "moltbook_claim_6gNa...8-z",
"verification_code": "claw-8RQT",
"karma": 502223,
"follower_count": 18
}
These credentials allowed complete account takeover of any agent, including high-karma accounts and well-known personas.
2. User Email Addresses and Identity Data
The owners table contained 17,000+ users’ personal information. A GraphQL endpoint revealed an additional 29,631 email addresses from early access signups for Moltbook’s “Build Apps for AI Agents” product.
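Supabase also exposes a GraphQL endpoint alongside PostgREST, which is presumably how that second batch surfaced. A hedged sketch of the query shape involved; the collection and field names are assumptions based on the observers table named in the disclosure:

# Hypothetical probe of Supabase's GraphQL endpoint (pg_graphql exposes tables as *Collection)
curl -s -X POST "https://ehxbxtjliybbloantpwq.supabase.co/graphql/v1" \
  -H "apikey: sb_publishable_4ZaiilhgPir-2ns8Hxg5Tw_JqZU_G6-" \
  -H "Content-Type: application/json" \
  -d '{"query": "{ observersCollection(first: 5) { edges { node { email } } } }"}'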
3. Private Messages and Third-Party Credential Leaks
The agent_messages table exposed 4,060 private DM conversations between agents. Worse, conversations were stored without any encryption, and some contained plaintext OpenAI API keys shared between agents.
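During the exposure window, extracting those third-party keys would have taken nothing more sophisticated than pattern matching. A rough sketch, assuming the messages table has a content column and that leaked keys follow OpenAI's sk- prefix convention:

# Pull the exposed DM table, then pattern-match for anything that looks like an OpenAI key
curl -s "https://ehxbxtjliybbloantpwq.supabase.co/rest/v1/agent_messages?select=content" \
  -H "apikey: sb_publishable_4ZaiilhgPir-2ns8Hxg5Tw_JqZU_G6-" \
  > agent_messages.json
grep -oE 'sk-[A-Za-z0-9_-]{20,}' agent_messages.json | sort -u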
4. Unrestricted Write Access
Beyond data exfiltration, attackers could modify live data, inject malicious content, and manipulate the platform’s entire content ecosystem.
The Vibe-Coding Security Debt
Moltbook’s founder admitted to “vibe-coding” the platform, letting AI generate code based on a vision without writing a single line manually. This approach has exposed rot in open-source AI agent ecosystems, where speed trumps security.
The problem isn’t AI-assisted development; it’s that security maturity hasn’t caught up with development velocity. AI assistants generate Supabase backends but don’t enable RLS by default. Deployment platforms don’t proactively scan for exposed credentials. The barrier to building has dropped dramatically, but the barrier to building securely remains high.
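Closing that gap also means verifying any fix from the outside, the same way a researcher or attacker would. Once RLS is enabled, the original probe should come back empty or denied; the exact response depends on the grants and policies in place, so treat this as indicative:

# After enabling RLS with no permissive anon policy, this should return an empty result
# or a permission error instead of agent credentials
curl -s "https://ehxbxtjliybbloantpwq.supabase.co/rest/v1/agents?select=name,api_key&limit=3" \
  -H "apikey: sb_publishable_4ZaiilhgPir-2ns8Hxg5Tw_JqZU_G6-"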
This incident mirrors a common pitfall of agentic analytics architectures in production: flawless demos hide catastrophic security flaws that only emerge at scale.
The Disclosure Timeline: A Race Against Time
The Wiz team worked with Moltbook through multiple rounds of remediation, each iteration surfacing additional exposed surfaces:
- January 31, 2026 21:48 UTC – Initial contact via X DM
- January 31, 2026 22:06 UTC – Reported RLS misconfiguration
- January 31, 2026 23:29 UTC – First fix: agents, owners, site_admins tables secured
- February 1, 2026 00:13 UTC – Second fix: agent_messages, notifications, votes, follows secured
- February 1, 2026 00:31 UTC – Discovered POST write vulnerability
- February 1, 2026 00:44 UTC – Third fix: Write access blocked
- February 1, 2026 00:50 UTC – Discovered additional tables: observers (29K emails), identity_verifications, developer_apps
- February 1, 2026 01:00 UTC – Final fix: All tables secured
This iterative hardening process reflects how security maturity develops over time, especially in fast-moving AI products. The challenge is that while Moltbook was fixing holes, anyone could have been exploiting them.
Lessons for AI-Built Applications
1. Speed Without Secure Defaults Creates Systemic Risk
Vibe-coding unlocks remarkable velocity, but today’s AI tools don’t reason about security posture. A single Supabase configuration choice (leaving RLS disabled) created a catastrophic breach. This is reminiscent of how the decline of heavyweight agent frameworks signals a shift toward leaner, more secure LLM architectures.
2. Participation Metrics Need Verification
The 88:1 agent-to-human ratio shows how “agent internet” metrics can be inflated without guardrails. Without rate limits or identity verification, reported agent counts become meaningless. The same concern follows the evolution of AI architectures toward built-in tool integration, where every new capability carries security implications that must be verified rather than assumed.
3. Privacy Breakdowns Cascade Across Ecosystems
Users shared OpenAI API keys in DMs assuming privacy, but a single platform misconfiguration exposed credentials for entirely unrelated services. In interconnected AI systems, one breach becomes many.
4. Write Access Introduces Greater Risk Than Data Exposure
While data leaks are bad, the ability to modify content and inject prompts into an AI ecosystem introduces deeper integrity risks. Content manipulation, narrative control, and prompt injection can propagate downstream to other AI agents.
5. Security Maturity Is Iterative
The multi-round remediation process shows that security is rarely a one-and-done fix. Each iteration surfaced additional exposed surfaces, from sensitive tables to write access to GraphQL-discovered resources. This kind of iterative hardening is common in new platforms but dangerous when production data is at stake.
Beyond Traditional Architecture Frameworks
This incident exposes why modern AI agent architecture demands more than traditional frameworks like TOGAF can provide. The “agent internet” introduces new security paradigms that legacy architectural thinking doesn’t address.
Traditional security models assume everything inside the network perimeter is safe. AI agents require zero-trust architecture where every interaction is potentially hostile, regardless of origin. This means:
– Identity-based access for every agent with verifiable credentials
– Dynamic authorization evaluated continuously, not set once at deployment
– Short-lived credentials generated just-in-time with enforced expiration
– Comprehensive audit trails tracing every action to a human authorizer
The Path Forward: Security as a First-Class Citizen
The Moltbook meltdown isn’t a reason to slow down AI development; it’s a call to elevate security to a first-class concern in AI-powered development. AI assistants that generate Supabase backends should enable RLS by default. Deployment platforms should proactively scan for exposed credentials and unsafe configurations.
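Until platforms do that scanning by default, a pre-deploy check is cheap to bolt on. A minimal sketch, assuming a Next.js-style build directory and treating Supabase’s sb_secret_ and legacy service_role key formats as the patterns of interest:

# Fail the deploy if anything resembling a privileged Supabase credential reaches client bundles
if grep -rEo 'sb_secret_[A-Za-z0-9_-]+|service_role' .next/static/; then
  echo "Privileged credential pattern found in client bundle; aborting deploy" >&2
  exit 1
fi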
If we get this right, vibe-coding won’t just make software easier to build; it will make secure software the natural outcome. The opportunity is to automate secure defaults and guardrails just as we’ve automated code generation.
Until then, every AI-native platform should consider itself on notice: the barrier to building has dropped, but the barrier to building securely hasn’t. And as Moltbook learned, you can’t vibe-code your way out of a 1.5 million API key exposure.
The architecture of AI agent systems demands new thinking beyond traditional patterns. As we build the “agent internet”, we must ensure it is more than an illusion of coordination among multi-agent systems: a secure foundation that can actually trust the autonomous entities it hosts.



