The OpenClaw Paradox: Why Its Creator Joined OpenAI to Keep It Open Source
February 17, 2026
Peter Steinberger had a unicorn in his hands. OpenClaw — his open-source AI agent framework — had just gone viral. 180,000+ GitHub stars in two weeks. 2 million visitors. Every major AI lab in San Francisco wanted to meet him.
The obvious move? Raise a Series A, build a company, ride the hype to a multi-billion-dollar valuation. In today's AI market, viral infrastructure projects with this kind of traction command $5-10B+ valuations.
Instead, he joined OpenAI.
Not to build a company. Not to cash out. To keep OpenClaw open source.
This is the story of why the most counterintuitive choice might be the smartest one.
#The 90-Day Rocket Ship
OpenClaw didn't stumble into virality. It earned it.
Steinberger launched OpenClaw (originally called Clawdbot, then briefly renamed to Moltbot before becoming OpenClaw in late January) in November 2025 as an open-source AI agent framework. It was ambitious: a tool that could control your computer, automate tasks, and interact with the world like a human assistant.
But it was late January 2026 when everything changed. The Moltbook project — a demo showing OpenClaw autonomously building a Mac app — hit the front page of Hacker News. Then Twitter. Then everywhere.
Within the first two weeks of going viral:
- 180,000+ GitHub stars (reaching 200,000+ by mid-February)
- 2 million visitors to the project page
- Coverage from TechCrunch, The Verge, CNBC
- Warnings from China's industry ministry about "security risks"
Steinberger became the hottest founder in AI. Multiple labs courted him during his San Francisco meetings in early February. The path was clear: raise money, build a company, become the next AI unicorn.
He chose differently.
#The High-Caliber Hire
On February 14-15, 2026, Steinberger announced he was joining OpenAI. OpenClaw would become an independent foundation, sponsored by OpenAI but guaranteed to remain open source.
OpenAI effectively hired one of the most sought-after software engineers in the AI space — someone who had just proven they could build infrastructure that captured global attention. While the exact terms aren't public, top-tier engineering hires at frontier AI labs in 2026 command total compensation packages of $5-15M+ annually (base salary + stock + bonuses), with multi-year retention packages that can exceed $50M. For someone of Steinberger's caliber and proven track record, the numbers are likely on the higher end.
Here's the comparison he faced:
- Build a Company: the expected path
- Join OpenAI: the paradox choice
On paper, it looks like an unconventional choice. But Steinberger's reasoning is clearer when you understand what he actually wants.
#"I Want to Change the World, Not Build a Large Company"
In his personal blog post announcing the move, Steinberger was direct:
"I want to change the world, not build a large company."
That's not startup-speak. That's someone who already built a company and knows exactly what it costs.
Steinberger spent 13 years building PSPDFKit, a successful developer tools startup. He knows the fundraising treadmill. The investor pressure. The quarterly metrics. The pivots. The trade-offs between growth and mission.
He also knows that keeping a project truly open source gets harder with every funding round. Series A investors want revenue. Series B investors want scale. Series C investors want an exit. Somewhere along the way, "open source" becomes "open core" becomes "proprietary with an API."
Joining OpenAI, counterintuitively, protects OpenClaw from that fate.
#The Foundation Model (Literally)
Here's what the deal actually looks like:
- OpenClaw becomes an independent foundation — not owned by OpenAI, not controlled by Steinberger, not beholden to investors
- OpenAI sponsors the project — provides funding, infrastructure, and research access without equity or governance control
- Steinberger joins OpenAI's team — works on agent research, gets access to frontier models and the best AI researchers in the world
- OpenClaw stays fully open source — guaranteed, in perpetuity
It's a rare structure in AI. Most "open source" projects from big labs (looking at you, Meta's "open" Llama) come with strings attached. This one has OpenAI's resources but remains genuinely independent.
#Why OpenAI?
The paradox makes more sense when you consider what Steinberger actually needs to build the agent he envisions.
His stated goal: "Build an agent that even my mum can use."
That requires:
- Frontier models — GPT-4, o1, whatever comes next. The best reasoning engines available.
- Research collaboration — Access to the people pushing the boundaries of agent capabilities
- Infrastructure — The compute and tooling to test at scale
- Focus — No fundraising, no board meetings, no growth metrics. Just building.
No other path gives him all four. Building a company gets him funding and maybe research collaboration, but loses focus and independence. Staying independent keeps control but limits resources. Meta or Anthropic might offer similar deals, but OpenAI is the frontier leader in agent research.
The irony is that joining the most closed lab is what keeps his project open.
#The Builder's Choice
This is what happens when a founder optimizes for impact instead of outcomes.
Steinberger could have built a unicorn. He's done it before. He knows the playbook. But he also knows that playbook doesn't maximize for "change the world" — it maximizes for "build a large company."
Those aren't the same thing.
OpenClaw's virality gave him leverage to dictate terms. He used it to secure the rarest thing in AI: genuine independence with frontier resources. OpenAI gets a brilliant agent researcher. Steinberger gets to build without constraints. OpenClaw stays open source forever.
It's a paradox. But it might be the smartest play in AI this year.
#What This Means for Open Source AI
The OpenClaw deal sets a precedent. It shows that viral open-source projects have more negotiating power than founders realize.
Most assume the path is binary: build a company or stay indie. OpenClaw proved there's a third option: partner with a frontier lab, retain independence, guarantee open source.
Will other labs follow? Meta's already sponsoring open models. Anthropic funds AI safety research. Google has dozens of open-source projects. But none have structured it quite like this — a true independent foundation with guaranteed openness and frontier access.
If it works, expect more founders to take this path. If it fails, we'll all learn something about the limits of independent foundations in AI.
Either way, Steinberger made the interesting choice. And in AI, interesting usually beats obvious.