There’s something surreal about watching an $850 billion company burn through cash like a teenager with their first credit card. The AI community is split between true believers who’d “pry these tools from my cold dead hands” and skeptics who see a financial train wreck happening in slow motion. Both sides are right, and that’s the problem.
OpenAI’s latest financial projections reveal a company generating impressive revenue while simultaneously digging a hole that would make a crypto mining operation blush. According to The Information, OpenAI now forecasts it will burn through $111 billion in cash by 2030, a figure so staggering it makes their $8 billion projected loss for 2025 look like a rounding error. That’s roughly $22 million evaporating every single day, or about $916,000 disappearing every hour. For context, that’s more than the median US household earns in a decade, gone in 60 minutes.
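The per-hour arithmetic is easy to sanity-check. A quick sketch, where the roughly $80,000/year median US household income is my own assumption, not a figure from the reporting:

```python
# Sanity-check the burn-rate arithmetic from the reported projections.
DAILY_BURN = 22_000_000           # ~$22M/day, as reported

hourly_burn = DAILY_BURN / 24     # ≈ $916,667/hour

# Assumption: US median household income of roughly $80,000/year,
# so about $800,000 over a decade.
MEDIAN_HOUSEHOLD_DECADE = 80_000 * 10

print(f"Hourly burn: ${hourly_burn:,.0f}")
print(f"Exceeds a decade of median household income: "
      f"{hourly_burn > MEDIAN_HOUSEHOLD_DECADE}")
```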

The kicker? Even with this astronomical spending, analysts estimate OpenAI will need an additional $207 billion in funding just to reach profitability by 2030. This isn’t a runway, it’s a launchpad to nowhere.
The Circular IOU Economy
Here’s where the financial engineering gets deliciously perverse. Amazon is reportedly ready to write OpenAI a $10 billion check. The natural question: what will OpenAI buy with that money? The answer: probably $10 billion worth of AWS cloud capacity. Welcome to the circular investment economy, where infrastructure providers fund the very companies that consume their services. It’s like a restaurant paying you to eat there, then counting your meal as revenue.
This pattern repeats across the AI ecosystem. Nvidia, the world’s most valuable company, isn’t just selling shovels during this gold rush, it’s accepting equity as payment for those shovels. With over 50 equity-for-chips deals this year, including a massive position in OpenAI, Nvidia has become a shadow financier of the AI boom. The company needs OpenAI to succeed because OpenAI needs Nvidia’s chips, which means Nvidia needs to keep funding OpenAI to buy more chips. If that sounds like a snake eating its own tail, you’re paying attention.
These circular investment patterns between cloud providers and AI labs create a dangerous illusion of sustainability. The money moves in a loop, generating impressive top-line numbers while obscuring a fundamental truth: someone, somewhere, eventually needs to make actual profit.
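The loop is easy to model. A toy ledger (my own simplification; real deals involve equity stakes and multi-year commitments, not a single cash round trip) shows why the top line inflates while no new money enters the system:

```python
# Toy model of a circular investment loop: an infrastructure provider
# funds a customer, and the customer spends the funding right back.
# Deliberately simplified accounting -- illustration, not a forecast.

def circular_round(amount):
    ledger = {"provider_revenue": 0, "lab_cash": 0, "external_cash_in": 0}
    ledger["lab_cash"] += amount           # provider invests in the AI lab
    ledger["lab_cash"] -= amount           # lab buys cloud capacity back
    ledger["provider_revenue"] += amount   # provider books it as revenue
    return ledger                          # external_cash_in never moves

result = circular_round(10_000_000_000)
# The provider's top line grows by $10B; net new money in the loop is zero.
print(result)
```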
The SAFE Trap That Wipes Out Founders
The venture capital structure fueling this bonfire makes the circular problem worse. OpenAI’s funding relies on Simple Agreements for Future Equity (SAFEs) and convertible debt, financial instruments that work brilliantly for investors but create perverse incentives.
Here’s how the mechanics of a typical round play out:
| Scenario | Investor’s value (ownership %) | Company valuation |
|---|---|---|
| Funding round | $100B (11.8%) | $850B |
| Future valuation (higher) | $200B (11.8%) | $1.7T |
| Future valuation (lower) | $100B (50%) | $160B |
Notice the asymmetry? If valuations go up, everyone wins. But if OpenAI’s valuation collapses to $160 billion at IPO, employees and founders get completely wiped out while investors still own half the company. The incentive structure demands ever-higher valuations, which requires ever-grander visions, which requires ever-more spending. It’s a financial perpetual motion machine that only works while going up.
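Under the standard cap-based conversion, the table’s upside numbers fall out of a one-line formula. The downside-protection figure in the last row is hypothetical here, since the actual deal terms aren’t public:

```python
def safe_stake(investment, valuation_cap):
    # A SAFE converts at the valuation cap: stake ≈ investment / cap.
    # (Simplified: ignores discounts, option pools, and later dilution.)
    return investment / valuation_cap

INVESTMENT = 100e9   # $100B round
CAP = 850e9          # $850B valuation cap

stake = safe_stake(INVESTMENT, CAP)   # ≈ 11.8%

# Upside: the stake is fixed, so value scales with valuation.
upside_value = stake * 1.7e12         # ≈ $200B at a $1.7T valuation

# Downside (hypothetical protection terms): the investor ratchets up
# to ~50% at a depressed exit, squeezing everyone else's share.
founders_before = 1 - stake           # ≈ 88.2% for founders/employees
founders_after = 1 - 0.50             # 50%, before any further dilution

print(f"Stake at cap: {stake:.1%}, upside value: ${upside_value / 1e9:.0f}B")
print(f"Founders/employees: {founders_before:.1%} -> {founders_after:.0%}")
```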
This explains why OpenAI is reportedly pushing for an IPO as early as this year. Investors need exit liquidity before the market loses faith in the story. The problem? The story is getting harder to believe.
Anthropic’s Slightly Less Flaming Train
Anthropic isn’t in the same boat, it’s in a slightly more seaworthy vessel taking on slightly less water. The company recently closed a $30 billion Series G at a $380 billion valuation, with a revenue run-rate of $14 billion that’s grown 10x for three consecutive years. Eight of the Fortune 10 are Claude customers, and business subscriptions to Claude Code have quadrupled since January 2026.
These numbers are genuinely impressive. A senior developer with over a decade of experience noted: “I went from $20->$100->$200 personal sub within 10 days” after trying Claude. The product-market fit is real, especially for coding.
But Anthropic faces the same fundamental physics. That $30 billion raise mostly flows to, you guessed it, compute providers. The company’s partnership with Infosys to integrate Claude into enterprise systems is smart, but it’s still a capital-intensive game of catch-up. When your CFO says the funding “reflects the incredible demand”, it’s worth asking: does it reflect demand for your product, or demand from your investors to keep the valuation story alive?
The company’s focus on “responsible AI” and safety research, while ethically commendable, adds another layer of cost without immediate revenue. It’s like a restaurant advertising how much it spends on food safety inspections: important, but not what drives customer adoption.
The Acquisition Math No One Wants to Talk About
Here’s the uncomfortable truth buried in Reddit threads and analyst notes: OpenAI and Anthropic are worth more to Google or Microsoft as acquisitions than as independent companies. Not because of their technology, but because of their burn.
Google and Microsoft have “robust continuing free cash flows” and can run these models on their own infrastructure. OpenAI paying AWS or Azure rates for compute is like renting a car for a decade instead of buying one. The hyperscalers can absorb these companies, eliminate the infrastructure markup, and instantly improve unit economics.
The argument against acquisition is valuation. Who can afford an $850 billion OpenAI at a premium? But that’s thinking like a traditional M&A player. If OpenAI’s valuation collapses to $100–200 billion and it defaults on its debts, the acquisition math changes dramatically. The “feast on the remains” scenario becomes plausible.
As one researcher at a major lab noted on developer forums, “hyperscaler giants can sit back and watch without much fear.” They’ve already secured the talent through acqui-hires, the technology through licensing deals, and the market through distribution. Waiting for a distress sale isn’t just rational, it’s optimal.
The Productivity Paradox and the China Wildcard
The bull case rests on AI tools being “essential” to modern development. And it’s true, many engineers won’t give up their AI assistants. But essential doesn’t automatically equal profitable.
The barrier to entry is surprisingly low. Chinese models are reportedly only a year behind and trending toward open source. When (not if) a Chinese lab releases a model that matches GPT-4 and open-sources it, the pricing pressure becomes existential. Open source doesn’t mean zero cost, you still need massive infrastructure, but it does mean zero differentiation.
This creates the economic paradox of automating jobs while undermining customer purchasing power. If AI tools replace enough developers, who pays for the $200/month subscriptions? The market starts eating itself.
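One way to see the paradox is as a feedback loop: every unit of automation shrinks the pool of paying seats. The parameters below are purely illustrative, not drawn from any reported data:

```python
# Toy model of the "market eating itself" dynamic: subscription revenue
# when automation displaces some fraction of subscribers each year.
# Illustrative parameters only -- not a forecast.

def revenue_over_time(seats, price_per_month, displacement_rate, years):
    trajectory = []
    for _ in range(years):
        trajectory.append(seats * price_per_month * 12)
        seats = int(seats * (1 - displacement_rate))  # fewer paying developers
    return trajectory

# 1M seats at $200/month, with 15% of seats automated away each year.
trajectory = revenue_over_time(1_000_000, 200, 0.15, 5)
print([f"${r / 1e9:.2f}B" for r in trajectory])
# Revenue declines year over year even though the tool keeps improving.
```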
Meanwhile, OpenAI’s aggressive hardware stockpiling has already secured control of roughly 40% of DDR5 RAM supply, forcing prices up 156%. It’s a desperate move to secure compute capacity, but it also signals how precarious the supply chain is. When you’re buying RAM from retail stores, you’re not planning for sustainable growth, you’re hoarding for a siege.
The Inevitable Reckoning
Everyone is acting rationally. Nvidia needs to sell chips. Microsoft needs to upsell AI in productivity tools. SoftBank needs a big win after WeWork. The incentive alignment is perfect, until it isn’t.
The problem is the sheer scale. We’re talking about $650 billion in capex from big tech companies this year, with OpenAI alone facing $1.5 trillion in spending commitments. Half of US economic growth is now driven by AI spending. Stocks like Comfort Systems (industrial HVAC) are soaring because data centers need cooling.
This is the $8 trillion AI infrastructure mirage. The math doesn’t add up because the business model hasn’t been proven at scale. Unlike Microsoft, Google, or even Netflix, which were profitable before going public, OpenAI is asking public markets to fund losses while promising a future that keeps receding.
When Microsoft stock dropped 12% in a day, erasing $440 billion, because investors finally questioned the AI growth narrative, it was a preview. The market is starting to ask: where’s the disruption? Where are the new business models? As one analyst pointed out, even the most hyped use cases like “agentic commerce” boil down to “filling out web forms, improving search, and basically creating an AI that works like Instagram ads.”
The Endgame Scenarios
So what happens? Three paths seem plausible:
- The Soft Landing: OpenAI and Anthropic slowly reduce burn, find sustainable niches, and become profitable at a much smaller scale. The $850B valuation was fantasy, but the core business survives.
- The Acquisition: One or both companies face a valuation crunch, default on obligations, and get acquired by a hyperscaler at a fraction of previous valuations. Investors get some return, founders get golden handcuffs, and the technology gets absorbed.
- The Meltdown: The funding cycle breaks before a soft landing or acquisition can happen. A combination of open-source competition, investor fatigue, and economic downturn creates a cascade. The AI layoff paradox meets the intensifying startup work culture in a death spiral.
The surreal part is that we can see all three paths simultaneously. The tools are genuinely useful. The financials are genuinely terrible. The incentives are genuinely misaligned. And the world is genuinely watching it happen in real-time.
The question isn’t whether OpenAI and Anthropic can survive without profitability. It’s whether they can survive long enough to find a path to profitability that doesn’t require a complete rewrite of their business model, or a buyer with deeper pockets and more patience than the current investors.
For now, the circular IOU economy spins on. But as any engineer knows, circular dependencies are a nightmare to debug, and they always break eventually. The only question is which component fails first.




