
OpenAI’s partnership with the Department of Defense was supposed to be a strategic masterstroke. Instead, it became a case study in how quickly 900 million users can turn hostile when they discover their productivity tool is moonlighting as military infrastructure. Within 48 hours of announcing the deal, ChatGPT’s mobile app experienced a 295% day-over-day spike in uninstalls, a statistical anomaly that dwarfs typical churn rates and signals a fundamental rupture between Silicon Valley’s AI giants and their consumer base.
The data, first reported by Sensor Tower and corroborated by Appfigures, reveals more than just temporary outrage. It exposes a market where ethical positioning has become a competitive weapon, and where government contracts now carry a “Pentagon Penalty” that can crater user trust overnight.
The Friday Afternoon Fiasco
OpenAI announced its Department of War agreement late on Friday, February 28, a timing choice that CEO Sam Altman later admitted was a mistake. “We shouldn’t have rushed to get this out on Friday”, Altman conceded on X, describing the move as “opportunistic and sloppy.” The rush was strategic: Anthropic, OpenAI’s primary rival, had just been blacklisted by the Pentagon for refusing to remove safeguards against mass surveillance and autonomous weapons.
The timing couldn’t have been worse. While OpenAI was positioning itself as the Pentagon’s new favorite, users were watching Anthropic take a principled stand, and voting with their download buttons. Claude, Anthropic’s AI assistant, saw U.S. downloads surge 37% on Friday and 51% on Saturday, eventually dethroning ChatGPT as the #1 free app on Apple’s App Store.

Quantifying the Exodus
The numbers paint a brutal picture of user sentiment. According to Sensor Tower data, ChatGPT’s typical day-over-day uninstall rate hovers around 9%. The following Saturday, that figure jumped to 295%, a nearly 33-fold increase that represents one of the sharpest reversals in consumer app history.
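Taken at face value, the "33-fold" figure follows directly from the two reported rates. A back-of-envelope check (assuming both percentages measure the same day-over-day uninstall metric, which the sourcing does not fully spell out):

```python
# Back-of-envelope check on the reported uninstall spike.
# Assumption: both figures are the same day-over-day uninstall metric.
baseline_rate = 9.0   # typical day-over-day uninstall rate (Sensor Tower)
spike_rate = 295.0    # rate reported for the Saturday after the announcement

fold_increase = spike_rate / baseline_rate
print(f"{fold_increase:.1f}x")  # prints "32.8x", i.e. nearly 33-fold
```

The rounding is the only wrinkle: 295 / 9 ≈ 32.8, which the article reasonably reports as "nearly 33-fold."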
The damage extended beyond mere deletions:
Review bombing: One-star reviews surged 775% on Saturday, followed by another 100% increase on Sunday
Download collapse: New installs dropped 13% day-over-day Saturday, then fell another 5% Sunday
Competitor surge: Claude’s U.S. downloads surpassed ChatGPT’s for the first time ever, with Appfigures reporting an 88% day-over-day increase
This wasn’t just noise from a vocal minority. The scale suggests a mainstream user base suddenly questioning whether OpenAI’s infrastructure aligned with their values; the collision between a consumer AI product and a military contract proved impossible for many users to ignore.
The Surveillance Loophole
OpenAI’s initial contract language allowed for “any lawful use” of its technology, a phrase that legal experts warned creates massive loopholes for domestic surveillance. Under pressure, Altman announced amendments prohibiting the AI from being “intentionally used for domestic surveillance of U.S. persons and nationals”, and affirmed that intelligence agencies like the NSA couldn’t access the systems without “follow-on modifications.”
But critics immediately spotted the gaps. The prohibition applies only to “deliberate” tracking, leaving room for “incidental collection”, a distinction that Senator Ron Wyden, who has repeatedly warned about the NSA’s purchase of commercially available data, called insufficient.
“The Defense Department is throwing a fit over Anthropic asking for the bare minimum ethical guardrails”, Wyden stated. “Creating AI profiles of Americans based on that data represents a chilling expansion of mass surveillance.”
This controversy sits within a wider pattern of government surveillance contracting, one in which AI capabilities are increasingly deployed to monitor populations at scale.
Inside the Employee Revolt
The backlash wasn’t limited to consumers. Nearly 900 employees from OpenAI and Google signed an open letter demanding their companies refuse to allow the Department of War to use their models for surveillance or autonomous killing. The letter warned that the government was attempting to “divide each company with fear that the other will give in.”
Former OpenAI policy researcher Miles Brundage expressed skepticism about the company’s claims of maintaining ethical standards, posting on X that OpenAI employees’ “default assumption here should unfortunately be that OpenAI caved + framed it as not caving.”
The internal dissent highlights a growing tension in AI deployment: the gap between marketing materials promising “AI for good” and the national-security realities of how these systems are actually used. When defense contracts worth billions clash with ethical red lines, something has to give, and increasingly, it’s employee loyalty.
The Migration Patterns
Where are the disillusioned users going? The data suggests three primary destinations:
Anthropic’s Claude: The immediate beneficiary, climbing 20+ ranks in the App Store to claim the #1 spot. Claude’s popularity stems from its explicit refusal to partner with the Pentagon under terms that allowed surveillance, a stance that cost them government contracts but earned consumer trust.
Google’s Gemini: While Google has its own military AI controversies, the company hasn’t faced the same immediate backlash, making it a refuge for users fleeing OpenAI specifically.
Offline Models: Privacy-conscious users are increasingly running smaller open-source models locally, a privacy-preserving alternative that avoids sending data to any company with government contracts.
This fragmentation represents a fundamental shift in the AI market. Users are no longer choosing based solely on capability benchmarks; they’re conducting due diligence on corporate ethics and on how companies monetize their data.
Can the Damage Be Reversed?
Altman is attempting damage control through transparency, sharing internal memos and promising constitutional safeguards. But the 295% uninstall figure suggests that trust, once lost, doesn’t return with a software patch.
The incident reveals a new reality for AI companies: government contracts now carry quantifiable reputational costs. In an era where AI systems are increasingly viewed as infrastructure with geopolitical implications, rather than neutral productivity tools, consumer perception has become a material business risk.
For OpenAI, the Pentagon deal represents strategic expansion into a sector offering long-term revenue stability. But as the uninstall data shows, that stability comes at the price of alienating the very users who made ChatGPT a household name. The question isn’t whether OpenAI can afford to lose the Pentagon’s business; it’s whether it can afford to lose the users who find that business model incompatible with their values.
The AI wars, it turns out, aren’t just being fought in server farms and classified networks. They’re being fought in the App Store rankings, one uninstall at a time.




