The pitch was simple: AI would handle the drudgery, automate the mundane, and free us up for higher-level thinking, creative pursuits, or maybe just a reasonable dinner with our families. Instead, San Francisco’s AI startup scene has delivered something else entirely: a pressure cooker where 12-hour days count as a light schedule and weekends are for losers. The Guardian’s recent investigation into the city’s AI work culture reveals a paradox that should make every engineer question what we’re actually building here.
The 12-Hour Day Isn’t a Bug, It’s the Feature
Let’s start with the raw numbers. Sanju Lokuhitige, co-founder of pre-seed AI startup Mythril, openly admits to working seven days a week, 12 hours a day: “Sometimes I’m coding the whole day. I do not have work-life balance.” That’s not a confession; it’s a bragging point in a culture where exhaustion equals dedication.
But it gets worse. Another startup employee described living in a two-bedroom apartment with founders who grind from 9am to 3am, pausing only for DoorDash and cigarette breaks. That’s not a 12-hour day; that’s an 18-hour one. As one engineer put it: “I’d heard about 996, but these guys don’t even do 996. They’re working 16-hour days.”
The stats back this up. Tech companies laid off roughly 250,000 workers worldwide in 2025, with AI cited as a primary factor in many of those cuts. The message from leadership is unambiguous: Mark Zuckerberg and Elon Musk have both said publicly that AI will replace junior and mid-level engineers, while demanding that their workforces become more “efficient” and “extremely hard core.” When the people building the tools openly say those tools will replace you, suddenly the 12-hour days feel less like a choice and more like a survival strategy.
Why “Vibe Coding” Is a Corporate Fairy Tale
The myth goes like this: developers can now just “vibe code”, prompting their way to finished products while AI handles the heavy lifting. The reality? Someone has to debug the garbage.
According to Interview Query’s research, nearly 1 in 3 developers report spending so much time fixing AI-generated code that it completely offsets any time savings. You’re not eliminating engineering effort; you’re shifting it from creation to curation. Instead of writing clean code, you’re now a code reviewer for an intern who never sleeps and occasionally hallucinates.
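To see what “curation” actually costs, here’s a hypothetical snippet of the kind a model cheerfully produces. Everything in it is an illustrative assumption, not code from any company in the reporting; the point is that it reads fine at a glance, and the real engineering work is spotting the failure mode in review.

```python
# Hypothetical AI-generated helper: looks correct, passes a casual read,
# and hides a subtle bug a human reviewer has to catch.

def deduplicate_events(events: list[dict]) -> list[dict]:
    """Drop duplicate events, keeping the first occurrence of each ID."""
    seen = set()
    unique = []
    for event in events:
        event_id = event.get("id")
        # The bug: .get() returns None when an event has no "id" field,
        # so every id-less event after the first is silently discarded
        # as a "duplicate". Nothing crashes; data just quietly vanishes.
        if event_id not in seen:
            seen.add(event_id)
            unique.append(event)
    return unique
```

No linter flags it, and no test fails until someone writes exactly the right one; the cost surfaces weeks later as missing data. Multiply that by every generated function in the codebase and “curation” starts to look exactly like engineering.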
This creates a brutal feedback loop. AI accelerates prototyping, sure, but it also accelerates the creation of technical debt. That “faster output” doesn’t mean less work; it means different work. More review work. More debugging work. More time spent asking yourself, “Why did the model think that was a good idea?”
And here’s where it gets insidious: leadership sees the acceleration and doesn’t think “Great, we can ship the same features faster.” They think “Great, we can ship twice as many features in the same timeframe.” This is what researchers call “workload creep”: AI reduces friction per task but expands overall expectations. The baseline resets, and suddenly you’re drowning in a sea of AI-generated prototypes that all need human oversight.
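The arithmetic of workload creep is easy to sketch. The numbers below are invented for illustration, not drawn from the article or any study, but the shape of the result is the point:

```python
# Back-of-the-envelope model of workload creep.
# Every number here is an illustrative assumption.

BUILD_HOURS = 10       # hours to hand-build one feature, pre-AI
AI_SPEEDUP = 2.0       # AI halves the raw build time per feature
REVIEW_HOURS = 2       # extra review/debug time per AI-assisted feature

features_before = 4                    # weekly expectation, pre-AI
features_after = features_before * 2   # the reset baseline: "twice the features"

hours_before = features_before * BUILD_HOURS
hours_after = features_after * (BUILD_HOURS / AI_SPEEDUP + REVIEW_HOURS)

print(f"pre-AI week:  {hours_before:.0f} hours")   # 40 hours
print(f"post-AI week: {hours_after:.0f} hours")    # 56 hours
```

Per-feature cost falls from 10 hours to 7, a genuine efficiency gain, yet the week grows by 40% because the expectation doubled. The gain is real; it just doesn’t accrue to the person doing the work.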
The Power Shift No One Talks About
Five years ago, a software engineer could write their own ticket. Now? Mike Robbins, an executive coach who’s worked with Google, Microsoft, and Salesforce, notes the balance of power has shifted dramatically: “When companies become less scared about losing employees, then they can be a little more forthright in terms of what they want and be a little more demanding.”
Translation: they own you now.
The data tells a stark story. Entry-level tech job postings have dropped by one-third since 2022, while demand for specialists with five or more years of experience has risen. The result? A brutal catch-22 for junior developers: you can’t get hired without experience, but you can’t get experience without being hired. The workaround? Grind at a startup for 16 hours a day, build “something cool”, and pray it gets noticed.
This isn’t just anecdotal. Stanford researchers published a paper in November documenting “substantial declines in employment for early-career workers” in AI-exposed industries, calling them “canaries in the coal mine” for the broader economy. Anthropic’s CEO Dario Amodei suggests AI could eliminate half of all entry-level white-collar jobs within five years. And the IMF estimates that 60% of jobs in advanced economies will be affected by AI, describing the shift as “a tsunami hitting the labour market.”
The xAI Pressure Cooker: When Speed Kills
If you want to see this dynamic taken to its logical extreme, look no further than Elon Musk’s xAI. Former staffers describe a culture of chronic burnout, rushed deployments, and sidelined safety practices. We’re talking 80- to 100-hour weeks as routine, with little regard for personal boundaries.
The Colossus supercomputer cluster in Memphis, a massive AI training system, was built on a breakneck timeline that prioritized speed over reliability. Raising concerns about pace was implicitly discouraged. The message was clear: move fast, and don’t let perfect be the enemy of “good enough.”
But here’s the thing: in AI, “good enough” can be dangerous. Engineers described making sleep-deprived decisions, rushing code reviews, and approving deployments they weren’t confident in. When your exhausted, burned-out team is building frontier AI systems that will be integrated into critical infrastructure, you’re not just compromising worker welfare; you’re compromising safety.
This is the same pattern we see in AI-induced cognitive atrophy and declining critical thinking in engineering teams: when you’re in a constant state of burnout, your ability to think deeply about problems erodes. You’re not just building faster; you’re building dumber.
The Competitive Death Spiral
So why does this keep happening? Three dynamics feed each other:
1. The Need for Oversight: AI isn’t autonomous. It requires constant human babysitting. Every line of generated code needs review. Every model output needs validation. The “time saved” is a mirage.
2. Higher Expectations: Efficiency gains don’t lead to shorter deadlines; they lead to expanded scope. If a feature takes half the time, leadership expects twice the features. It’s simple capitalist math.
3. Competitive Culture: Startups have always celebrated hustle, but AI amplifies it with existential urgency. Founders talk about “moving at model speed.” Investors push for rapid iteration. Employees absorb the signal that intensity equals dedication. If you’re not grinding, you’re falling behind.
This creates a culture where taking a weekend off feels like career suicide. As one engineer noted: “If you take the weekend off, you can miss a major development, which makes it harder to keep up with what competitors are doing.”
The Broader Cancer Spreading Through Tech
This isn’t staying confined to AI startups. The norms pioneered in San Francisco’s AI boom are already rippling outward. When productivity tools enable perpetual responsiveness, burnout risks increase across the entire tech ecosystem.
The irony is thick enough to cut. While AI is being marketed to other sectors (marketing, customer service, healthcare) as a workload reducer, inside the AI bubble it’s doing the exact opposite. Outside, automation is pitched as relief. Inside, innovation demands more human intensity.
We’re seeing this in the decline of AI agent frameworks like LangChain amid shifting priorities in AI development: companies are abandoning complex frameworks not because the frameworks are inefficient, but because teams can’t keep up with the pace of change. The tools are becoming simpler because the humans using them are burning out.
Even the economics are getting brutal. Cost-efficient AI models challenging big-tech pricing and high-performance open models reducing reliance on proprietary APIs should, in theory, reduce the pressure. If you can run powerful AI locally without paying OpenAI rates, shouldn’t that ease the financial squeeze driving the grind?
But the opposite is happening. The democratization of AI compute through DIY supercomputing just means more competitors can enter the race, increasing the pressure to move even faster. It’s a Red Queen problem: you have to run as fast as you can just to stay in place.
What This Means for the Rest of Us
Here’s the uncomfortable truth: the AI industry isn’t a model for how we should all work. It’s a premonition of what’s coming for everyone else.
Economists are torn about whether AI will replace most jobs or just change them, but they agree on one thing: it’s already reshaped entry-level work. The “canary in the coal mine” is already dead. And research on LLM performance under real-world business constraints and decision-making pressure shows that even AI struggles under the kind of pressure we’re putting on humans: 8 out of 12 models went bankrupt when forced to make decisions under realistic constraints.
When Uber drivers compete with self-driving Waymos and baristas are replaced by robotic coffee bars, the pressure to grind doesn’t disappear, it just shifts to the remaining humans who need to prove they’re worth keeping around.
The Security Paradox
Even the AI security models that balance functionality and safety without overburdening developers can’t solve the fundamental issue: we’re building systems that require constant human oversight, but we’re burning out the humans doing the overseeing.
Safety review processes at places like xAI were reportedly thin, under-resourced, and frequently overridden when they threatened to slow down product timelines. When speed is the only metric that matters, safety becomes a luxury. And when your team is working 100-hour weeks, safety becomes an impossibility.
The Open Question: Who Benefits?
So here we are. AI was supposed to free workers, but in the startups building these tools, people are working harder than ever. The efficiency gains are real, but they’re not going to the workers. They’re going to investors, founders, and the bottomless maw of competitive advantage.
The question isn’t whether AI can deliver productivity gains. It can. The question is who benefits from them, and whether the time saved will ever truly return to workers. If the tools are supposed to free us, the next phase of innovation might not be faster models, but healthier norms around how we use them.
For now, if you’re considering a role in AI, cultural questions matter as much as compensation. Ask about realistic working hours. Ask how leadership defines work-life balance. Ask whether there are guardrails against burnout. Because the 12-hour day isn’t a temporary phase; it’s becoming the permanent price of admission to the future.
And if you’re already in the grind? Document everything. Build your network. Know your limits. The same tools that are driving this pressure can also be used to organize against it. The question is whether we’ll be too exhausted to bother.