The Productivity Trap: How AI Is Engineering a Burnout Crisis

New research reveals AI tools aren’t reducing tech workloads; they’re quietly expanding them through ‘workload creep,’ turning efficiency gains into burnout, role confusion, and cognitive overload.

AI was supposed to give us our time back. Instead, it’s eating it from the inside out. A bombshell study from UC Berkeley researchers has exposed what many engineers have been whispering in Slack channels and griping about in anonymous forums: AI tools aren’t reducing work; they’re intensifying it, expanding it, and quietly engineering a burnout crisis that management dashboards can’t measure.

The eight-month field study, embedded at a 200-person tech company and published in the Harvard Business Review, tracked knowledge workers as they adopted AI tools. The findings? Task expansion, expectation inflation, and a phenomenon the researchers call “workload creep” that turns productivity gains into a self-reinforcing cycle of exhaustion. If you’re a tech worker feeling like your AI-assisted days are somehow longer and more draining, you’re not imagining things. The data confirms it.

The Efficiency Illusion: Where the Time Actually Goes

Here’s the core paradox: AI makes individual tasks faster, but the time savings evaporate before you can close your laptop. The Berkeley team identified three mechanisms that transform efficiency into exhaustion:

Task Expansion: When something becomes easier to start, you start more of it. Product managers began writing code. Researchers took on engineering tasks. Roles with clear boundaries dissolved into a blurry soup of “full-stack responsibilities.” The friction that once protected your focus (learning curves, blank-page anxiety, technical gatekeeping) vanished. Suddenly, everything felt possible, so everything became expected.

Expectation Inflation: Managers see a report that used to take four hours now taking two, and they don’t see a half-day of relief. They see an opportunity for two more reports. The Interview Query analysis of “vibe coding” trends found that nearly 1 in 3 developers report having to fix AI-generated code in ways that offset the time savings. You’re not working less; you’re debugging more, reviewing more, and coordinating more, often for work you didn’t initiate.

Multitasking Mayhem: AI enables parallel workstreams that would have been impossible before. One engineer described manually coding while AI agents churned out versions in the background, creating a “C’mon do something” meme loop of waiting, reviewing, and context-switching. The cognitive cost of switching between six problems, each “only taking an hour with AI,” is brutal for human brains. The AI doesn’t get tired. You do.

The Burnout Metrics They Don’t Put in Board Decks

The human cost shows up in the numbers that actually matter. A DHR Global survey of 1,500 corporate professionals found 83% experiencing burnout, with overwhelming workloads cited as the top culprit. The Upwork Research Institute reported that 77% of employees using AI said these tools decreased their productivity and increased their workload. Let that sink in: more than three-quarters of AI users are falling behind because of the tools meant to push them ahead.

The Berkeley study revealed a vicious cycle: AI accelerates tasks, which raises speed expectations, which makes workers more reliant on AI, which widens the scope of what they attempt, which expands the quantity and density of work. Rinse, repeat, burnout. Several participants noted they felt more productive but not less busy; in fact, they felt busier than before.

The seniority gap is stark: burnout hit 62% of associates and 61% of entry-level workers, while only 38% of C-suite leaders reported the same. The people implementing AI are suffering; the people mandating it are watching dashboards go up.

Skill Atrophy: The Hidden Cognitive Tax

Software engineer Siddhant Khare’s viral essay “AI fatigue is real and nobody talks about it” captures the existential dread: “I shipped more code last quarter than any quarter in my career. I also felt more drained than any quarter in my career.” He describes his role morphing from creator to reviewer, a judge at an endless assembly line of AI-generated pull requests.

The scariest part? What happens when the AI isn’t there. Khare describes struggling to reason through a concurrency problem on a whiteboard without his AI assistant. It’s like GPS navigation: the skill atrophies because you stop using it. Even Andrej Karpathy, who coined “vibe coding”, recently admitted he’s “slowly starting to atrophy my ability to write code manually.”

This isn’t just about comfort, it’s about cognitive resilience. The Berkeley researchers warned that workload creep can lead to “cognitive fatigue, burnout, and weakened decision-making.” When you’re constantly context-switching and verifying AI output, you’re not building deep expertise. You’re becoming a very sophisticated linter.

Vibe Coding and the Quality Death Spiral

The “vibe coding” phenomenon, where non-engineers generate code through conversational prompts, has created a secondary crisis for actual engineers. One Berkeley participant perfectly captured it: engineers suddenly found themselves “reviewing, correcting, and coaching colleagues who were vibe-coding.” The person who automated part of their job just created more work for someone else.

This isn’t theoretical. The study documented product managers shipping AI-generated code that engineers had to untangle, researchers taking on engineering work they weren’t trained for, and a general degradation of code quality that created invisible technical debt. The productivity gain at one level becomes a productivity sinkhole at another.

Jevons Paradox in the Enterprise

Developer forums have been buzzing with a term that perfectly explains this dynamic: Jevons paradox. When technological progress increases the efficiency of resource use, the rate of consumption tends to rise, not fall. The cotton gin increased the demand for enslaved labor; AI is increasing the demand for knowledge workers’ labor.

The paradox plays out in three ways:
1. Individual: You finish tasks faster, so you do more tasks
2. Team: Your efficiency enables colleagues to offload work to you
3. Organizational: Leadership sees productivity metrics improve and reallocates headcount or increases targets
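
To see how the individual-level version compounds, here is a toy back-of-the-envelope model; the numbers are purely illustrative assumptions, not figures from the Berkeley study or any survey cited above. If AI halves the time per task but expectation inflation more than doubles the number of tasks attempted, and every AI-assisted task drags review and coordination overhead behind it, total hours rise even though each task got faster:

```python
# Toy model of workload creep. All numbers are illustrative assumptions,
# not figures from the Berkeley study or any survey cited in this article.

def weekly_hours(tasks: int, hours_per_task: float, review_overhead: float = 0.0) -> float:
    """Total weekly hours when each task also needs review/coordination time."""
    return tasks * (hours_per_task + review_overhead)

# Before AI: 10 tasks a week at 4 hours each, negligible review overhead.
before = weekly_hours(tasks=10, hours_per_task=4.0)

# After AI: each task is twice as fast, but expectation inflation raises the
# task count and every AI-assisted task now carries half an hour of review.
after = weekly_hours(tasks=24, hours_per_task=2.0, review_overhead=0.5)

print(f"Before AI: {before:.0f} hours/week")  # 40
print(f"After AI:  {after:.0f} hours/week")   # 60
```

The per-task speedup is real; it is the response to the speedup, more tasks and more review, that erases the saving.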

The Reddit sentiment from r/programming threads captures the frustration: workers are waiting for pay raises that never come, watching benefits accrue to companies while they absorb the cognitive cost. As one thread noted, “Labor sees benefits in proportion to how supply and demand stack up”, and right now, AI is radically skewing that balance by creating an oversupply of attempted work.

The Structural Problem: No Guardrails, Just Gas Pedals

The Berkeley researchers emphasize this isn’t a technology problem; it’s a management failure. Companies rolled out AI tools without “AI practice” frameworks. No structured pauses before decisions. No sequencing of work to reduce context-switching. No protection for focus time. Just raw acceleration.

Workers are expected to self-regulate in an environment designed for addiction. AI companies push constant updates, creating FOMO that has engineers spending weekends evaluating new tools. The interface design encourages “just one more prompt” during lunch breaks or right before logging off. As Khare notes, it’s engineered like a slot machine: you’re always one prompt away from the perfect answer.

The result is a fundamental misalignment of incentives. Companies capture the financial upside of productivity gains while workers absorb the downside of cognitive overload. The tools are free; the mental health costs are externalized.

What Actually Works: Building AI Practice

The researchers and practitioners agree on solutions, but they require organizational will, not individual hacks:

1. Explicit Scope Boundaries: Define what AI should and shouldn’t do. Not every task needs acceleration. Some need deep thinking.

2. Protected Focus Time: Calendar blocks that are AI-free. Use the efficiency gains for thinking, not just more doing.

3. Decision Pauses: Mandatory cooldown periods before shipping AI-generated work. Prevents the “vibe commit” problem.

4. Review Budgets: Cap the percentage of AI-generated code a team can produce based on review capacity. Prevents reviewer burnout. (A rough sketch of what this could look like follows the list.)

5. Skill Maintenance: Regular “AI-free days” to prevent atrophy. Whiteboard sessions, manual coding exercises.

6. Outcome Metrics: Measure quality and sustainability, not just speed. Track reviewer time, bug rates, and engineer satisfaction.
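
The review-budget idea (number 4) is the easiest to make mechanical. As a minimal sketch, assuming a team tracks roughly how long an AI-generated pull request takes to review, a weekly check could compare the queued AI work against the reviewers’ protected capacity; every name and number below is hypothetical:

```python
# Hypothetical sketch of a review budget: cap accepted AI-generated PRs by the
# review capacity actually available. Names and numbers are illustrative,
# not a real tool and not the study's methodology.

from dataclasses import dataclass

@dataclass
class ReviewBudget:
    reviewers: int              # engineers with protected review time this week
    hours_per_reviewer: float   # protected review hours per reviewer
    hours_per_ai_pr: float      # average review time for one AI-generated PR

    @property
    def capacity(self) -> float:
        return self.reviewers * self.hours_per_reviewer

    def can_accept(self, queued_ai_prs: int) -> bool:
        """True if the queued AI-generated PRs fit inside this week's capacity."""
        return queued_ai_prs * self.hours_per_ai_pr <= self.capacity

budget = ReviewBudget(reviewers=4, hours_per_reviewer=5.0, hours_per_ai_pr=1.5)
print(budget.can_accept(queued_ai_prs=12))  # True: 18h of review fits in 20h
print(budget.can_accept(queued_ai_prs=20))  # False: 30h of review blows the budget
```

Whether this runs as a CI gate or a planning-meeting spreadsheet matters less than the principle: review capacity becomes an explicit, visible constraint instead of invisible overtime.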

Khare’s personal rules are instructive: 30-minute timers on AI use, ignoring AI conversations during breaks, and recognizing that the cost of coordination often exceeds the benefit of generation.
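
The timer rule needs nothing fancier than a kitchen timer, but for completeness, here is a minimal sketch of what a session budget could look like in code; this is an illustration, not a tool Khare describes:

```python
# Minimal illustration of a 30-minute AI session budget: start it when you
# open the assistant; it nags you to stop and review when time runs out.
# Purely illustrative; not a tool described in Khare's essay.

import time

def ai_session_timer(minutes: float = 30) -> None:
    deadline = time.monotonic() + minutes * 60
    print(f"AI session started: budget is {minutes:.0f} minutes.")
    while time.monotonic() < deadline:
        time.sleep(30)  # check twice a minute
    print("Time's up. Close the assistant, review what it produced, move on.")

if __name__ == "__main__":
    ai_session_timer()
```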

The Bottom Line: Who Captures the Upside?

The Berkeley study’s most damning insight is that AI is becoming a “labor multiplier” rather than a labor saver. Work expands to fill efficiency, expectations reset upward, and output becomes the baseline. The question isn’t whether to use AI; it’s who benefits when you do.

If productivity gains simply translate into more assigned work without guardrails, then AI becomes the most sophisticated burnout engine ever invented. The long-term challenge isn’t adoption; it’s preventing workload creep from becoming the new normal.

For engineers and tech workers, the takeaway is blunt: your AI-assisted speed is being weaponized against you. For managers, the mandate is clear: implement intentional constraints, or watch your best people hit a cognitive wall at 4x velocity.

The tools aren’t going away. But if we don’t change how we measure success and protect human capacity, we’ll end up with a workforce that can generate 10x more code, documentation, and analysis, yet has zero ability to think critically about any of it.
