Why AI-Assisted Development Is More Exhausting Than It Should Be

The promise of AI-assisted development is that it should make developers' lives easier. In some ways it does. Yet I see many developers suffering from post-LLM burnout and exhaustion.

In part, this is because of the unrealistic expectations of the organizations they work for, which are caught up in AI hype and FOMO.

However, I am seeing another issue. And it's one rooted in the psychology of human-computer interaction (HCI).

Cognitive Modes

When you consider user behavior from an HCI perspective, you might think about "modes". A "mode" is a distinct state in which the same interface produces different behavior. But modes aren't just states of the interface. They're states of the user.

Imagine someone using a project management app. When they're scanning the board, they're in reading mode, absorbing the state of things. When they're creating a task, they're in input mode, making decisions about what to write and how to categorize it. When they're reviewing a teammate's task before marking it done, they're in evaluation mode.

Same app, perhaps the same screen, but three different cognitive states. Each uses different mental resources, and rotating between them is part of what makes the work feel varied rather than grinding.

Software development has always had these kinds of modes. And until recently, the natural rhythm of work kept developers moving between them.

Planning, Implementation, and Integration

There are at least three modes of work in software development: Planning, Implementation, and Integration.

Planning

Planning is about understanding the problem and designing an approach to solve it. What are the problems we need to solve? What are the constraints? What is the best architecture for the circumstances? How do the pieces fit together?

I would argue that this is the most cognitively demanding mode. It requires holding multiple concerns in your head at once, reasoning about tradeoffs, and making decisions that will impact every step that follows.

Implementation

Implementation is about making the solution a reality. Writing the code and solving the unexpected challenges along the way. Debugging, testing, fixing, and getting it all working.

But implementation also served a deeper cognitive function. It was a cognitive reset. After the taxing, uncertain work of planning, you could drop into the flow of building. The plan is defined, at least enough to start. And writing code, which is really the act of solving a series of small problems within a larger context, provides a rhythm of frequent, tangible success.

You write a function and it works. You connect the frontend, backend, and database and see data appear and update. You style a component and it looks right. These are small wins, but they accumulate, creating momentum. They produce the feeling of progress that sustains motivation through the harder parts of the work.

Even debugging, which can be frustrating, is a different kind of frustration than planning uncertainty. Debugging is traceable. There's a bug; eventually you find it. The satisfaction when you do is immediate and concrete.

Implementation was the mode where the abstract became real. It was the mode that, for many developers, recharged them for the other modes.

Integration

Integration is about quality control. Code review, both your own and others'. Making pull requests and merging code. Careful inspection of the choices made. Catching regressions, enforcing standards, verifying behavior.

But integration, like implementation, provided other benefits. It was a period of cognitive feedback. When you reviewed your own code before a merge, you were revisiting your own work with fresh eyes. You'd catch things you'd missed, notice patterns you'd repeated, and spot implementations that could have been clearer. Self-reflection was built into the workflow.

When others reviewed your code, you got the benefit of a different perspective, which is a key part of learning and growth. Integration was also a space for debate, process refinement, and team alignment.

While you may have felt pressure to get code reviewed, the act itself seldom created back pressure. Broadly speaking, review happened at the speed code was written, or faster, and thus was rarely a bottleneck. You could reflect, review, consider, and rethink. The cognitive equivalent of stretching after a long morning run.

Mode Collapse

AI-assisted development has not only changed the ratio of time developers spend in each of these modes, but the very nature of each mode and the back pressure involved.

Planning

Now that AI can theoretically implement features in minutes instead of days, organizations expect more output. That means more planning. More specs, more architectural decisions, more prompts. The pace of planning used to be gated by how long it took to build. That gate is gone, blown open by the runaway inference truck.

Developers are being asked to do the most cognitively demanding part of their job at a volume that was never required before. Planning at this velocity, without proportional rest via mode change, is exhausting in a way that's hard to articulate or even recognize.

Implementation

Implementation has undergone the biggest change. The flow state of building, the rhythm of small wins, the tactile satisfaction of writing code that works, these have been compressed into "prompt, wait, review."

The prompting itself is a form of planning, not building. The waiting is dead time, unless you subject yourself to constant context switching. Reviewing, in reality, is integration. The actual building, the part that let you live in a flow state for a while, is being done by something else.

What remains of implementation is supervisory. You're directing the work, not doing it. That's a fundamentally different cognitive experience than the one developers have been accustomed to, and I don't believe it provides the same restorative benefits.

Integration

Integration has also changed dramatically. When AI generates the code, integration becomes purely a review of output you didn't write. You lose the reflective quality of reviewing your own work, because it isn't your work.

Feedback from others reviewing your code no longer carries the same moments of learning and growth, because it's closer to studying examples in a book than to receiving feedback on work that reflects your own thought process and understanding.

Reviewing AI-generated code is also cognitively different from reviewing a teammate's code. With a teammate, you can often infer intent. You can ask questions. You learn their patterns.

With AI output, you're inspecting code that has no intent behind it. It's syntactically coherent and often well-structured. But there's no one on the other end who meant anything by it.

Integration becomes an audit rather than a dialogue. Audits are a necessary part of the process, but they're not enriching. They don't provide the same space for learning or improved team dynamics.

Worse still, developers spend far more time in integration than before. Integration by a human is slower than implementation by an AI, placing immediate back pressure on the process and making personal growth for the developer an even less likely outcome.

One-and-a-Half Modes

What developers are left with is roughly one-and-a-half modes.

Planning has expanded and intensified. Integration has been stripped of its relational and reflective qualities and reduced to pure verification. Implementation, the mode that provided cognitive reset and tangible momentum, has been automated and compressed.

The initial rush of getting something working quickly is real. Watching an AI generate a feature in minutes that would have taken you a day is exciting. But that excitement is being tempered by the reality of what comes after. You still have to plan the next thing, and the thing after that. You have to review all of it. And the actual building is a shadow of what it was. A prompt cycle or agent orchestration isn't flow. It's management.

In some ways, AI didn't remove the hard parts of development; it removed the cognitive modes that made the hard parts sustainable.

To give this a name, let's call it single-mode burnout. The exhaustion of spending most of your time in the most demanding modes of work, with bottleneck pressure and little of the recovery and reflection that the other modes used to provide.

Cognitive Impact

I don't believe we yet fully understand the long-term impacts of this mode collapse, especially in three areas:

Skill

Implementation was where developers deepened their craft. You learn the most when you're building. You develop intuition for how systems behave by constructing them yourself. That intuition and understanding are what allow a junior developer to become a senior, an architect, a manager.

Compressing implementation shrinks the primary mechanism by which developers improve. Over a span of years, what does that do to the skill of a developer workforce? What happens when the next generation of developers has spent most of their career planning and reviewing, but relatively little time building?

Motivation

The three-mode cycle provided natural variation in a workday. Hard thinking, then flow-state building, then reflective review. This variation sustains motivation and balances cognitive load.

Collapsing that into sustained planning and auditing is a recipe for a new kind of burnout, one that looks different from what we're used to identifying. It doesn't look like overwork. It looks like fatigue, apathy, laziness, and atrophy in the presence of output. You're getting more done and feeling worse about it. That's confusing, because social media tells us we should feel better about ourselves the more we produce.

To a degree, producing more does feel good. But if it comes at a cognitive cost, the feeling is not long-lasting.

Quality

Verification without the context of having built the thing yourself is harder and less reliable. Developers reviewing AI-generated code may miss issues that they would have caught if they'd written it themselves, because writing code builds a mental model of the system that reviewing code alone does not.

A good mental model helps you spot when something is subtly wrong. Without it, code review becomes a surface-level exercise, no matter how careful you are.

If you offload code review to another AI model, then you lose yet another opportunity to learn and grow. Short-term success at the cost of long-term skill.

Redesigning Cognitive Infrastructure

The three modes of development weren't just phases in a process; they were a cognitive infrastructure around which the software industry built experts. They provided rhythm and recovery. Opportunities for challenge, reflection, and growth.

We've disrupted that infrastructure, and we need to be intentional about what we put in its place.

It might mean reserving certain kinds of implementation for yourself, not because AI can't do them, but because you need the cognitive benefits of doing them.

It might mean teams rethinking what a healthy developer workflow looks like when the ratio of planning to building has inverted. Perhaps "implementation time" becomes something that's protected, and a backlog of AI-generated code to review is not considered a drop in velocity, but evidence of care.

At minimum, it means naming the problem. If you're an experienced developer and the work feels more draining than it used to, even though you're supposedly more productive, you aren't imagining it. The cognitive structure of your craft has changed.

AI-assisted development can be more exhausting than we are led to believe. I believe we can educate, innovate, and mentor ourselves and our teams into a new, better infrastructure. But first, as an industry, we need to acknowledge and consider the problem.

It's up to us to take care of ourselves, our cognitive load, and our future skill.