

That day had been an extremely productive day at the office. Using AI, I had produced more code than I would have in a week five years ago. I was “more efficient” than ever. But as I stood there on the track, I realized what I had actually been feeling for a long time: my brain had not grown with the speed of my output. I had packed days of thought into a few hours, and now the bill was due.
That night on the track was a wake-up call. With AI we go five times faster, but that speed also does something to our brains.
The death of “simmering time”
The problem lies in what I call “simmering time”. Back in the day, when I was solving a complicated problem as a developer, I spent hours typing, debugging, and rewriting by hand. That sometimes felt slow, but the slowness had an important function. It was the time in which information trickled from my short-term memory into my long-term memory, the time in which the logic simmered in the back of my head. Even when I sat in the car at night, that information kept quietly simmering. The next day I had a fresh perspective and could come up with an innovative solution.
With AI, that natural brake is completely gone. The process is now: enter the prompt, receive a block of code, briefly scan whether it works, and on to the next task. The simmering time is skipped in favor of the result. The effect is that at the end of the day you have produced a huge amount of work without your brain having stored any of it, even though the work itself did get done.
The director's role consumes more energy than you think
It's often said that AI makes our work easier because we no longer have to type boring lines of code. But in reality, the load has shifted from “doing” to “assessing”. This forces developers into a new, mentally much heavier role: that of the director.
When an AI model generates a solution in three seconds, the foundation of thought that normally preceded it is missing. We are forced into a state of constant high alert, a kind of high-intensity review mode that never lets up. Is this safe? Does this fit the rest of the architecture? Do I really understand what's here, or am I blindly trusting the suggestion?
That constant state of distrust consumes energy. It's the difference between taking a familiar walk through the woods, where you relax and see everything, and checking from a fighter jet at top speed that every tree is in the right place. You go faster, but the focus needed not to make a mess is exhausting. You save time up front, but you pay it straight back from your mental battery.
Why decelerating is sometimes the smartest strategy
But how do we, as developers, deal with this without burning out? As far as I'm concerned, we need to revise our definition of a “productive day” as a sector. We have to accept that a working day spent intensively directing AI is mentally shorter than a traditional day of writing code. You simply can't spend eight hours a day in hyper-focused review mode without sacrificing quality.
What does work is organizing your thinking differently. I now consciously schedule blocks of time in which I don't just review at the detail level, but also think at a strategic level:
- Setting the course: where are we going with this project?
- Architecture choices: which solution fits this problem?
- Coaching: guiding other developers in their process.
Because you take a helicopter view during these hours, you don't have to dive into the tiring details. You don't have to be razor-sharp every minute to still deliver enormous value. Since I started combining these modes, I've noticed I have energy left for a game of tennis at the end of the day.
A perfect prompt as the new simmering time
Another method that helps me is investing more time in a better prompt. The temptation is to quickly type something, see what comes out, and then start refining endlessly. But it is precisely that “ping-pong” with the AI that costs mountains of energy.
I do things differently now. I sometimes spend 20 to 30 minutes thinking about a single prompt. At first that felt awkward; after all, I was sitting at my screen “doing nothing”. But it turns out that by taking that time, I force myself to first solve the problem completely in my own head. I'm reintroducing the simmering time at the front of the process. The result is not only better output from the AI, but also a calmer head, because I took control instead of letting the machine guess.
Humans as the most important filter
The lesson is simple. Human slowness is not a bug to be engineered away with ever more AI, but precisely the filter that determines whether software is right or wrong. AI will always win on speed, but it has no idea why we build what we build.
Real expertise in the AI era therefore lies not in how fast you can type a prompt, but in the ability to stay in control: to understand what you're making, why you're making it, and when to intervene. Because if we're not careful, we'll end up in a world where we merely draw lines between systems we no longer understand ourselves.
By consciously protecting that simmering time and taking our mental battery seriously, we keep doing what we, as humans, actually do best: giving direction, making choices, and pulling the brake when we have to.
In a world that keeps accelerating thanks to AI, sharp, well-rested human judgment is no luxury. It's your most important competitive advantage.





