

From separate prompts to agentic workflow
A year earlier, I was still prompting in ChatGPT myself. Great for small tasks, but with little impact on real work. The difference came when we started thinking in workflows. Not: how can I use AI here and there to do something faster? But: how do I organize the work so that AI structurally takes over part of it? That's where the transformation started.
We learned how to divide work into pieces that you can hand to an AI. How to provide clear constraints. How to let AI build within the "guardrails" that you design yourself. And above all: how to check the output and integrate it into a larger whole.
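To make this concrete, here is a minimal sketch of what such a hand-off could look like in code. The structure and field names are hypothetical, not our actual format: a task brief that bundles the goal, the context the agent may use, the guardrails you design, and the acceptance criteria for checking the output.

```python
from dataclasses import dataclass, field

@dataclass
class AgentTask:
    """Hypothetical task brief: one unit of work small enough to hand to an AI."""
    goal: str                      # what the agent should produce
    context: list[str]             # files or docs the agent may read
    guardrails: list[str]          # hard constraints designed by the engineer
    acceptance: list[str] = field(default_factory=list)  # how the output is checked

# Example brief (all names illustrative):
task = AgentTask(
    goal="Generate unit tests for the invoice parser",
    context=["src/invoice_parser.py"],
    guardrails=["do not modify production code", "use pytest only"],
    acceptance=["all generated tests pass", "parser coverage above 80%"],
)
```

The point of the structure is that every piece of work handed to an AI carries its own constraints and its own definition of done, so checking the output stays cheap.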
We also built internal agents that could perform tasks: test generation, documentation, logging modules... No longer separate tools, but parts of our development process. In doing so, we went from Level 2 to Level 3 in our AI adoption framework. And some colleagues even reached Level 4.
Case study: cleverly splitting a component
In a customer project, we worked on a complex feature with a tight deadline. The lead developer, a Level 3 engineer, decided to split the work: he kept the core algorithm himself and had an AI agent build everything around it. That left him free to focus on the unique, valuable business logic, while the agent built the validation, UI, error handling, and more. Within a day and a half, there was a working version.
That's not a trick; it's a different way of thinking. AI-first means you don't start with code, but with the question: "What can the AI do, and where do I need to take over?" This way of working requires experience, sharpness, and guts. But it pays off in speed and quality.
Accelerate testing with AI support
We have also taken significant steps in software testing. One of our test leads uses an LLM agent as an automatic tester. It simulates user behavior, runs hundreds of test scenarios, and analyzes the logs for anomalies. That saves hours of repetitive work and provides insight faster.
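A minimal sketch of that loop, under stated assumptions: the agent is exposed as a callable that takes a scenario description and returns log lines, and the anomaly check here is a deliberately simple rule. All names are hypothetical; a real setup would call an LLM where the stub sits.

```python
from typing import Callable

# Assumed agent interface: scenario description in, simulated log lines out.
Agent = Callable[[str], list[str]]

def run_scenarios(agent: Agent, scenarios: list[str]) -> dict[str, list[str]]:
    """Run each scenario through the agent and flag anomalous log lines."""
    anomalies: dict[str, list[str]] = {}
    for scenario in scenarios:
        logs = agent(scenario)
        # Toy anomaly rule for this sketch: flag errors and timeouts.
        flagged = [line for line in logs
                   if "ERROR" in line or "timeout" in line.lower()]
        if flagged:
            anomalies[scenario] = flagged
    return anomalies

# Stub agent for demonstration; in practice an LLM would simulate the user here.
def stub_agent(scenario: str) -> list[str]:
    if "checkout" in scenario:
        return ["INFO login ok", "ERROR payment declined"]
    return ["INFO login ok", "INFO page rendered"]
```

Running hundreds of scenarios then becomes a loop over descriptions rather than hand-written test cases, with the human reviewing the flagged anomalies instead of every run.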
Of course, human validation remains necessary, but the role of the tester is changing fundamentally: from doing the testing yourself to designing and monitoring test processes. And that fits perfectly with AI-first thinking: you design the collaboration with AI instead of doing the work yourself.
Product management with AI as sparring partner
It's not just developers and testers who benefit. Recently, one of our product managers worked on an idea for credential lifecycle automation with a major customer. Instead of whiteboarding and wireframing, he asked an LLM, with a single Deep Research prompt, to produce a complete design proposal.
Half an hour later, he had a complete document with user stories, flow charts, API designs, security considerations, and a comparison of solution directions. Of course, it still needed polishing. But the foundation was there, thought through faster and more broadly than if he had worked it out himself. From such a starting point, a human specialist can continue with a major head start. So AI isn't just an executor; it's also conceptually strong, if you know how to direct it properly.
Learning by doing — and through mistakes
Of course, we also made mistakes along the way. Sometimes the cost got away from us: engineers burned through hundreds of euros in AI tokens unnoticed, without producing anything. Sometimes quality was the problem: AI-generated code proved useless because the context had not been provided properly. But that's part of it. It is precisely by trying that you learn how to organize it better next time. AI-first work isn't something you learn from a book. You have to go through it, with all the confusion and frustration that comes with it.
But also the wow-moments.
This is the last part in a five-part series about AI-first working in software development. In the white paper 'The foundation of an AI-first company' you'll read about the framework we used to make Blis Digital AI-first and what we're currently using to make our customers AI-first.





