

I was part of one of the first classes of AI students at TU Delft. For the masses, artificial intelligence was pure science fiction at the time. For us, that handful of pioneers in Delft, it meant wrestling with complex mathematics and trying to capture human logic in machines. A field the outside world hardly took seriously.
Fast forward to today, and that lonely lecture hall has given way to a global gold rush. Everyone is 'doing something' with AI, and experts are springing up like mushrooms. Watching that explosion, I feel a mixture of pride and healthy skepticism. Because the hype carries dangers too.
To explain those dangers, let me take you back to how it all began.
The roots of AI date back to the 1950s
When visionaries like Alan Turing dreamed of thinking machines, the vision was born of making a computer work like a human brain. In the decades that followed, researchers worked hard to find ways to achieve that. The development of AI went in fits and starts: one moment scientists (and investors) jumped on the AI train en masse, the next their belief in AI sank to freezing point. Several "AI winters" came and went. But the field never froze over completely...
When I was in the middle of my AI studies around 2001, the AI world was divided
Researchers were still working hard toward that dream of artificial intelligence. But the enthusiasts were clearly divided into two camps:
“Algorithmics” camp
The vast majority of scientists believed that intelligence should be built primarily with programmed algorithms, logic, and decision trees, possibly supplemented with a bit of machine learning. Neural networks? The conviction was that they would never deliver broadly reliable results.
Neural networks camp
A small minority believed in the digital shadow of the brain. They believed that machines should not primarily follow rules, but should “learn” by recognizing patterns. This is the predecessor of what we now call Deep Learning.
For years, the 'algorithmics' camp won hands down. We simply lacked the computing power and data to let neural networks come into their own. A neural network back then was like a powerful engine without fuel: the theory was brilliant, but the hardware was hopelessly underpowered and data sets were scarce. Yet, through all those years, I continued to believe in the power AI would one day show us.
That ratio has now completely reversed: where it used to be ninety percent algorithmics with a touch of machine learning, the balance has shifted toward machine learning, with traditional algorithms used only where necessary.
At Blis Digital, we believed in the power of AI from early on
Of course, I brought my enthusiasm for AI to Blis Digital. We were constantly looking for ways to use it. That was not always immediately successful. After all, pioneering also means sometimes falling flat on your face.
Our most important lesson? That the gap between a technically working prototype and a valuable product is often wider than expected.
For example, about ten years ago, we built a system for analyzing financial laws and regulations for a large consulting firm. The vision was good: supporting experts with digital speed. But the AI models of that time could only process a few sentences at a time, while legal documents span dozens of pages. We had to manually cut documents into small pieces to get them through the model at all. Technically, the solution was impressive, but it was not commercially viable.
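That manual chunking can be sketched roughly like this. This is a minimal illustration, not our original code; the sentence-based split and the chunk size are assumptions for the example:

```python
import re

def chunk_by_sentences(text, max_sentences=3):
    """Split a long document into pieces small enough for a
    limited-context model.

    The crude split on ., ! and ? stands in for the manual
    cutting we did back then; real legal text needs a smarter
    sentence tokenizer.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return [
        " ".join(sentences[i:i + max_sentences])
        for i in range(0, len(sentences), max_sentences)
    ]

# Hypothetical fragment standing in for a regulatory document.
document = (
    "Article 1 applies to credit institutions. "
    "Article 2 defines capital requirements. "
    "Article 3 covers reporting duties. "
    "Article 4 lists the exemptions."
)
chunks = chunk_by_sentences(document, max_sentences=2)
# Each chunk now fits within the model's tiny input window,
# and each must be fed to the model separately.
```

The catch, of course, was exactly what made it commercially unviable: every chunk loses the context of the surrounding pages.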
We ran into a completely different barrier in a project for a large entertainment organization, where we implemented facial recognition at the entrance. Here, the technology was not the problem. The system worked flawlessly, but we ran up against the walls of social acceptance and legislation. The world wasn't ready for smart AI solutions yet.
You could call them failures, but I see it differently. As far as I'm concerned, these were necessary experiments that taught us that innovation isn't just about writing smart code. The context is just as important.
AI is now at a completely different but also dangerous point
Years later, the 'fuel' is finally here. Computing power and data are now available in abundance, and the results of AI are accordingly amazing. After years of waiting, the theoretical technology and the power of the AI engine have come together. In no time, you can develop solutions that used to take days, weeks, or even months.
But that's exactly where the danger lies. We already learned it in those early projects: the output AI gives you is not enough on its own; context is everything. And the current hype often overlooks that.
I call this 'the magic wand illusion'
Because tools like ChatGPT seem so amazingly human and intelligent, many companies stare at the quick solution AI provides. But they forget that necessary context and assume the machine will simply take over the entire thinking process.
Through trial and error, we now know better: an AI model, no matter how impressive, does not understand the deeper logic and nuances of your specific business process. Anyone who uses AI purely as a magic wand, without that context, is building on quicksand.
AI is not the solution in itself, but an incredibly powerful engine in a bigger picture. An engine with enormous power is useless without a steering wheel, brakes, and gearbox around it. The AI engine provides the power, but at Blis Digital, we never simply let it loose in an organization.
People will remain an indispensable link, even in 10 years
My vision for the future? Right now, we're in the middle of the generative AI revolution, and we're far from reaching the ceiling. The models are becoming increasingly powerful, the technology more sophisticated, and the applications broader. But the fear that people will become obsolete? I don't share it.
In fact, in a world of raw computing power, the human ability to see context, maintain ethics, and ask the right questions is only becoming more important. This is true now, but also in 10 years.
AI is the engine, but we remain the driver.






