

A customer asked us to take over the further development of three mission-critical applications. For us, that's more or less a standard assignment, so we got to work enthusiastically. But it soon became clear that there was more to it...
The challenge becomes clear
The applications in question came from another party and were built with outdated technology, without any documentation.
Although the software ran in the (Google) cloud, it wasn't built on a modern stack there. The stack was also remarkably complex for the functionality the apps offered: it combined .NET, React and Blazor with three databases spread across Postgres and MySQL. We could only access it over an SSH connection (for the uninitiated: that meant typing commands into a text interface like the ones you see in movies from the '90s).
There were hardly any logs, and certainly no downtime alerts. So when an app was down (which happened quite often), we didn't hear about it until the customer called us. And because of the excruciatingly slow DTAP pipeline (development, test, acceptance, production), rolling out a hotfix or a new feature took two to three hours.
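To give an idea of what was missing: a basic health endpoint that a monitoring service can poll is often all it takes to hear about an outage before the customer does. The sketch below is purely illustrative ASP.NET Core code, not code from this project.

```csharp
// Purely illustrative: a minimal ASP.NET Core health endpoint of the kind
// the original applications lacked. Not code from this project.
var builder = WebApplication.CreateBuilder(args);

// Register the built-in health check services; a real app would add checks
// for its databases and other dependencies here.
builder.Services.AddHealthChecks();

var app = builder.Build();

// Expose /health so a monitoring tool (for example an Azure Monitor
// availability test) can poll it and raise a downtime alert.
app.MapHealthChecks("/health");

app.Run();
```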
So, in theory, we could have simply continued developing these apps. We could have invested our time in figuring out how it all worked technically. We could have sat sweating behind our SSH terminals, waiting for the white smoke that meant our deployment had succeeded. But we weren't in the mood for that. Because even with that investment of time, this would never become a maintainable, future-proof application. And in the end, after years of frustration, we would still have advised the customer to migrate to a better platform.
So it was better to bite the bullet right away.
The decision: we're going to Azure
Suppose you hire a technology partner to further develop your applications. You've chosen one of the best, so you think, "That'll be fine." And then, after a few weeks, they tell you they're not going to start. Or rather, that they first want to partially modernize your software and move it to Azure. You wouldn't exactly be thrilled. The thought "why should I convert an application that just works?" might well cross your mind. And we wouldn't blame you for that.
That is exactly how it went in this case. But our advice to modernize the applications first and migrate them to Azure was based on a thorough analysis, and that analysis allowed us to convince the customer to take this step after all.
How we went about it
At Blis Digital, we have a lot of experience with these kinds of processes (read this blog by Christian to learn more about it), so we knew broadly how to approach it. At the same time, you never know exactly what you're going to encounter. All the more reason to tackle it systematically:
- Updating packages. A crucial insight we gained right at the start was that many of the most heavily used packages needed an update. In most cases, the code that called those packages had to be modified as well.
- Creating Azure subscriptions and resources. We built the necessary infrastructure on Azure and made sure everything was properly configured for the migration. A big advantage was that the original applications already used Active Directory for sign-in, so we could reuse that entire configuration.
- Merging and deploying databases. Now we were able to move the applications over, but the data lived in different databases. We migrated them all to a single SQL Server database, using scripts to clean the data and verify that everything was correct and complete (see the sketch after this list). During this step, we discovered that the original databases contained a lot of corrupt data. Tracking down and fixing all of it was a lot of work, but data quality improved significantly as a result, which would greatly benefit the user experience later on.
- Testing, testing, testing, testing... Now we were ready to launch the applications on Azure, but when we did, we found a lot of bugs. Some were caused by (even more) corrupt data, but many code changes were also required. Remarkable, because the code looked fine at first glance; it wasn't until we tested the applications end to end, in conjunction with each other, that the errors surfaced. This was by far the most time-consuming phase of the project.
- Dry run on production. While we tested the final production environment, we allowed users to continue working on the original apps. That way, we were able to take the time to make sure everything was perfect without disrupting processes.
- Going live! The final switch to the new production environment went smoothly. Thanks to thorough preparation and testing, users hardly noticed the switch. A special moment: with some tension in your body, you flip the switch and wait for the users' calls... And they didn't come. Everyone worked in the new environment, and the verdict was: "It's faster and more stable. And the data is finally accurate."
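For those who are curious what the data verification in the database step looked like in practice, here is a strongly simplified sketch. The table names, columns and connection string are made up for illustration; the real scripts were tailored to the customer's schema and ran many more checks.

```csharp
// Strongly simplified, illustrative sketch of the kind of verification we ran
// after merging the source databases into one SQL Server database.
// Table names, columns and the connection string are made up.
using System;
using Microsoft.Data.SqlClient;

const string connectionString = "Server=...;Database=MergedDb;...";  // placeholder

using var connection = new SqlConnection(connectionString);
connection.Open();

// Check 1: every order must reference an existing customer (no orphaned rows).
var orphanedOrders = Count(connection, @"
    SELECT COUNT(*)
    FROM dbo.Orders o
    LEFT JOIN dbo.Customers c ON c.Id = o.CustomerId
    WHERE c.Id IS NULL;");

// Check 2: the number of migrated customers must match the source export.
var migratedCustomers = Count(connection, "SELECT COUNT(*) FROM dbo.Customers;");

Console.WriteLine($"Orphaned orders: {orphanedOrders}");
Console.WriteLine($"Migrated customers: {migratedCustomers}");

static int Count(SqlConnection conn, string sql)
{
    using var command = new SqlCommand(sql, conn);
    return Convert.ToInt32(command.ExecuteScalar());
}
```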
A success to be proud of. But to be honest, going live was actually the only step of the project that turned out easier than expected. In almost every other step, we ran into unexpected problems and delays.
The customer deserves a big compliment for continuing to trust us to see the project through. The reward: they now have a modern, scalable and fast software platform that is ready for new functionality and future developments.
Should we have done things differently?
One question that has been on my mind all along, and still occupies me, is: couldn't we have seen this coming sooner? Shouldn't we have carried out technical due diligence at the very beginning to identify all the risks in advance?
The answer is "yes and no". Of course, it would be great to always know in advance what you're going to encounter in a development job. That would make the planning, the quote and the conversation with the customer much easier. On the other hand, such an analysis also costs a lot of money and time, which means you start solving the actual problems later and with less budget.
And solving problems is what we're all about as developers. Oddly enough, it doesn't really matter that we don't know exactly how beforehand. We are used to coming up with creative solutions, and we always work in open, honest communication with our customers. Our customers, in turn, understand that there are limits to the predictability of software projects.
I was also reminded of what Christian wrote earlier on this blog: software is like a living garden. It is a complex system that consists not only of code and infrastructure, but also of people. By diving deep into this software, we've gained a thorough understanding of how it works and how to maintain and improve it in the future.
Did that mean we spent more hours on the project than planned? Yes. Did conversations, both internally and with the customer, occasionally get tough? Sure. But we have now built a relationship of trust and cooperation with the customer that, together with the new technical foundation, ensures we can move into the future together.