

Monday morning, 9 a.m.
A new project is starting. I wasn't on the original kick-off invite, but fortunately the lead developer thinks at the last minute of inviting the tester too. It's going to be a good start with great people. Exactly what gives me energy!
The requirements for the project were collected by our business analysts, and during the kick-off we mainly get to see tree structures and titles. A quick look at the backlog shows me that it still holds many open questions. Not everything has acceptance criteria yet, either. Around me, I see my colleagues patiently watching the presentation. They face the future without worries, as if they were about to assemble an IKEA cabinet...
First week
The team works hard in the first week, but as a tester I can't do much yet. We work in 2-week sprints, so after the first sprint is completed - while the developers are working on the functionality of sprint 2 - I can test the release of sprint 1. Instead of waiting, I already prepare tests during this sprint. I get quick answers to the questions I ask, and I can even occasionally save someone from a missed requirement. A quick consultation with the customer leads to a more complete feature, so we don't have to fix it later via a finding.
When is a feature 'finished'?
After the sprint 1 demo, I can really start testing. The user stories that have been built are moved to “done”. But... if they haven't been tested yet, are they 'finished'? I jot down a point for the retrospective to see if we can streamline that process, and quickly continue testing. Despite a lot of discussion during sprint 1, I still find quite a few things that don't seem right.
The findings split roughly 50/50 between bugs and items that lead to additions to the requirements. For example, a date field without validation is not a showstopper, but if it accepts all kinds of formats as well as plain text, you will have an unworkable dataset in no time. So that goes on my list.
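As an illustration of that date-field finding (a sketch of my own, not the project's actual code), strict parsing against one agreed format keeps free text and ambiguous notations out of the dataset:

```python
from datetime import datetime

def is_valid_date(value: str) -> bool:
    """Accept only one agreed format (here ISO 8601, YYYY-MM-DD);
    reject free text and any other date notation."""
    try:
        datetime.strptime(value.strip(), "%Y-%m-%d")
        return True
    except (ValueError, AttributeError):
        return False
```

With this in place, "2024-03-01" passes, while "01/03/2024" and "next Tuesday" are rejected instead of silently polluting the data.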
We agreed to add the findings to sprint 2, so its scope quickly grows. On top of the planned activities, the developers now also have to work through my findings!
Delay due to ever-growing sprints
The next sprint goes the same way. Pressed by the planned features and my additions, the developers are hard at work. The new features produce new findings, and the fixed bugs occasionally reveal a new problem. So instead of fewer findings, more end up on the sprint.
I consult with the lead developer. We're not delivering new functionality fast enough, but the findings are too important to skip. What now? I understand the problem and partly blame the requirements: if the preconditions for building software are not clear enough, things get forgotten, and forgotten things turn into findings. We decide to spend a little more time on refinement, so that requirements arrive in the sprint better worked out.
The test server goes down
A new day. The test environment is not updated automatically, but when I need a new build, I can do a release myself. That way I can always work with the latest version. Around half past five in the afternoon, I do a new release so I can check a few more fixes from that afternoon.
Crack.
The test environment does nothing anymore. Sucks, but better now than tomorrow, just before the daily stand-up. I leave a message in the team chat, hoping an early bird will take a look at it tomorrow morning. To my joy, I see the lead developer taking time to investigate. Together we try out different browsers, and after an hour he has it: the problem is specific to Chrome. We fix it quickly.
Halfway through the project: taking stock
Halfway through the project, we take stock. We clearly deliver results that work, and the customer's acceptance tests show few findings. I am proud of the features we have delivered so far. They are complete and work well. On top of that, we made improvements that weren't in the original plan. We weren't able to deliver all the planned features, but I'd rather make something good. If meeting the schedule leads to mediocrity, I'd rather let the planning go.
During the retrospective some pain points do surface. The developers think some of the findings they had to fix were far-fetched. “We all understand that hackers sometimes play weird jokes,” is the message.
“But we don't expect a user to intentionally paste a text file of a few million characters into a field.”
I understand the complaints, but I also explain that these exotic tests contribute to the robustness of the product. We agree that I will provide a standard describing what the various elements must meet in areas such as SQL injection and validation. That way we can lay down a checklist that we don't need to repeat in every user story.
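For the SQL injection item on such a checklist, the classic guard is the parameterized query. A minimal sketch of my own (using Python's built-in sqlite3 module; the project's actual stack may differ):

```python
import sqlite3

def find_user(conn: sqlite3.Connection, name: str):
    # The user input is bound as a parameter, never concatenated into
    # the SQL string, so input like "x'; DROP TABLE users; --" is
    # treated as plain data and simply matches nothing.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()
```

A checklist line like "all queries use bound parameters, never string concatenation" is quick to verify in review and covers every user story at once.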
The evaluation also has a positive side. In recent sprints, we have noticed that we need each other. Without the developers, my checks would take much longer: they know exactly where I can find things. And thanks to the early consultations, we were able to prevent bugs.
Optimal cooperation = finding out what “perfect enough” is together
We agree to free up more time in the coming sprints to fix bugs, and to slow down on planning new functionality. In addition, in consultation with the customer, we make sure that not every finding goes straight into the sprint. We don't have to solve every problem the moment it's found. Even though I want a perfect application, it's important to see the difference between improvements and real problems.
My colleagues build great features, and that work has to continue. Sometimes that's best done with known issues: it's better to show something beautiful with a known problem than to show nothing because it's not perfect enough yet.