Stop writing requirements, start writing tests



After some interesting discussions at a customer over the last few days about using work items to track progress and capture requirements, I started to reflect on the experiences I've had in projects.
One of the classic problems has always been the requirements process. Are non-functional requirements any different from 'normal' requirements? How do you capture requirements and keep them up to date as the needs of your customer change over the course of the project?

The classic way of handling requirements is of course the big design up front. Especially in waterfall, everything is written down in such detail that an estimate can be given for the development cost (and hopefully also for the testing cost), and any new insight from the customer is handled as a change request. After the project has finished, of course. Expect one or more quite thick Word documents, hopefully with some diagrams, and start building from those.
When changes happen, the Word documents should be updated so they reflect the functionality of the new (changed) application.
When you work in a Scrum team, requirements are captured as you go: partially written down by the Product Owner, partially by the development team as they gather the details while creating the user story. If the definition of done indicates that the requirements need to be captured in a Word document or some other form, the team will also make sure this is up to date once they deliver the functionality. But we usually still end up with a heavy Word document.
But in a way this is odd, because the definition of done tells us when the Product Owner is happy with the work done and, as the representative of the users, when the users are happy with the functionality. So these are the requirements, although they are not placed in some document but in our definition of done. And how do we check that work is actually 'done'? Why, by testing it of course. Even in classic waterfall projects, all requirements need to be SMART, or have acceptance criteria linked to them that are SMART:

● Specific – have a specific goal.
● Measurable – which measurable/observable conditions must be met.
● Acceptable – specify who will accept it.
● Realistic – can these results realistically be achieved?
● Time-related – specify when the result(s) can be achieved.

The most important part of SMART, at least for the purpose of this article, is the Measurable part. Since we need an objective measurement to conclude that our solution meets the needs of the Product Owner and the users, we use tests. If all requirements are indeed SMART, that means we can test the entire solution to make sure that it meets those needs. So instead of writing and updating a huge Word document, we should really be writing down our requirements in the form of tests. This guarantees that we deliver the correct solution: even when we change a single line of code, we can still verify that all the requirements (and the definition of done) are met. At any point in time, the set of tests defines what the application should do.
So if the Product Owner asks for a change in a feature, this means a change in the tests and a change in the solution, so that all tests pass once more.
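
As a minimal sketch of what this can look like (the discount rule, the amounts and the function names below are hypothetical, not taken from a real project), a requirement such as "orders above 100 euro get a 10% discount" can be written down directly as executable tests:

```python
# Hypothetical example: the requirement "orders above 100 euro get a 10% discount"
# captured as executable tests instead of a paragraph in a Word document.

def calculate_total(order_amount: float) -> float:
    """Illustrative implementation of the discount rule."""
    if order_amount > 100:
        return order_amount * 0.9
    return order_amount


def test_order_above_100_euro_gets_10_percent_discount():
    # Given an order of 200 euro, the total should be 180 euro.
    assert calculate_total(200) == 180


def test_order_of_100_euro_or_less_gets_no_discount():
    assert calculate_total(100) == 100
```

If the Product Owner later decides the threshold or the percentage should change, the tests change first and the implementation follows, so the test suite keeps describing the current requirements.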
A remaining issue is to make sure that we not only keep the tests up to date, but also provide enough structure that the features of the solution remain understandable for people who just wish to be informed about how the application works. So if this is part of the definition of done, update the requirements document, user manual or whatever documentation needs the Product Owner and users have. A more elegant solution would be a tool that links the structure of the requirements to the tests to be run.

The advantage of specification by example is that the effect of a specific requirement is very readable, with the example providing both clarification to the reader and a concrete test case.
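
As a sketch of how specification by example can look in code (again with hypothetical names and amounts), the examples can be written as a table that serves both as documentation of the rule and as the source of the test cases:

```python
import pytest


def calculate_total(order_amount: float) -> float:
    """Same illustrative discount rule as in the earlier sketch."""
    return order_amount * 0.9 if order_amount > 100 else order_amount


# The table of examples documents the rule and drives the tests at the same time.
DISCOUNT_EXAMPLES = [
    # (order amount, expected total, description of the example)
    (50, 50, "small orders get no discount"),
    (100, 100, "an order of exactly 100 euro gets no discount"),
    (200, 180, "orders above 100 euro get a 10% discount"),
]


@pytest.mark.parametrize("order_amount, expected_total, description", DISCOUNT_EXAMPLES)
def test_discount_rule(order_amount, expected_total, description):
    assert calculate_total(order_amount) == expected_total, description
```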
(Previously published on LinkedIn, November 27, 2014)
