While developing some HTML pages for one of my projects, I started to think about requirements. I was doing TDD with the HTML engine (thanks to my XSLT-based solution) and got to thinking about regression testing and requirements. I have never liked requirements docs, but I also see the need for them. Low-level requirements docs tend to be tedious: they provide lots of detail, but in a way that makes them unusable, so developers tend to gloss over them. And although low-level requirements are usually not written by technical people, they tend to dictate parts of the implementation, which can be problematic. Agile tries to tackle some of these issues to make the initial development better, but requirements tend to fall by the wayside afterwards...mostly because requirements docs are horrible.
I was also thinking about all the unit tests that I have written. New developers on a project may not understand why a particular unit test exists. You can try to name your unit tests better and comment them, but the big picture is often lost. You can add a comment that lists the name of the requirement being implemented, but requirements are often written in MS Word using a numbered outline, which causes requirement numbers to change whenever a new requirement is added. I started to think of a better way of handling this.
First off, imagine having a requirements xml file that is coupled with the item being developed and tested. For example, say you have a webpage that is being generated. In my system, that starts with MyPage.xsl. You have beans and logic classes that are coupled with it: MyPageBean.java and MyPageBeanFactory.java. You have a unit test for the page that mocks out the midtier and generates your HTML: TestMyPage.java. Now, add another file to the mix: MyPage.rqdx (requirements document xml). Inside of MyPage.rqdx, you specify all the requirements for MyPage.html. This should include the requirements for what should show up in the generated MyPage.html file, as well as the client-side interactions on that page. Each requirement in that xml file gets a unique name. You can even put metadata on the requirement, like when it was requested, when it was implemented, who requested it, maybe even cost-to-implement information. A requirement can also link to another requirement, since sometimes requirement B is only needed because of requirement A. This happens when B is the low-level requirement (MyPage should have this text as a header) for a high-level requirement A (you should have a page called MyPage).
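To make that concrete, here is a rough sketch of what MyPage.rqdx might look like. The element and attribute names (requirement, status, because, and the metadata fields) are all my own invention for illustration; the only hard rules from above are that every requirement gets a unique name and can link to the requirement that motivated it.

    <?xml version="1.0" encoding="UTF-8"?>
    <requirements page="MyPage.html">
      <requirement name="myPageExists" status="implemented">
        <description>You should have a page called MyPage.</description>
        <requestedBy>some.stakeholder</requestedBy>
        <requestedOn>YYYY-MM-DD</requestedOn>
      </requirement>
      <!-- A low-level requirement that only exists because of a high-level one. -->
      <requirement name="myPageHeader" status="implemented" because="myPageExists">
        <description>MyPage should have this text as a header.</description>
      </requirement>
      <requirement name="myPageSortableTable" status="unimplemented">
        <description>Clicking a column header sorts the results table.</description>
      </requirement>
    </requirements>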
Now, imagine having Java annotations that you can use inside of MyPageBeanFactory.java and TestMyPage.java. You have an annotation, @ReqImpl(reqs="req1,req2"), that you use to mark methods/blocks that implement a particular requirement, and an annotation, @ReqTests(reqs="req1,req2"), that you use to mark unit tests that test a particular requirement. Code, tests and requirements are now linked together.
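A minimal sketch of what those annotations could look like, assuming runtime retention so tools can find them via reflection. The requirement names reference the hypothetical MyPage.rqdx above, and the bean/factory bodies are placeholders:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Marks code that implements one or more named requirements.
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD, ElementType.TYPE})
    @interface ReqImpl { String reqs(); }

    // Marks unit tests that exercise one or more named requirements.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface ReqTests { String reqs(); }

    class MyPageBean { String header = "My Page"; }

    class MyPageBeanFactory {
        // Ties this code back to requirements declared in MyPage.rqdx.
        @ReqImpl(reqs = "myPageExists,myPageHeader")
        MyPageBean create() { return new MyPageBean(); }
    }

    class TestMyPage {
        // In a real project this would be a JUnit test; kept bare here.
        @ReqTests(reqs = "myPageHeader")
        void testHeaderText() {
            if (!"My Page".equals(new MyPageBeanFactory().create().header)) {
                throw new AssertionError("header requirement violated");
            }
        }
    }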
When you put this all together, a few things now become possible. First, it is much easier for new developers to find out why code was implemented in the first place: they can follow the annotations back to the requirements xml. Requirements are now part of your revision control system, so you can see your requirements change over time, just like your code does. You can now translate a failed unit test to the requirement that is impacted. Tools can be written that calculate "requirement coverage" as opposed to code coverage: the percentage of requirements that have at least one test. A big advantage is code retirement. Since you can search for all code that is tied to a specific requirement, you can find everything that can be removed when that requirement is dropped. With requirements in revision control, you can create a report of new requirements added per release of your software, just by diffing all the xml files for the current release against the previous release.
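As a sketch of the "requirement coverage" idea, reusing the hypothetical ReqTests annotation and TestMyPage class from the previous snippet, a tool could reflect over the test classes and compare the requirement names it finds against the names declared in the rqdx file:

    import java.lang.reflect.Method;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    // Requirement coverage: the fraction of requirement names (taken from
    // the rqdx file) that appear in at least one @ReqTests annotation.
    class ReqCoverage {
        static double coverage(Set<String> allReqs, Class<?>... testClasses) {
            Set<String> tested = new HashSet<>();
            for (Class<?> c : testClasses) {
                for (Method m : c.getDeclaredMethods()) {
                    ReqTests t = m.getAnnotation(ReqTests.class);
                    if (t != null) {
                        tested.addAll(Arrays.asList(t.reqs().split(",")));
                    }
                }
            }
            tested.retainAll(allReqs);
            return allReqs.isEmpty() ? 1.0 : (double) tested.size() / allReqs.size();
        }

        public static void main(String[] args) {
            // In a real tool these names would be parsed out of MyPage.rqdx.
            Set<String> reqs = new HashSet<>(Arrays.asList("myPageExists", "myPageHeader"));
            System.out.printf("requirement coverage: %.0f%%%n",
                    100 * coverage(reqs, TestMyPage.class));
        }
    }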
Another change that could be interesting for Agile teams is adding unimplemented requirements. When a new requirement is identified, you can add it to the xml file right away, where it sits as an unimplemented requirement. You can write a tool that calculates the number of unimplemented requirements, just as you can count untested ones. As developers implement requirements, you can build a "burn down" chart of requirements remaining. At the end of every sprint, you have the number of requirements added, removed, implemented and tested.
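A sketch of that end-of-sprint report, assuming the hypothetical status attribute from the MyPage.rqdx example above:

    import java.io.File;
    import java.util.Map;
    import java.util.TreeMap;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Counts requirements in MyPage.rqdx by the assumed "status" attribute,
    // giving the sprint numbers: unimplemented vs. implemented vs. tested.
    class SprintReport {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new File("MyPage.rqdx"));
            NodeList reqs = doc.getElementsByTagName("requirement");
            Map<String, Integer> byStatus = new TreeMap<>();
            for (int i = 0; i < reqs.getLength(); i++) {
                Element req = (Element) reqs.item(i);
                byStatus.merge(req.getAttribute("status"), 1, Integer::sum);
            }
            byStatus.forEach((status, count) ->
                    System.out.println(status + ": " + count));
        }
    }

Diff the output of two releases and you have the added/removed/implemented report for free.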
Overall, this idea could help make a piece of software more maintainable. Tooling could make the management of requirements much more useful. Developers would be able to read and implement requirements more easily, since they only see the requirements that are relevant. Project managers can see the progress of implementing requirements. Release managers could easily identify what new features are available in a release. Tying code to requirements makes the maintenance of that code much better.
As a whole, the core of this solution doesn't seem like it would be too hard to implement. The tooling would take the most time, but I think different organizations would want different tooling. I really wish I had more time to run with this.