Maven has a single-threaded pipeline. When you tell Maven to run the test phase, it runs the compile phase and then the test phase. There is only one test phase, no matter how many kinds of tests you want to run. I tend to write unit and integration tests, and there are also static analysis tools that take a while to run. With a single-threaded pipeline, all of these checks run sequentially, and if any one of them fails, the entire "build" fails. This is what a lot of people want: immediate feedback about problems.
As your code base gets more complicated, so do your unit and integration tests. Then someone decides it would be a great idea to integrate a real application server into your build process. Now your war file gets deployed to a Tomcat server. You wait for the Tomcat server to start, then you run a suite of REST calls, or maybe a browser simulator. All of this happens as part of the build process. If a webpage fails to render, you want to know as fast as possible. With all of your tests running sequentially, your build now takes 3 hours. 3 hours is not immediate feedback.
I'm sure the Maven people can come up with some complicated way of having multiple pom files and Jenkins jobs that kick each other off. To me, the right answer is a multi-threaded build pipeline that supports distributed phases. In my opinion, the first build phase should only build your code. If the build succeeds, that code should be uploaded to an artifact repository. If any testing is done during the build phase, it should be unit tests only: no integration tests and no static analysis. If the code will not build and is not self-consistent, it is useless. If it does build and is self-consistent, upload it to the artifact repository. If you use Maven, this might not sit well with you. Once you upload the artifact, every project that depends on that version is now using it, because the -SNAPSHOT pointer will return the code you just uploaded. We also established already that locked snapshots don't work very well. Why would you upload untested code to your artifact repository? At the bottom of the snapshot vs milestone article, I talk about how milestones should be pointers.
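To make that concrete, here is a minimal Java sketch of the pointer idea. The class, method, and pointer names are all hypothetical, not any real repository API: the repository maps named pointers to concrete, immutable build IDs, so a downstream build resolves exactly the artifact that passed the stage the pointer represents, instead of whatever -SNAPSHOT happened to return last.

    import java.util.Map;
    import java.util.Optional;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch: named pointers resolve to concrete, immutable
    // build IDs instead of a single mutable -SNAPSHOT version.
    class ArtifactRepository {
        // pointer name -> concrete artifact version, e.g.
        // "myapp-LATEST_BUILT" -> "myapp-1.4.0-build-271"
        private final Map<String, String> pointers = new ConcurrentHashMap<>();

        // Called after a stage succeeds. Only the pointer moves; the
        // concrete artifact never changes once uploaded.
        void updatePointer(String pointer, String concreteVersion) {
            pointers.put(pointer, concreteVersion);
        }

        // Downstream builds resolve a pointer to a concrete version, so they
        // only ever see artifacts that passed the stage the pointer names.
        Optional<String> resolve(String pointer) {
            return Optional.ofNullable(pointers.get(pointer));
        }

        public static void main(String[] args) {
            ArtifactRepository repo = new ArtifactRepository();
            repo.updatePointer("myapp-LATEST_BUILT", "myapp-1.4.0-build-271");
            System.out.println(repo.resolve("myapp-LATEST_BUILT"));      // the fresh build
            System.out.println(repo.resolve("myapp-LATEST_INT_TESTED")); // empty: not integration tested yet
        }
    }

Nothing downstream ever depends on a mutable version string; promoting an artifact is just moving a pointer.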
First, the newly created artifact should get a pointer name similar to -LATEST_BUILT. This pointer points to the latest code that has built successfully. Once that artifact is uploaded, its unique name should be put into a set of queues, one for each type of parallel testing you want performed. You can have a queue for integration tests, a queue for static analysis tools, and a queue for testing your application within Tomcat. At the end of each successful run of a sub-build task, a new pointer is created or updated: for integration tests, it's -LATEST_INT_TESTED; for static analysis tools, it can be -LATEST_ANALYZED. If any sub-build task fails, the pointer isn't moved and someone is notified. The build flow defines a rule for what an overall successful build requires. You can define success as requiring every sub-build task to complete, or you can split sub-build tasks into two categories: the ones that are required for an overall success, and the ones that are optional.
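Here is a rough sketch of the fan-out itself, again with hypothetical names (and assuming Java 16+ for the record syntax): one queue per sub-build task, each flagged as required or optional for overall success.

    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical sketch of the fan-out: after an artifact is uploaded
    // under -LATEST_BUILT, its unique name is pushed onto one queue per
    // sub-build task. Each worker pool drains its own queue independently.
    class BuildFanOut {
        // A sub-build task: its work queue, plus whether a failure
        // vetoes the overall-success rule.
        record Task(BlockingQueue<String> queue, boolean requiredForOverallSuccess) {}

        private final Map<String, Task> tasks = Map.of(
                "integration-tests", new Task(new LinkedBlockingQueue<>(), true),
                "static-analysis",   new Task(new LinkedBlockingQueue<>(), false),
                "tomcat-rest-suite", new Task(new LinkedBlockingQueue<>(), true));

        // Called once the artifact is uploaded and -LATEST_BUILT is updated.
        void dispatch(String uniqueArtifactName) throws InterruptedException {
            for (Task task : tasks.values()) {
                task.queue().put(uniqueArtifactName);
            }
        }
    }

An optional task such as static analysis can then fail and notify someone without vetoing the overall-success rule.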
This setup has a few advantages over the single-threaded pipeline. First, integration tests and static analysis tools can now run in parallel. Second, you can fire off the next round of integration tests while you are still fixing a static analysis issue. Forgetting to remove an unused import shouldn't cut short your rounds of integration tests; you can rebuild and integration-test your war file while fixing your imports. Third, your build pipeline can now support multiple platforms. If you are writing a desktop Java application, you can have daemons running on Windows, Linux, and Mac that run the test suite on each platform. If you are writing an Android application, you can have a set of queues that simulate each phone. Even better, you can have a lab of phones plugged into computers. Each time you run a build, the test suite is executed on the actual phones!
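A platform daemon could be little more than a loop that drains its queue, runs the platform's suite, and moves a platform-specific pointer. This sketch reuses the hypothetical ArtifactRepository from above; runTestSuite and notifySomeone are stubs standing in for the real test harness and alerting.

    import java.util.concurrent.BlockingQueue;

    // Hypothetical sketch of a per-platform worker daemon (one on Windows,
    // one on Linux, one on Mac, or one per phone in a device lab).
    class PlatformTestDaemon implements Runnable {
        private final String platform;              // e.g. "linux", "pixel-emulator"
        private final BlockingQueue<String> queue;  // unique artifact names to test
        private final ArtifactRepository repo;      // from the earlier sketch

        PlatformTestDaemon(String platform, BlockingQueue<String> queue, ArtifactRepository repo) {
            this.platform = platform;
            this.queue = queue;
            this.repo = repo;
        }

        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    String artifact = queue.take();        // block until a build arrives
                    if (runTestSuite(artifact)) {
                        // Success: move this platform's pointer to the tested build.
                        repo.updatePointer("myapp-LATEST_TESTED_" + platform.toUpperCase(), artifact);
                    } else {
                        notifySomeone(platform, artifact); // failure: pointer stays put
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

        private boolean runTestSuite(String artifact) { return true; }   // stub
        private void notifySomeone(String platform, String artifact) {}  // stub
    }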
Supporting multiple build tasks that run independently of each other could open up a lot of possibilities. A setup like this could facilitate a much larger automated test suite: you could run more tests in less time.