Monitoring and Improving the Health of your Tests
One of the important (and often controversial) features of Maven is the emphasis on testing as part of the production of your code. In the build life cycle defined in Chapter 2, you saw that tests are run before the packaging of the library or application for distribution, based on the theory that you shouldn’t even try to use something before it has been tested. There are additional testing stages that can occur after the packaging step to verify that the assembled package works under other circumstances.
As you learned in section 6.2 Setting Up the Project Web Site, it is easy to add a report to the Web site that shows the results of the tests that have been run. While the default Surefire configuration fails the build if the tests fail, the report (run either on its own, or as part of the site), will ignore these failures when generated to show the current test state. Failing the build is still recommended – but the report allows you to provide a better visual representation of the results. In addition to that, it can be a useful report for demonstrating the number of tests available and the time it takes to run certain tests for a package.
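For reference, the test results report described above is typically added to the site via the Surefire report plugin. A minimal sketch of the reporting section (plugin coordinates as published by the Maven project; verify the version against the plugin documentation):

```xml
<reporting>
  <plugins>
    <!-- generates the Surefire test results report as part of the site -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-report-plugin</artifactId>
    </plugin>
  </plugins>
</reporting>
```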
Knowing whether your tests pass is an obvious and important assessment of their health. Another critical technique is to determine how much of your source code is covered by the test execution. At the time of writing, for assessing coverage, Cobertura is the open source tool best integrated with Maven. While you are writing your tests, using this report on a regular basis can be very helpful in spotting any holes in the test plan.
To see what Cobertura is able to report, run mvn cobertura:cobertura in the proficio-core directory of the sample application. Figure 6-10 shows an example of the resulting output.
The report contains both an overall summary and a line-by-line coverage analysis of each source file, presented in the familiar Javadoc-style framed layout. Within a source file, you'll notice the following markings:
- Unmarked lines are those that do not have any executable code associated with them. This includes method and class declarations, comments and white space.
- Each line with an executable statement has a number in the second column that indicates how many times that statement was executed during the test run.
- Lines in red are statements that were not executed (when the count is 0), or for which not all possible branches were executed. A branch occurs, for example, at an if statement, which can behave differently depending on whether its condition is true or false.
- Lines with a green number in the second column are those that have been completely covered by the test execution.
Figure 6-10: An example Cobertura report
The complexity indicated in the top right is the cyclomatic complexity of the methods in the class, which measures the number of branches that occur in a particular method. High numbers (for example, over 10) might indicate that a method should be refactored into simpler pieces, as a large number of alternate code paths can be hard to visualize and test. If this is a metric of interest, you might consider having PMD monitor it.
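If you do want PMD to monitor method complexity, one possible sketch is to enable its cyclomatic complexity rule in the PMD plugin configuration. The exact ruleset path is an assumption here and depends on your PMD version, so check it against the PMD plugin documentation:

```xml
<!-- reporting section sketch: enable PMD's cyclomatic complexity rule -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-pmd-plugin</artifactId>
  <configuration>
    <rulesets>
      <!-- the codesize ruleset includes the CyclomaticComplexity rule;
           the exact ruleset name varies with the PMD version in use -->
      <ruleset>/rulesets/codesize.xml</ruleset>
    </rulesets>
  </configuration>
</plugin>
```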
The Cobertura report doesn't require any notable configuration, so including it in the site is simple. Add the following to the reporting section of the pom.xml file:

[...]
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>cobertura-maven-plugin</artifactId>
</plugin>
[...]
If you now run mvn site under proficio-core, the report will be generated along with the rest of the site.
While not required, there is another useful setting to add to the build section. Due to a hard-coded path in Cobertura, the database it uses is stored in the project directory as cobertura.ser, and is not cleaned with the rest of the project. To ensure that it is, add the following to the build section of the pom.xml file:

[...]
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>clean</id>
          <goals>
            <goal>clean</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
[...]
If you now run mvn clean in proficio-core, you'll see that the cobertura.ser file is deleted along with the target directory.
The Cobertura plugin also contains a goal called cobertura:check that is used to ensure that the coverage of your source code is maintained at a certain percentage.
To configure this goal for Proficio, add a configuration and another execution to the build plugin definition you added above when cleaning the Cobertura database:
[...]
<configuration>
  <check>
    <totalLineRate>100</totalLineRate>
    <totalBranchRate>100</totalBranchRate>
  </check>
</configuration>
<executions>
  [...]
  <execution>
    <id>check</id>
    <goals>
      <goal>check</goal>
    </goals>
  </execution>
</executions>
[...]
Note: The configuration element is outside of the executions. This ensures that the configuration is applied even when you run mvn cobertura:check from the command line, which would not be the case if it were associated only with the life-cycle-bound check execution.
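Putting the pieces together, the complete Cobertura plugin definition in the build section would look roughly like this sketch, combining the clean and check executions with the shared check configuration:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>cobertura-maven-plugin</artifactId>
  <!-- outside the executions, so it also applies to command-line invocations -->
  <configuration>
    <check>
      <totalLineRate>100</totalLineRate>
      <totalBranchRate>100</totalBranchRate>
    </check>
  </configuration>
  <executions>
    <!-- remove cobertura.ser during mvn clean -->
    <execution>
      <id>clean</id>
      <goals>
        <goal>clean</goal>
      </goals>
    </execution>
    <!-- enforce the coverage rates during the build -->
    <execution>
      <id>check</id>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```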
If you now run mvn verify under proficio-core, the check will be performed.
The rules used in this configuration require a 100% overall line coverage rate and a 100% overall branch coverage rate. As you saw in the earlier report, some lines were not covered, so running the check fails.
Normally, you would add unit tests for the functions that are missing tests, as in the Proficio example. However, looking through the report, you may decide that only some exceptional cases are untested, and choose to reduce the overall average required. You can do this for Proficio, so that the tests pass, by changing the setting in the pom.xml file:

[...]
<configuration>
  <check>
    <totalLineRate>80</totalLineRate>
    [...]
If you run mvn verify again, the check passes.
These settings are still quite demanding, allowing only a small number of lines to remain untested. This leaves room for constructs that are difficult to test, such as handlers for checked exceptions that are unexpected in a properly configured system. Allowing for these exceptions is just as important as requiring that the rest of the code be tested. Remember, the easiest way to increase coverage is to remove code that handles untested, exceptional cases – and that's certainly not something you want!
The settings above are requirements for averages across the entire source tree. You may want to enforce this for each file individually as well, using branchRate, or as the average across each package, using packageBranchRate. It is also possible to set requirements on individual packages or classes using the regexes parameter. For more information, refer to the Cobertura plugin configuration reference.
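For example, a per-package requirement using the regexes parameter might look like the following sketch. The package pattern here is hypothetical, and the exact element names should be verified against the Cobertura plugin configuration reference:

```xml
<configuration>
  <check>
    <regexes>
      <regex>
        <!-- hypothetical pattern: tighten coverage for one package tree -->
        <pattern>com.devzuz.mvnbook.proficio.*</pattern>
        <lineRate>90</lineRate>
        <branchRate>90</branchRate>
      </regex>
    </regexes>
  </check>
</configuration>
```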
Choosing appropriate settings is the most difficult part of configuring any of the reporting metrics in Maven. Some helpful hints for determining the right code coverage settings are:
- Like all metrics, involve the whole development team in the decision, so that they understand and agree with the choice.
- Don’t set it too low, as it will become a minimum benchmark to attain and rarely more.
- Don’t set it too high, as it will discourage writing code to handle exceptional cases that aren’t being tested.
- Set some known guidelines for what type of code can remain untested.
- Consider setting any package rates higher than the per-class rate, and setting the total rate higher than both.
- Remain flexible – consider changes over time rather than hard and fast rules. Choose to reduce coverage requirements on particular classes or packages rather than lowering them globally.
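Following the guideline of escalating rates, a check configuration might combine per-class, per-package, and total requirements. This is a sketch: the parameter names are taken from the Cobertura plugin's check settings, and the specific percentages are illustrative assumptions to be agreed on with your team:

```xml
<check>
  <!-- per-class minimums (lowest bar) -->
  <lineRate>70</lineRate>
  <branchRate>70</branchRate>
  <!-- per-package averages (higher) -->
  <packageLineRate>75</packageLineRate>
  <packageBranchRate>75</packageBranchRate>
  <!-- overall averages (highest) -->
  <totalLineRate>80</totalLineRate>
  <totalBranchRate>80</totalBranchRate>
</check>
```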
Cobertura is not the only solution available for assessing test coverage. The best known commercial offering is Clover, which is also very well integrated with Maven. It behaves very similarly to Cobertura, and you can evaluate it for 30 days when used in conjunction with Maven. For more information, see the Clover plugin reference on the Maven Web site.
Of course, there is more to assessing the health of tests than success and coverage. These reports won't tell you whether all the features have been implemented – that requires functional or acceptance testing. Nor will they tell you whether untested input values produce the correct results. Tools like Jester, although not yet integrated with Maven directly, may be of assistance there: Jester mutates code that you've already determined is covered, then runs the tests again to check that they fail against the altered code.
To conclude this section on testing, it is worth noting that one of the benefits of Maven's use of the Surefire abstraction is that the reports above work for any type of test runner. For example, Surefire supports tests written with TestNG, and at the time of writing experimental JUnit 4.0 support is also available; in both cases, these reports work unmodified. If you have another tool that can operate under the Surefire framework, you can write a provider for it and get integration with these reports for free.
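For example, switching a module's tests to TestNG is typically just a matter of declaring the TestNG dependency in test scope. The version and classifier below are assumptions appropriate to the era described; check the current Surefire documentation for the supported coordinates:

```xml
<dependency>
  <groupId>org.testng</groupId>
  <artifactId>testng</artifactId>
  <!-- assumed version/classifier; verify against the Surefire docs -->
  <version>5.1</version>
  <classifier>jdk15</classifier>
  <scope>test</scope>
</dependency>
```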