This series deals with the implementation of a unit testing process in a team, or across multiple teams in an organization. Posts in the series include:

| | | | |
|---|---|---|---|
| Goals | Outcomes | Leading Indicators I | Leading Indicators II |
| Leading Indicators III | Leadership I | Leadership II | The plan |
We've talked about management attention and support, and there's more leaders can do to help make the process work.
Remember those leading indicators? They don't collect themselves. If we think of those indicators as a feature, there are customers waiting for them: management and leaders are those customers.
Turns out the people who care about the metrics are the ones who have the power to facilitate their collection, demand the reports, analyze the patterns and ask for correction plans. Who knew?
The funny thing is that the collection of the metrics is a leading indicator in itself. If the cycle of metric collection, analysis, and feedback is followed, chances are the process will go well, since somebody is supporting it. If it isn't (for any combination of reasons), people see that the process is "not that important to managers", which quickly translates to "it's not that important to us, or to me". Even if people care about unit testing, they care more about not getting caught on the other things that management does care about. Safety first.
Embracing change
Even when the process does go well, an anti-pattern may appear: sticking to the initial plan rather than changing course over time, which is exactly what we expect leaders to do.
An example involves our favorite metric: management asserts that coverage should increase over time. Since our leaders are wise, they don't set a minimum coverage threshold, just an expectation that coverage keeps growing. And we're not covering old code, just new code. Looks innocent enough.
What happens next will surprise you (not). To keep the coverage increasing, people are encouraged to add tests to their code. But since our developers are also wise, they don't write tests for code where unit tests don't make sense (like data transformations, or auto-generated code).

So if that's most of their code, coverage doesn't rise, and they don't get "coverage points" (or, in some measurement systems, they lose some). Remember safety first? Let the gaming begin: a developer may add tests that are ineffective (or even harmful), just to satisfy the metric.
Overwatch
The only way to make sure this doesn’t happen is through retrospection, analysis and feedback. I’ve already said that as stakeholders, leadership can make sure these take place. But should they do all that by themselves?
Here is the next way leadership helps the process: creating forums where learners can learn further from peers and grow into internal experts. We call these communities of practice: places where practitioners discuss, review, analyze, and get feedback, learning from the successes and mistakes of others.
These forums are far more likely to be created, and to keep meeting over time, with management support. In these forums, the discussion takes two forms: at the tactical level, how to write better tests, learn refactoring techniques, and so on; but also at the strategic level, examining the process itself and its metrics, suggesting course corrections, and following up on their implementation.
Now, the more authority we give these communities, the better. We like self-organizing, self-managing teams. But they still need leadership in order to be created, to keep existing, and to get help when they need external resources and effort.
Combine all these leadership support methods, and we’re on our way to a successful implementation.
2 Comments
David V. Corbin · July 19, 2017 at 5:11 pm
“But since our developers are also wise, they don’t add code that unit tests don’t make sense for (like data transformation, or auto-generated code).”
I disagree. If the developers are truly wise they lower the cost of unit test creation for these types of items to the degree that including them is nearly free.
One trivial example: automatic properties in .NET (a getter and setter with no visible backing field or code). Clearly these will not "fail" and are often in the "not worth testing" category.
However, with about 10 minutes of work, a single test can be written that invokes *all* automatic properties, thus generating coverage.
Why is this valuable? Because if someone makes a change (perhaps to a property with some logic that needs testing), it becomes instantly recognizable as "non-covered" code and highlights the need for the now-important test.
Gil Zilberfeld · July 20, 2017 at 11:06 am
Thanks David!
Let’s agree to disagree on this one.