Measuring Vision

How do I really know if our teams are improving?

This was a recurring thought in my head about six months ago. Things were going really well. We were getting consistently praised for our accomplishments, and other leaders were telling us we were setting a great example. But in the midst of that, I got worried that we were missing something and just weren’t seeing it.

And so, for all the same reasons I felt I needed to kick off Leesurely chats, I’ve been rethinking my relationship with metrics. But this can be a thorny path.

There is a lot of justified skepticism about metrics in the agile community, mostly because of how they are implemented. Too often a senior leader fixates on a particular metric, reducing the complexity of software development to a single number and forcing teams to sub-optimize. We’ve all seen it happen. So how can we mitigate that risk?

The approach we tried was to go to the team managers and give them a goal to deliver three things over the course of four months:

  1. Choose at least one attribute to focus on improving for each of your teams. It doesn’t have to be the same attribute for every team (most managers have 2-3 teams under their care). You must be able to speak to why you chose that attribute out of all the different things you could focus on.
  2. Then choose at least one metric to use to measure changes to that attribute. You must be able to speak to why you chose that metric.
  3. Finally, you must be able to show what you did over those four months to try to drive improvement of that attribute and what the results were.

We didn’t care so much whether the metric got better or not. It might even get worse. But the leader had to be able to explain what happened and why.

We just wrapped up the final presentations on this phase. It was awesome to see what our managers learned. Some examples of what got focus: psychological safety, quality, predictability.

In my follow-up 1:1s with the managers I asked, “Why do you think we didn’t just give you a metric to start working towards?” The responses were consistent: if you had given us a metric, we would have tried to game it; we wouldn’t have understood the connection between that metric and what was really going on with the team.

What this exercise did was make sure our leaders had a vision for what “good” looks like for their teams and were actively working towards it. Often we find ourselves floating from one crisis to the next, carried on the waters of events outside our control. That happens when we don’t have a vision we’re working towards. This exercise forced leaders to re-establish that vision for their teams and hold themselves accountable for it.

One final thought… It can be really hard to be patient sometimes. Ask any of my colleagues and they will tell you patience is not one of my natural gifts. But in this case, investing those months in our leaders is paying off in a big way. They are learning new tools to help their teams improve, and we’re mitigating many of the dangers of metrics-driven management by grounding everything in verifiable reality and letting the teams drive it. We’re also making sure we have a diverse set of voices guiding us, to help us avoid those negative outcomes I mentioned earlier.

Now we just have to automate gathering all these helpful metrics. That should be easy, right? :-/

2023 Update: I had the opportunity to talk more about how this process has been going on a podcast. You can hear that episode here: https://anchor.fm/thetechtrek/episodes/Leaders-Need-Two-Things-Time-to-Learn-and-Time-To-Fail-e1u282d