Using ‘delivery metrics’ to review team best practices

Craig Taub
5 min read · Apr 12, 2020

At Nested we follow a series of best practices for a given project’s delivery. Today we will look at those practices, and at how to use some “delivery metrics” to understand whether they are being followed.

These practices were built up over time through many project successes and failures. Many of them are what you would expect from any product, design and engineering department.

I’ll first share our top 5 practices, each with a basic reason why we follow it, then get into the “delivery metrics”.

This follows on from 2 articles I had written previously, on Feature Leading and on introducing a Business impact culture.

Top 5 practices

1. Did we split tickets into small enough pieces of work?

We aim for most tickets to take around 1 day of developer time to complete. Occasionally tickets run longer, but 1 day is the ideal. The idea is that it’s hard to know exactly what will be hugely successful, so aiming to release the “smallest shippable product which delivers the most value” is a safe way to spend that expensive developer time. The priority, however, is the deliverable, NOT the time: a ticket can take more than a day and that’s fine, as long as we feel it’s in the smallest state that offers value.
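To make this measurable, here is a minimal sketch of how you could flag tickets that ran over the 1-day ideal. The ticket records and field names are made up for illustration; a real Clubhouse or Jira export will look different:

    from datetime import datetime

    # Hypothetical ticket records; a real Clubhouse/Jira export will differ.
    tickets = [
        {"id": "BAR-1", "started": "2020-03-02T09:00", "completed": "2020-03-02T17:00"},
        {"id": "BAR-2", "started": "2020-03-03T09:00", "completed": "2020-03-06T12:00"},
    ]

    for t in tickets:
        started = datetime.fromisoformat(t["started"])
        completed = datetime.fromisoformat(t["completed"])
        days = (completed - started).total_seconds() / 86400
        flag = "" if days <= 1 else "  <- over the 1-day ideal"
        print(f'{t["id"]}: {days:.1f} days{flag}')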

2. Did we spec tickets out early enough?

We aim for most tickets in a project to be written up by the time the feature begins development. Thinking through the project from start to end minimises the chance of complications or unknowns appearing mid-way through delivery, which we found was the biggest cause of delays.
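One way to quantify this is to count how many tickets had been written up before development began. A minimal sketch, again with made-up dates:

    from datetime import date

    # Hypothetical data: when each ticket was written up vs when development began.
    feature_start = date(2020, 3, 2)
    ticket_created = [date(2020, 2, 24), date(2020, 2, 26), date(2020, 3, 10)]

    specced_early = sum(d <= feature_start for d in ticket_created)
    print(f"{specced_early}/{len(ticket_created)} tickets specced before development began")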

3. Was the product the primary focus?

When working on a big feature or product which might take a while, we found it is best to band together and get it done as a team. This means the entire team focuses on the same thing, with the following benefits:

  • Sharing product knowledge,
  • Diversifying the code,
  • Speeding up the delivery time.

4. Did we do much work in parallel?

As mentioned previously, we want to focus on 1 big goal at a time, so being able to work on more than 1 story in parallel is crucial. A team of 5 working in sequence is pretty much the same as a team of 1, as only 1 ticket can ever be worked on at a time, and we lose all the benefits mentioned in the list above.
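To see how parallel the work actually was, you can count how many tickets were in progress on each day of the project. A small sketch with hypothetical dates:

    from datetime import date, timedelta

    # Hypothetical (start, end) dates for each ticket in the project.
    tickets = [
        (date(2020, 3, 2), date(2020, 3, 3)),
        (date(2020, 3, 2), date(2020, 3, 4)),
        (date(2020, 3, 5), date(2020, 3, 6)),
    ]

    day = min(s for s, _ in tickets)
    end = max(e for _, e in tickets)
    while day <= end:
        in_progress = sum(s <= day <= e for s, e in tickets)
        print(f"{day}: {in_progress} ticket(s) in progress")
        day += timedelta(days=1)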

5. Did we have many gaps between tickets?

Again, this relates to practices 3 and 4 above. If we are focusing on 1 goal for a short period of time, there should not be many gaps in the project. Gaps suggest we are working on other things, and if we find any we should question why they are there. One example is waiting for a dependent piece of work to be completed; this could, and should, have been finished before we started. You do not want any external blockers during delivery.
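Gaps can be measured the same way: any day inside the project window with zero tickets in progress. Another small sketch with hypothetical dates:

    from datetime import date, timedelta

    # Hypothetical ticket (start, end) dates; note the gap in the middle.
    tickets = [
        (date(2020, 3, 2), date(2020, 3, 4)),
        (date(2020, 3, 9), date(2020, 3, 11)),
    ]

    day = min(s for s, _ in tickets)
    end = max(e for _, e in tickets)
    gap_days = []
    while day <= end:
        if not any(s <= day <= e for s, e in tickets):
            gap_days.append(day)
        day += timedelta(days=1)

    print(f"{len(gap_days)} gap day(s): {gap_days}")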

The problem

So there we have our top 5 practices. We know we should follow them, but how can we make sure we do?

The way we initially found to do this was to hold a “feature retrospective”, where we review how the delivery of the project went. By the end we hope to have a list of both:

  1. Actions: things we should actively go and do
  2. Learnings: things to share with the other engineering teams

However, the flaw with this approach is that it is very subjective, and for long-running projects many problems or details of the delivery are inevitably lost in the past.

We started wondering if there was a way to add quantitative data to our qualitative “feature retrospectives”. Introducing the “delivery metrics” report.

Delivery Metrics

We use a tool called “Clubhouse” at Nested for ticket management, but this process works with Jira and other tools.

By looking at the analytics section for an “epic” (Clubhouse’s name for a container that groups tickets under a single project) we can get answers to our questions.
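If you would rather pull the raw numbers than read them off the analytics screens, this is roughly what fetching an epic’s stories could look like. It is a sketch only: the endpoint shape, header name and story fields are my assumptions about the Clubhouse v3 REST API, so check them against the API docs for your version:

    import requests  # third-party: pip install requests

    # Sketch only: the endpoint shape, header name and story fields below
    # are assumptions about the Clubhouse v3 API; check your API docs.
    API = "https://api.clubhouse.io/api/v3"
    TOKEN = "your-api-token"  # placeholder

    def epic_stories(epic_id):
        resp = requests.get(
            f"{API}/epics/{epic_id}/stories",
            headers={"Clubhouse-Token": TOKEN},
        )
        resp.raise_for_status()
        return resp.json()

    for story in epic_stories(1234):  # 1234 is a made-up epic id
        print(story["name"], story.get("started_at"), story.get("completed_at"))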

Here are some examples of how the metrics can look. These were taken from a project called Buyers Agent Reports, nicknamed BAR. For each metric I have given the statistic with a small conclusion:

From the above project we summarised the following:

“The results suggested that the tickets were not fleshed out early enough, which slowed pacing towards the end of the first third. Because there was a lack of confidence due to the tickets not being created (I only remembered this thanks to these charts), the first 50% of the work was not completed in parallel. We eventually ended up doing all the sections at the same time, but I feel we could have cut time off the delivery if we had acted earlier.”

They have proved very insightful; we now analyse them inside the “feature retrospective” to assist with learnings and actions.

I really encourage any engineering/product/design department to give delivery metrics a go, to learn more about whether you follow your own best practices and how you can improve.

I hope this article proved useful or interesting in some way.

Thanks, Craig 😃
