Your dev "productivity" metrics are broken (and here's the proof)
Everyone wants to show progress. We track numbers, build dashboards, and celebrate achievements.
But what if the very metrics you're relying on are misleading you?
Today we're diving into the world of metrics, but not the ones that actually measure something meaningful. We're talking about the ones that... just look pretty on reports. Get ready to meet vanity metrics.
Code Coverage and Croatia (or how to cheat a broken system)
Let's rewind to 2017. I was working on a software development team, getting ready to release a new version of our application to production.
Everything was going smoothly, until suddenly – boom! It turned out our changes had caused a 0.3% drop in code coverage. Small? At the time, it felt like a disaster.
Our project had a strict requirement: 80% test coverage to even consider deploying.
The problem was, the only person who could bypass this check, our team leader, was... on a long weekend trip in Croatia. We couldn't reach him, and we had to come up with something.
So what did we do? Instead of writing tests that would actually improve quality, we started getting creative. We removed empty lines from the code and shortened expressions to take up fewer lines. Coverage is just covered lines divided by total lines, so shrinking the total pushed the percentage up without testing anything new.
And you know what? It worked! Code coverage actually went up, we could release the version, and we didn't have to bother the team leader.
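To see why the trick worked, it helps to look at the arithmetic. Line coverage is simply covered lines over total lines, so deleting uncovered lines raises the ratio without a single new test. A minimal sketch (the line counts here are invented for illustration):

```python
def coverage(covered: int, total: int) -> float:
    """Line coverage as a percentage: covered lines / total lines."""
    return 100 * covered / total

# Hypothetical starting point: 796 of 1000 lines covered,
# which is 79.6% -- just under the 80% deployment gate.
before = coverage(796, 1000)

# Delete 10 uncovered lines (blank lines, collapsed expressions).
# The covered count is unchanged, but the denominator shrinks.
after = coverage(796, 990)

print(f"before: {before:.1f}%")  # before: 79.6%
print(f"after:  {after:.1f}%")   # after:  80.4%
```

Not one extra line of code got tested, yet the gate turns green. Any threshold defined as a ratio can be gamed from the denominator side.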
But did the quality of the project increase?
Not necessarily.
This was a classic example of a vanity metric – something that looked great on reports but didn't tell the truth about what was really going on. It was a stark reminder that what gets measured, gets managed, but not always in the way you intend.
The Traps of Productivity – PRs, Tickets, and Story Points
But code coverage is just the tip of the iceberg.
I also worked on a team where the Head of IT would present data on the development team's efficiency at company-wide "all hands" meetings. What did he boast about? The number of pull requests (PRs) in a given month.
Does that make sense? Sure, it's some measure of work done. But can you game it? Absolutely! The team could simply make more frequent, smaller changes, driving up the PR count and creating an illusion of productivity. Evaluating a team solely on the number of PRs, regardless of their complexity, doesn't make much sense. It incentivizes quantity over quality, making it harder to track true progress.
It's similar with the number of closed tickets or completed tasks. If you evaluate a team based on this metric, guess what they'll do? They'll start splitting tasks into even smaller ones! Just because you're closing 20 small tickets that collectively make up one larger task doesn't mean your product is developing faster. This can lead to micromanagement and a loss of focus on the bigger picture, as teams prioritize ticking boxes rather than delivering cohesive features.
Another example is Story Points. This is a measure of task complexity in agile methodologies. We estimate complexity, pretending they're not hours, but... what happens when you start evaluating a team based on completed Story Points? Suddenly, tasks "magically" start being estimated for more points! The illusion of productivity grows, but does the delivered value?
Story Points are useful for estimation and planning, but not as a metric for evaluating team performance. When used as a performance metric, they can create a culture of inflate-to-compensate, where estimates become self-serving rather than accurate.
What to Measure Instead? Metrics You Can't Game
So, if vanity metrics are so misleading, what should we measure instead? Above all, metrics that are very difficult to game and that genuinely translate into business value.
1. Time from requirement inception to feature delivery to the customer. This is a powerful metric! We measure the entire process here – from idea to a finished product in the user's hands. It involves collaboration between multiple teams (development, product, design, etc.), and its direct correlation to customer value makes it incredibly difficult to game.
This metric tells you not just how fast you're building, but how quickly you're delivering tangible impact.
2. Mean Time to Recovery (MTTR), which is the average time it takes a team to fix a production bug. Bugs happen; that's normal in software. What's important is how quickly the team reacts. We measure the time from a bug appearing to its resolution (or a rollback to the previous version).
This measures responsiveness, system resilience, and ultimately, customer satisfaction when things go wrong. It encourages a focus on robust monitoring and efficient incident response.
3. Actual production bug rate. Simple and to the point: how many bugs occur in a given version after deployment? We measure the number of actual problems users encounter. This is a hard and undeniable figure that reflects the true quality of your releases and the stability of your product. It drives teams to prioritize thorough testing and quality assurance before deployment.
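All three metrics reduce to simple arithmetic over timestamps you likely already have in your issue tracker and incident log. A sketch of how they might be computed (the records and field layout here are invented for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical incident records: (detected, resolved) timestamps.
incidents = [
    (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 11, 30)),
    (datetime(2024, 5, 3, 9, 0), datetime(2024, 5, 3, 9, 45)),
]

# Hypothetical features: (requirement created, delivered to customer).
features = [
    (datetime(2024, 4, 1), datetime(2024, 4, 15)),
    (datetime(2024, 4, 5), datetime(2024, 4, 26)),
]

def mttr(incidents) -> timedelta:
    """Mean Time to Recovery: average of (resolved - detected)."""
    return sum((r - d for d, r in incidents), timedelta()) / len(incidents)

def lead_time(features) -> timedelta:
    """Average time from requirement inception to customer delivery."""
    return sum((done - start for start, done in features), timedelta()) / len(features)

def bug_rate(bug_count: int, releases: int) -> float:
    """Actual production bugs per release."""
    return bug_count / releases

print(mttr(incidents))      # 1:07:30
print(lead_time(features))  # 17 days, 12:00:00
```

Note what makes these hard to game: each one is anchored to an event outside the team's control (a user-visible delivery, a production incident), so you can't improve the number by reshuffling tickets.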
Optimize for Real Value
Remember: whatever we measure, we will optimize for it. If we measure vanity metrics, we'll gain an illusion of success. If we measure what actually leads to a better, more stable product and a satisfied customer, then we deliver real value.
Let's stop chasing easy-to-track numbers that can be manipulated. Instead, let's focus on metrics that matter: the speed of value delivery, minimizing critical bugs, and rapid reaction to problems. These are the things that genuinely bring money to your business, build lasting success, and foster a truly productive team culture.
What are your thoughts on vanity metrics? Have you ever fallen into their trap, or found a brilliant way to measure real value in your projects? Share your experiences in the comments below!