Doubling Product Velocity at Deepnote Through Pragmatic Metrics
As Head of Engineering at Deepnote, I faced a critical challenge: despite our talent, our product velocity was too slow. This is the story of how I stopped chasing vanity metrics, built trust with my team, and implemented a system that doubled our throughput.

Fun Fact
I faced a near revolt when I introduced metrics tracking tools. Engineers feared surveillance and comparison. The breakthrough came when I shifted the narrative from "measuring people" to "measuring the system" and let the team define their own Working Agreements.
Conference Presentation
In software engineering leadership, "velocity" is often the most loaded term in the room. It is the metric stakeholders crave, yet it is the one engineers dread because it is so frequently misunderstood. As the Head of Engineering at Deepnote, I hit a wall. Despite our talent, our product velocity wasn't where it needed to be. The perception from leadership and the feeling within my team were identical: we were moving too slowly. We weren't agile enough. I wanted to fix it, but I faced the classic dilemma: What exactly do I fix, and how do I measure if I've fixed it? This is the story of my journey—from chasing vanity metrics and building custom tools to finally implementing a system that improved our engineering culture and doubled our perceived velocity.
TL;DR: Executive Summary
The Challenge: My team felt sluggish. I initially tried improving estimation accuracy and measuring individual output, but velocity remained stagnant.
The Trap: I allowed us to spend months building internal tools to measure DORA metrics, wasting time on meta-work rather than shipping product.
The Pivot: I made the strategic decision to stop building and start buying, adopting Swarmia to handle the measurement.
The Friction: I faced significant pushback from engineers who feared surveillance and micromanagement.
The Solution: I shifted the narrative from "measuring people" to "measuring the system," using automated Working Agreements to nudge behavior.
The Result: Under this new system, cycle time dropped from 4 days to 45 hours, and our throughput more than doubled.
Phase 1: My False Start with Vanity Metrics
When I first set out to "fix" velocity, I fell into the trap that ensnares many new managers: I tried to measure everything.
I started by chasing individual metrics, hoping that optimizing a single number would uncork our potential. I looked at the number of commits and lines of code. It was a disaster: the moment I incentivized commit volume, I inadvertently encouraged developers to break valid work into microscopic, meaningless chunks. I wasn't measuring value; I was measuring noise.
The Estimation Fallacy: I then hypothesized that our sluggishness was due to poor planning. I assumed that if I could get the team to estimate better, we would move faster. I audited our ticketing and estimation processes, and we got genuinely good at estimating. We reached a point where our burn-down charts were textbook-perfect.
The Result? Nothing changed. My team was excellent at predicting exactly how slow they were, but the rate at which we shipped value to users hadn't budged.
Phase 2: The "Build vs. Buy" Decision
I went back to the drawing board and turned my focus to DORA metrics (Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Time to Restore Service). These measured outcomes and flow, which was exactly what I needed.
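To make the difference between activity and flow concrete, here is a rough sketch (not our actual tooling) of how three of the four DORA metrics could be computed from deployment records exported from a CI system. The `Deployment` shape and its field names are assumptions for illustration only.

```typescript
// Rough sketch: DORA-style flow metrics from hypothetical deployment records.
interface Deployment {
  firstCommitAt: Date;     // when work on the change started
  deployedAt: Date;        // when the change reached production
  causedIncident: boolean; // did this deployment trigger an incident?
}

function doraSummary(deployments: Deployment[], periodDays: number) {
  // Deployment Frequency: how often we ship, normalized per day.
  const deploymentFrequency = deployments.length / periodDays;

  // Lead Time for Changes: median hours from first commit to production.
  const leadTimes = deployments
    .map((d) => (d.deployedAt.getTime() - d.firstCommitAt.getTime()) / 36e5)
    .sort((a, b) => a - b);
  const medianLeadTimeHours = leadTimes[Math.floor(leadTimes.length / 2)];

  // Change Failure Rate: share of deployments that caused an incident.
  const changeFailureRate =
    deployments.filter((d) => d.causedIncident).length / deployments.length;

  return { deploymentFrequency, medianLeadTimeHours, changeFailureRate };
}
```

The point of the exercise is that every input here is an event in the delivery system (a commit, a deploy, an incident), not a judgment about any individual.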
However, I made a critical error: I decided we should build the measurement tools ourselves.
I allocated significant engineering hours to building custom dashboards and pipeline integrations. It was a mistake. I had my best engineers doing meta-work—building tools to measure engineering—instead of building the actual product.
I realized I needed to stop trying to invent a solution and use a tool designed for the job. After evaluating market options like LinearB and Jellyfish, I selected Swarmia because of its focus on developer experience rather than just executive reporting.
Phase 3: Managing Culture Shock and Resistance
Bringing in an external tool to "measure engineering" is a dangerous moment for any leader. I knew it could destroy trust if I handled it poorly.
Swarmia gave me visibility into granular metrics like Cycle Time and Review Time, but the rollout was turbulent. When my developers saw a dashboard logging who deployed what and how many commits they made, the reaction was immediate and negative.
I faced a near revolt. The team asked: "Are you spying on us? Are we going to be fired if our stats drop?"
I also noticed a rivalry developing. People started comparing their stats—"I did four deployments, you only did two." I realized that while I had the right tool, I hadn't yet established the right culture to support it.
Phase 4: Shifting the Narrative
As the engineering manager, I was caught between two millstones. Leadership wanted hard data to prove we were working; my engineers wanted the freedom to work without surveillance.
To fix this, I had to change the framing of the metrics entirely. I adopted a specific mantra for the organization: "We are measuring the system, not the person."
I had to explicitly communicate my philosophy to the team:
"I will not use these metrics for performance reviews. I am not looking at individual PR sizes to judge competence. I am looking for bottlenecks in our process."
Implementing Working Agreements: My breakthrough came when I implemented Working Agreements. Instead of me personally nagging the team to "review code faster," I facilitated a session where the team agreed to their own rules, such as "All code reviews should be finished within 12 hours."
I configured this rule in the tool. Now, if a review took 13 hours, a bot notified the team. It wasn't me cracking a whip; it was the system reminding the team of their own promise. That psychological shift was the key to adoption.
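To illustrate the mechanics, here is a minimal sketch of such a nudge. This is not how Swarmia implements it; it only shows the idea of a bot enforcing a team-defined agreement, using the GitHub REST API and a Slack incoming webhook. The environment variable names and the repo slug are placeholders.

```typescript
// Minimal sketch: remind the team when an open PR breaks the 12-hour
// review agreement. Requires Node 18+ (global fetch); placeholders throughout.
const REVIEW_SLA_HOURS = 12;
const { GITHUB_TOKEN, SLACK_WEBHOOK_URL, REPO = "your-org/your-repo" } = process.env;

async function nudgeStaleReviews(): Promise<void> {
  // List open pull requests via the GitHub REST API.
  const prs: any[] = await fetch(
    `https://api.github.com/repos/${REPO}/pulls?state=open`,
    { headers: { Authorization: `Bearer ${GITHUB_TOKEN}` } }
  ).then((r) => r.json());

  for (const pr of prs) {
    // Reviews submitted so far for this PR.
    const reviews: any[] = await fetch(`${pr.url}/reviews`, {
      headers: { Authorization: `Bearer ${GITHUB_TOKEN}` },
    }).then((r) => r.json());

    const ageHours = (Date.now() - new Date(pr.created_at).getTime()) / 36e5;

    // The team's own rule: every PR should be reviewed within 12 hours.
    if (reviews.length === 0 && ageHours > REVIEW_SLA_HOURS) {
      await fetch(SLACK_WEBHOOK_URL!, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          text: `:hourglass: ${pr.html_url} has waited ${Math.round(ageHours)}h for a review (our agreement is ${REVIEW_SLA_HOURS}h).`,
        }),
      });
    }
  }
}

nudgeStaleReviews().catch(console.error);
```

Run on a schedule (a cron job or CI workflow, for example), the reminder comes from the system, not from a manager.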
The Results: 3 Months Later
By shifting the focus from individual performance to system health, the results were undeniable. Comparing our performance from pre-adoption (Dec/Jan) to post-adoption (Mar/Apr), the impact of my strategy was clear:
Cycle Time: 4 days → 45 hours (about 50% faster)
Throughput: 11 PRs/week → 25 PRs/week (more than doubled)
PRs in Progress (WIP): ~6 → 3 (50% reduction)
Time to First Review: 16 hours → 7 hours (55% faster)
Did I Actually Fix Product Velocity?
It is easy to game numbers, but did it feel faster? Yes.
Product velocity is often a matter of perception. Before I implemented these changes, features got stuck in the pipeline—trapped in testing, waiting for reviews, or stalled in deployment. By optimizing the engineering flow and reducing WIP, I eliminated the "wait time" for stakeholders.
We went from a team that felt stuck in the mud to a team that was shipping continuously.
My Lessons Learned
Don't Build Your Own Yardstick: I learned the hard way that unless you are a metrics company, you should not build your own metrics tools. I now advise buying a solution so you can focus engineering talent on the core product.
The "Big Brother" Fear is Real: I underestimated the fear of surveillance. I learned that you must proactively address why you are measuring before you start, or the team will assume the worst.
Measure Systems, Not Individuals: I made a strict rule never to use velocity metrics for individual performance reviews. The moment you do, the metrics become useless because people will game them to save their jobs.
Focus on "Investments": One of my most useful insights came from tracking "Investment Distribution." I could finally prove to stakeholders that we weren't just "slow"—we were spending 40% of our time on necessary maintenance. This data allowed me to have better trade-off conversations.
Visual Context
[Photo] Presenting at Tech Fellows: engineering metrics and delivering features while having fun
Conclusion
Improving engineering efficiency isn't just about code. It's about psychology, trust, and removing the friction that stops great engineers from doing their best work. By shifting from measuring people to measuring systems, by giving teams ownership over their own Working Agreements, and by focusing on flow rather than vanity metrics, I was able to double our product velocity while actually improving team morale. The metrics didn't just show we were faster—the team felt it, stakeholders saw it, and customers benefited from it.
Key Takeaways
- Vanity metrics like lines of code and commit count incentivize the wrong behaviors
- Building your own metrics tools is usually wasted engineering effort—buy a proven solution
- Address "Big Brother" fears proactively by establishing trust before introducing measurement
- Measure the system, not individuals—never use velocity metrics for performance reviews
- Working Agreements created by the team and enforced by automation drive cultural change
- Flow metrics (cycle time, throughput, WIP) and DORA metrics measure outcomes and flow, not activity
- Tracking "Investment Distribution" helps justify maintenance work to stakeholders
- Reducing Work in Progress (WIP) accelerates delivery more than adding headcount

About the Author
Vojtech Gintner - CTO @ Finviz
"Turning Engineering Chaos into Business Value"
Real-world leadership, not just theory. As the active CTO of Finviz, I don't just advise on strategy—I execute it daily. I navigate the same market shifts, technical bottlenecks, and leadership challenges that you do.
With 20 years of hands-on engineering experience (from React/Node to distributed infrastructure), I specialize in turning chaotic software organizations into scalable, high-performing assets. I bridge the gap between business goals and technical reality—speaking the language of your board and your developers.
Interested in similar results for your organization?
Let's discuss how I can help your engineering team overcome challenges and achieve ambitious goals.
Get in Touch