How Do You Measure a 10x Developer?
In my last post, I talked about the moment AI stopped being a novelty and became something I could actually build with. In the run-up to Christmas, vibe coding away, three things hit me:
- First, I felt faster. My workflow had shifted from focusing on a single problem to prompting in one window, reviewing output in another, and knowing I could spin up a third.
- Second, I could see the potential. Even in those early weeks I could tell this method had room to grow.
- And third, I was having more fun coding than I’d had in years.
But feeling faster isn’t evidence. There’s no point paying for subscriptions and learning new tools if they’re not actually increasing your throughput.
So before I went any deeper, I needed to answer a few simple questions: Am I going faster? How much faster? And how much faster could I go?

The Problem With Measuring Developers
How do you measure a developer’s output? The industry has been arguing about this for decades:
- Lines of code: meaningless. More code is frequently worse code.
- Story points per sprint: a guesstimate. Not the be-all and end-all, but not nothing either.
- Tickets closed: there’s some meaning in volume, but no measure of size.
- Cycle time: the most honest metric, but heavily influenced by reviews and blockers that are out of my (and the AI’s) control.
None of these work in isolation. But looking at them as averages, and in relation to each other, I think there’s something useful we can draw from the combination. One metric lies. Several metrics together start telling a story.
Future posts in this series will add more dimensions: cycle time distributions, PR review turnaround, and rework rate. I’ll backfill the data so we can see the full picture as it develops.
Getting My Own Data
Stats like these aren’t readily available in our system. And it’s not something I’m going to ask the boss for, you know, just in case it shows I’ve been coasting for three years.
So I went and got them myself, using the MCP I’d built (from the previous post). I scraped our ticketing system and paired it with git history. For every ticket I’d worked on since joining the company in early 2023, I pulled:
- Ticket number and type (story vs bug)
- Story points (where assigned)
- Start and completion dates
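The actual scraper was AI-built against the ticketing MCP, but the git side of the cross-referencing can be sketched in a few lines. This is a hypothetical version (the `PROJ-…` ticket-ID pattern and the helper name are my own assumptions, not the real tool):

```python
import re

# Hypothetical sketch: pull ticket IDs (e.g. "PROJ-123") out of
# `git log --oneline` output so they can be joined to ticket data later.
TICKET_RE = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def tickets_from_log(log_text: str) -> set[str]:
    """Return the unique ticket IDs mentioned in commit messages."""
    return set(TICKET_RE.findall(log_text))

log = """\
a1b2c3d PROJ-101 fix null check in parser
d4e5f6a PROJ-102 add retry logic
b7c8d9e tidy whitespace (no ticket)
e0f1a2b PROJ-101 follow-up: unit tests
"""
print(sorted(tickets_from_log(log)))  # ['PROJ-101', 'PROJ-102']
```

Each ID found in the log then keys a lookup into the ticket data (type, points, dates) fetched over the MCP.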
🧠 Zero Effort Analysis
AI built the MCP tool that fetches ticket information. AI also built the data scraper that cross-references the git graph for ticket numbers and enriches it with that ticket data. There was very little raw effort from me in building either of those tools.
So we have AI building the instruments that measure AI's own impact.
Recursive, and worth noting.
That gave me over 170 tickets across nearly three years: roughly 70 stories and around 100 bugs (not all mine!). Rookie numbers. We need to get them up.

I dumped it all into a CSV and started looking for patterns.
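The aggregation itself is simple. Here’s a minimal sketch of bucketing the CSV into fortnightly sprints, with made-up column names and a toy three-row CSV standing in for the real export:

```python
import csv
import io
from datetime import date

# Hedged sketch: the column names below are assumptions, not the real CSV layout.
raw = io.StringIO("""\
ticket,type,points,completed
PROJ-101,story,3,2024-03-04
PROJ-102,bug,,2024-03-08
PROJ-103,story,5,2024-03-18
""")
rows = list(csv.DictReader(raw))

# Bucket each ticket into a fortnightly sprint, counted from the first completion.
start = min(date.fromisoformat(r["completed"]) for r in rows)
sprints: dict[int, dict[str, float]] = {}
for r in rows:
    done = date.fromisoformat(r["completed"])
    sprint = (done - start).days // 14
    bucket = sprints.setdefault(sprint, {"points": 0, "tickets": 0})
    bucket["points"] += float(r["points"] or 0)  # unpointed bugs count as 0
    bucket["tickets"] += 1

for sprint, totals in sorted(sprints.items()):
    print(sprint, totals)
```

Note how the unpointed bug still counts toward tickets closed, which is exactly why the two metrics diverge later on.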
The Full Picture
Here’s every fortnightly sprint since joining the company in March 2023. Story points (light blue), stories closed (dark blue), and bugs closed (red) all in one view:
The shaded zones are annual leave. You can see the natural pattern: ramp-up in 2023 as I learned the codebase, a steady rhythm through 2024, and predictable holiday dips every summer and Christmas.
And I’m hoping to make those numbers go up even more!

Zooming In: The AI Effect
Now let’s zoom into 2025 onwards and mark when AI entered the picture:
The amber dashed line marks when I started experimenting with AI at work (around September 2025), and the red line is when I got serious about it (January 2026).
What Does This Actually Tell Me?
It’s messy. Real data always is. But a few things stand out:
- The August 2025 spike (41 story points in one sprint). That wasn’t AI, just a really good month. It’s an outlier, not a trend.
- The SMA tells a better story. Ignore the individual bars. The 8-week moving average shows the real trend, smoothing out the noise of individual sprints.
- Holiday dips are predictable. Every summer and Christmas shows the expected trough. Useful for knowing when not to panic about low numbers.
- Tickets tell a fuller story. The purple bars combine stories and bugs into total tickets closed. Story points miss unpointed bugs entirely, so looking at both together gives a more honest picture.
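For reference, the smoothing is just a simple moving average, and since sprints are fortnightly, an 8-week SMA is a 4-sprint window. A sketch with toy numbers (including an outlier spike like August’s):

```python
# Toy sprint totals (story points per fortnight); the real numbers come from the CSV.
points = [6, 4, 0, 7, 9, 5, 8, 41, 6, 7, 8, 9]

def moving_average(values: list[float], window: int) -> list[float]:
    """Simple moving average; shorter prefix windows are averaged as-is."""
    return [
        sum(values[max(0, i - window + 1) : i + 1])
        / len(values[max(0, i - window + 1) : i + 1])
        for i in range(len(values))
    ]

# An 8-week SMA over fortnightly sprints is a 4-sprint window.
sma = moving_average(points, window=4)
print([round(x, 1) for x in sma])
```

The 41-point spike lifts the average for four sprints and then washes out, which is the whole point of looking at the SMA instead of individual bars.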
A steady rhythm through 2024, then things started shifting in late 2025 as AI tools entered the workflow. Now we watch for what happens next.
So Am I Going Faster?
The data says tentatively yes, but with caveats:
- Story points are subjective estimates, not objective measurements
- I’m also three years into the codebase now and experience compounds
- One good sprint doesn’t make a trend
What I can say: the baseline from mid-2024 through mid-2025 averaged about 6 story points per sprint. Since getting serious in January 2026, that’s sitting at roughly 7.5 points per sprint, a 1.25x improvement, while also closing more tickets (3.1 vs 2.5 per sprint). It’s only three sprints of data, but the early signs are promising.
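The comparison itself is just ratios over the two periods (figures from the paragraph above):

```python
# Baseline (mid-2024 to mid-2025) vs post-January-2026 sprint averages.
baseline_points, recent_points = 6.0, 7.5
baseline_tickets, recent_tickets = 2.5, 3.1

points_speedup = recent_points / baseline_points
tickets_speedup = recent_tickets / baseline_tickets
print(f"points: {points_speedup:.2f}x, tickets: {tickets_speedup:.2f}x")
# → points: 1.25x, tickets: 1.24x
```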
💰 The Adoption Tax
Some of the time I'd normally spend developing, I'm now spending figuring out how to improve my velocity. There's wasted energy there. If my output still matches the baseline while I'm investing time in learning new tools, that's already a quiet win. I'm hoping the real gains come as the learning curve flattens.
This Is My Baseline
Future improvements in this series can reference back to these charts now. When I build a new tool, automate a new workflow, or try a new approach, I’ll have numbers to compare against.
Coding feels faster, I can see the potential, and it’s genuinely fun too! But I need the stats and the evidence to back that up. Especially now, while every YouTuber and X post is hyping how we’ll all be out of jobs by the end of the year.
I need something real to help answer the question:
Am I a 10x developer?
Not yet. But I’m working on it.
This is part of the Using AI to Become a 10x Developer series.