A faster way to move in product
Speed matters. Most teams do need to ship faster, and much of the time slowness really is just coordination debt, unclear ownership, and too many dependencies.
The problem starts when being fast becomes a way to avoid looking closely at what just happened. A lot of teams have this moment after a launch or a change where the results are not bad, but they are not clean either. One metric is slightly up, another is flat, another is down, and the overall story is ambiguous enough that you can interpret it in multiple ways if you squint hard enough.
And this is where the behaviour diverges, because there are two very different instincts that can kick in.
One instinct is to slow down and do the annoying work: figure out what actually changed, whether it changed for the right users, whether the measurement is even trustworthy, and whether the thing you shipped solved the real problem you thought you were solving. That path is uncomfortable because it forces you to say “we don’t know yet” in a world that rewards confident narratives, and because it often ends with a conclusion nobody enjoys, which is that the bet wasn’t as good as you thought.
The other instinct is to protect momentum by protecting the story, which usually means picking the most flattering interpretation, calling it an “early signal,” and immediately moving to version two. You do it because there is a lot of social and career safety in motion, and because it’s psychologically easier to keep improving something than to admit the original thing might not deserve more time.
That’s how teams end up feeding dead initiatives, layering optimisations on top of something that never produced a clear win, and confusing activity with progress, because at least activity is measurable and defensible. The work looks productive on a roadmap, and it feels productive in sprints, but the product doesn’t move in a way that is clean enough to build conviction, which means you never really earn the right to double down.
Over a quarter, this looks like decent throughput. Over a year, it starts showing up in a very specific way: you have a long list of shipped things, but it’s hard to point to one outcome and say, without qualifiers, “this changed the trajectory.” You end up with a portfolio of “promising” initiatives and very few that you’d actually fight to keep.
The version of velocity that matters most is not how quickly a team can build the next thing. It is how quickly the team can recognise that a bet is not working and change direction, without pretending that version two is still part of the original success story.
If you want an uncomfortable check, look at your roadmap and ask which initiative is already on its third round of improvements without a clear definition of what “win” looks like, and without a moment where you’d be willing to say out loud that it might need to die. That’s usually where your real velocity is being spent.
Thanks for reading! If this kind of thinking is useful, I write more about product, strategy, and decision-making at michelepm.substack.com.

