Velocity
- Why does a focus on velocity inhibit agility?
- the dangerous dynamic that arises when an organization focuses a Development Team on velocity
- Definition
- The speed at which a DT converts PBIs into a Product Increment that is RELEASABLE
- Pay attention to RELEASABLE. Velocity is applicable when a DT can deliver an Increment that is RELEASABLE at least once per Sprint. In this case, Done = DoD = RELEASABLE. Awesome!
- But even in this case, velocity does not show:
- Whether the client’s problem was resolved
- Whether the team is working on the highest-priority problems
- Amount of value delivered
- Level of client satisfaction
- Velocity only shows that the team was busy with something
- But is this an accurate measure of the success of products and services? Do clients value products based on how busy a company’s staff are?
- Velocity reflects the volume of a DT’s output
- The feature velocity of component teams = 0
- Component teams have the most stable feature velocity, because it always equals zero
- They are not able to convert PBIs into value for a client on their own; they need integration with other component teams
- When a team has Undone work
- In my opinion, the most dangerous dynamic arises in cases where a DT is NOT capable of generating a Product Increment that is RELEASABLE each Sprint
- LEAD TIME: Done (releasable) = DoD + Undone, and the team optimizes only part of the whole flow
- DoD strength and organizational agility
- System diagram
- DoD strength → Undone → releases → feedback → decisions re-ordering the PB → Agility → seeing value in a strong DoD → actions to improve the DoD
- The stronger the DoD, the less Undone work in the Sprint
- Undone work limits the number of potential releases and, as a result, the amount of feedback that can be received from the market
- The more feedback, the more decisions can be made about re-ordering the Product Backlog
- organizational agility: being able to quickly change the direction of a Product’s development
- This may prompt a Scrum Team to undertake additional improvements to create an even stronger and more exhaustive DoD
- Strengthening the DoD decreases velocity, but makes it real
- How is the strengthening of a DoD reflected in velocity?
- LEAD TIME: Done (releasable) = DoD
- Velocity will decrease, at least in the short term, but it becomes more real!
- But this is not a problem if the goal is flow optimization (shortest Lead Time), learning and agility
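The Done = DoD + Undone split above can be sketched numerically. This is a minimal illustration with invented figures, not a measurement method from either article: it shows how velocity reported against a weak DoD overstates the team's releasable output, and how Undone work adds a tail to the real lead time.

```python
# Sketch with hypothetical numbers: how Undone work inflates reported
# velocity and lengthens the real (releasable) lead time.

def real_velocity(reported_velocity: float, undone_fraction: float) -> float:
    """Points per Sprint that are actually RELEASABLE.

    reported_velocity: points the team calls "Done" under a weak DoD
    undone_fraction:   share of each item's work (integration, testing, ...)
                       left outside the DoD as Undone work
    """
    return reported_velocity * (1 - undone_fraction)

def lead_time(dod_time: float, undone_time: float) -> float:
    """Done (releasable) = DoD + Undone: days until an item is releasable."""
    return dod_time + undone_time

weak_dod = real_velocity(reported_velocity=40, undone_fraction=0.5)    # 20.0
strong_dod = real_velocity(reported_velocity=25, undone_fraction=0.0)  # 25.0

# Strengthening the DoD lowers the reported number (40 -> 25) but raises
# releasable output (20 -> 25) and removes the Undone tail from lead time.
print(weak_dod, strong_dod)
print(lead_time(dod_time=5, undone_time=5), lead_time(dod_time=8, undone_time=0))
```

The point of the sketch is only the direction of the change: a stronger DoD trades a bigger "busyness" number for a shorter, honest Lead Time.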
- Why would a DT strengthen the DoD if they know their effectiveness is evaluated by velocity?
- My answer: it is unlikely. A focus on velocity keeps the team from strengthening their DoD. As a result, it inhibits feedback from the market, learning, and agility
- Focusing on velocity inhibits organizational agility
- A focus on velocity reduces agility and increases the amount of Undone work
- Forget about velocity if your team cannot produce an increment that is releasable at least every Sprint. Focus on strengthening your DoD first
- What should we measure then?
- In conclusion - The Article’s Main Ideas
- the fundamental idea of Scrum is to create an increment that is releasable at least every Sprint
- If your team is already at that stage, then the concept of velocity is useful and genuinely demonstrates the speed at which a team converts PBIs into an increment
- If not, please focus on strengthening the DoD and on flow optimization (LEAD TIME)
- Velocity: the speed at which a DT converts PBIs into a Product Increment that is releasable
- Velocity does not show whether clients’ problems are being resolved, nor whether value is being delivered
- Velocity reflects volume of output: Velocity only shows that a team was busy with something
- Velocity of component teams is always zero
- When a team focuses on velocity and has Undone work, it is unlikely they will strengthen the DoD
- Focusing on velocity inhibits organizational agility
- ADVICE: Focus on the value that is actually delivered to the market
- Escaping the Velocity Trap
- What should an Agile organization measure, and where should it start? My answer: start with customer outcomes
- “Well, sure, that’s nice, but that’s really hard. What about stuff like velocity?” My answer was still the same
- “What’s better? A hundred story points in the wrong direction, or one story point in the right direction?”
- Output matters, but only when the delivered outcomes are right
- Worrying only about velocity is a trap; it amounts to saying “we don’t care where we end up, so long as we get there fast.” That’s just wrong
- Teams who measure their velocity but don’t or can’t measure customer outcomes may, quite simply, be driving in the wrong direction
- teams have a lot of reasons why measuring customer outcomes is very hard, and they are right
- but if you can’t tell whether you’re delivering something valuable, you may be wasting your time
- The root of the problem is that most requirements are wrong
- Measuring velocity would be the right thing to do if you could be sure that you were building the right thing
- Most teams think they have sidestepped the problem by claiming that the PO decides whether a PBI is correct or not
- And this is true, except that POs are not somehow magically omniscient; they have the same confirmation biases as the rest of us
- RESEARCH
- The problem is nicely researched in a number of studies by Ronny Kohavi
- In his research group’s long-term study of ideas and their impact on business results
- they found that:
- a third of the ideas produced positive results
- a third resulted in no change
- a third of the ideas actually made things worse
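Kohavi's thirds make for a quick back-of-the-envelope check. This is my own illustrative arithmetic (the impact weights +1/0/−1 are an assumption, not from the study): if the three buckets are roughly equal, the expected net impact of shipping an unvalidated idea is about zero, no matter how high the velocity.

```python
# Back-of-the-envelope sketch of Kohavi's thirds (weights are illustrative):
# a third of ideas help, a third do nothing, a third make things worse.
outcomes = {"positive": 1, "no_change": 0, "negative": -1}
probability = 1 / 3  # each bucket, per the study

expected_impact = sum(value * probability for value in outcomes.values())
print(expected_impact)  # 0.0: raw output alone does not guarantee net value
```

Which is why measuring outcomes, not output, is what separates the helpful third from the harmful one.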
- Things get implemented but are never used, or when they are used they require substantial rework to get them right
- One of the philosophical ancestors of Agile delivery approaches was the Toyota Way
- which identifies 8 types of waste; at the top of this list is overproduction, producing:
- items for which there are no orders
- Requirements that are never used, or that don’t deliver a desirable result, are a normally invisible form of unsellable inventory, i.e., waste
- Focus on Outcomes, not Outputs
- VELOCITY measures OUTPUT, how much work a team produced
- Except that it really doesn’t measure useful work, just that they did something
- Relying on the PO and stakeholders (SH) to tell the DT that the work was useful might seem like a solution, but they are usually the source of the PBIs, and they wouldn’t have proposed them if they didn’t think they were useful. Sprint Reviews are necessary, but not sufficient
- Example:
- We worked hard on a release and then found that customers didn’t think we had done very much. This is not unique; nearly everyone has had a similar experience
- we had delivered a lot of features to those customers, lots of output, but we hadn’t really improved the outcomes that they experienced
- Making hypotheses explicit helps
- As I learned when I talked to customers, you can’t see any of this until you start to measure customer experiences
- Every PBI is really just a theory about how you are going to make someone’s life better, and your life gets easier when you state the PBI as a hypothesis, not a statement of fact
- The book Lean UX suggests a different format from the typical user story for capturing PBIs:
- We believe that we will achieve [business outcome] if this user [persona] can achieve [user outcome] with this feature [feature]
- The important difference: belief is made explicit, as is the business outcome that will be achieved when the user has a particular experience
- What’s missing from this statement is how you will measure the user outcome
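The hypothesis template can be made fully explicit by adding the measurement the statement leaves out. A minimal sketch, assuming nothing beyond the Lean UX template itself; the class, field names, and example values are all my own invention:

```python
# Sketch (field names and example values are invented) of a Lean UX-style
# hypothesis made explicit, including how the user outcome will be measured.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    business_outcome: str  # what we believe the business will gain
    persona: str           # which user we believe this helps
    user_outcome: str      # what that user should be able to achieve
    feature: str           # the PBI that should produce the user outcome
    metric: str            # how we will MEASURE the user outcome
    target: float          # the threshold that would confirm the hypothesis

    def statement(self) -> str:
        return (f"We believe we will achieve {self.business_outcome} "
                f"if {self.persona} can achieve {self.user_outcome} "
                f"with {self.feature}, measured by {self.metric} >= {self.target}")

h = Hypothesis(
    business_outcome="fewer support calls",   # hypothetical example values
    persona="a first-time user",
    user_outcome="completing setup unassisted",
    feature="a guided onboarding flow",
    metric="unassisted setup completion rate",
    target=0.8,
)
print(h.statement())
```

Writing the PBI this way forces the team to name, up front, the evidence that would tell them the theory was wrong.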
- “Done” really has to mean “in use”
- Why Lead Time matters
- The smallest possible release: one outcome, one persona (in Lean UX vernacular)
- If you can’t maximize Outcomes, maximize Learning
- TIPS - How to get started