Not too long ago I was introduced by Jim Benson to the concept of dropping estimation in favour of measuring cycle time. To me, this was a radical departure. My very being seemed to rebel against the notion (yes, I have been a project manager). And yet the argument against estimation seemed so powerful, so in keeping with my own experience (humans are notoriously bad at estimation, estimates are wilfully interpreted as concrete commitments) that I could not easily dismiss it. The idea of measuring cycle time also appealed strongly to my empirical bent. I was thrown into a state of confusion, of profound cognitive dissonance.
How does one facilitate governance essentials like ROI projections and prioritization, if not with estimation?
Shortly thereafter, while reading David Anderson's excellent Kanban and its account of how Dragos Dumitriu (@netheart) turned a struggling Ops Dev team around, I found many of these concerns addressed. In fact, I wished I had been as smart as Dumitriu in a recent similar engagement of my own. Essentially, he agreed with his customers that the team would abandon estimation and instead undertake to complete any piece of work they committed to within a certain time frame. This was possible because existing agreements stated that the team should not take on work beyond a certain scope; work of that nature was managed and executed via different channels. Historical data indicated that only 2% of requests made to the team exceeded the agreed-upon scope, and it was decided that the team could query work items (by estimating them) if they felt the item in question might exceed the limits specified. The upshot was that the team went from estimating every job they did (or eventually did not do) to estimating only in exceptional cases. In combination with a couple of other very astute changes, this led to a fairly spectacular improvement in the team's performance. I was convinced, nearly...
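Dumitriu's agreement can be thought of as reading a delivery commitment off historical cycle-time data rather than estimating each item. A minimal sketch of the idea follows; the cycle times, the 85th-percentile choice, and the `percentile` helper are all illustrative assumptions of mine, not figures from Anderson's book:

```python
# Hypothetical historical cycle times (in days) for completed work items.
cycle_times = [3, 5, 4, 8, 2, 6, 5, 7, 4, 30, 5, 3, 6, 4, 5, 9, 4, 6, 5, 7]

def percentile(data, pct):
    """Return the value below which roughly `pct` percent of the data falls
    (nearest-rank method)."""
    ordered = sorted(data)
    k = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[k]

# A commitment pitched at, say, the 85th percentile covers the bulk of
# requests; anything historically beyond it is the rare outlier that
# would be queried (estimated) individually.
sla = percentile(cycle_times, 85)
outliers = [t for t in cycle_times if t > sla]
print(f"Commit to delivery within {sla} days; "
      f"{len(outliers)}/{len(cycle_times)} items would need individual estimation")
# → Commit to delivery within 7 days; 3/20 items would need individual estimation
```

The team then replaces per-item estimation with one standing promise, and only the handful of items that look like outliers ever get estimated at all.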
Part of my ongoing issue with dropping estimation arose from the benefits I had seen Planning Poker deliver: if the team was not estimating, when would the opportunity for high-bandwidth communication about the content of upcoming work arise? If the team had no agreed cap on the scope of work it should take on, very large chunks of work might arrive and clog up its queue. What to do with outliers? How to identify them? How much variance can be tolerated in a queue without detrimentally affecting throughput? And what about batch size: if we are not estimating, how do we know when to split?
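The "clogged queue" worry is easy to demonstrate. In this toy sketch (item sizes entirely hypothetical), two queues hold the same total work and finish it in the same elapsed time, yet one oversized item at the front of the queue inflates the average cycle time of everything behind it:

```python
import statistics

def cycle_times(job_sizes):
    """Single worker; all jobs queued at time zero and served in order.
    Each job's cycle time is its own duration plus all the work ahead of it."""
    clock, times = 0, []
    for size in job_sizes:
        clock += size
        times.append(clock)
    return times

# Two hypothetical queues with the same total work: 40 days' worth.
uniform = [5] * 8                     # capped, predictable item sizes
lumpy = [33, 1, 1, 1, 1, 1, 1, 1]     # one oversized item clogs the queue

for label, queue in [("uniform", uniform), ("lumpy", lumpy)]:
    print(f"{label}: mean cycle time {statistics.mean(cycle_times(queue)):.1f} days")
# → uniform: mean cycle time 22.5 days
# → lumpy: mean cycle time 36.5 days
```

Raw throughput over the whole period is identical, but the lumpy queue's average cycle time is far worse, which is one argument for a scope cap (or a "needs to be split" judgement) on individual work items.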
Well dear reader, I'm sure you will be glad to know that, thanks to Siddharta's recent post on variance in velocity (and what not to do about it), the alpha waves are humming away harmoniously inside my cranium once again.
I suggest, in the spirit of CALM Alpha, that this seeming conflict between agile methodologies (or their proponents) can be resolved by seeing them as various approaches to the same problem. The trick is to think in ranges and limits, not absolute numbers. In the Dumitriu example above, estimation was implicit in the agreed scope limit (we estimate that we can get this done in less than X days); Mark Levinson has blogged on related insights (stories < 8 to 13 pts); and Ron Jeffries has lately been proposing a similar approach (we estimate the story to be sufficiently small to complete in a few days). This is congruent with the complexity heuristic of fine-grained objects.
We can conclude, in agreement with Esther Derby, that estimation can be useful because of the communication it engenders, but that estimates any more precise than the likes of "needs to be split" and "small enough" have questionable worth. Such coarse estimates serve to manage batch size and variance (and thus throughput), not as a stick for beating developers. Thereafter, whether you use velocity or cycle time matters little - prediction remains precarious.