

Tuesday, 17 September 2013

The scribe stirs

It has been a while, dear reader, far too long! And that without an explanatory word as to my absence. For this, my apologies. Much has happened in the meantime, both personally and professionally. Not to be making excuses, but I had to stop some activities to achieve a sustainable pace, and the blog suffered. The fact that all the images suddenly disappeared for some strange reason, and that there was no one-click way to restore them, didn't help counter the inertia when I started to feel uncomfortable about how little 'log' my blog was becoming...

Since I last posted, I have been involved in a Solvency II project which, though ultimately very satisfying, was a challenge like none I had ever faced at work before. For a narrative of this project, from the perspective of change management, see our presentation at Agile on the Beach.

Otherwise, I also became a father for the first time, of twins!

During the same period of radio silence, I also attended a Cognitive Edge Cynefin foundations course, read some good books (including Continuous Delivery and The Lean Startup) and continued to build the agile consulting service at Qualogy.

My current contract is a step into the unknown for me: I am coaching (Kanban) at a company that provides platforms and infrastructure as a service. Exciting virtualised, cloudy kinda stuff. This will certainly provide me with plenty to blog about as I turn more and more to Lean to get me through the day. Topics I intend to blog on in the near future include teams as complex adaptive systems (based on the definition of a CAS offered by Paul Cilliers in "Complexity & Postmodernism"), scrumban as a solution for the enterprise, and (automated) testing as a merciless and pervasive spotlight on quality.

To fresh beginnings and instructive ruminations!

Wednesday, 28 March 2012

To estimate or not to estimate (that is the question)


Not too long ago I was introduced by Jim Benson to the concept of dropping estimation in favour of measuring cycle time. To me, this was a radical departure. My very being seemed to rebel against the notion (yes, I have been a project manager). And yet the argument against estimation seemed so powerful, so in keeping with my own experience (humans are notoriously bad at estimation, estimates are wilfully interpreted as concrete commitments) that I could not easily dismiss it. The idea of measuring cycle time also appealed strongly to my empirical bent. I was thrown into a state of confusion, of profound cognitive dissonance.

How to facilitate governance essentials like ROI projections and prioritization, if not with estimation?

Shortly thereafter, while reading David Anderson's excellent Kanban, and the account contained therein of how Dragos Dumitriu (@netheart) turned a struggling Ops Dev team around, many of these concerns were addressed. In fact, I wished I had been as smart as Dumitriu in a recent similar engagement of my own. Basically, he agreed with his customers that the team would abandon estimation and would instead undertake to complete any piece of work they committed to within a certain time frame. This was possible because existing agreements stated that the team involved should not take on work beyond a certain scope – work of that nature was managed and executed via different channels. Historical data indicated that only 2% of requests made to the team exceeded the agreed-upon scope, and it was decided that the team could query work items (by estimating them) if they felt the item in question might exceed the limits specified. The upshot of all this was that the team went from estimating every job that they did, or eventually did not do, to estimating only in exceptional cases. In combination with a couple of other very astute changes, this led to a fairly spectacular improvement in the team's performance. I was convinced, nearly...

Part of my ongoing issue with dropping estimation arose from the benefits I had seen Planning Poker deliver: if the team was not estimating, when would the opportunity for high-bandwidth communication on the content of upcoming work arise? If the team involved had no agreed cap on the scope of work it should take on, very large chunks of work might arrive and clog up its queue. What to do with outliers? How to identify them? How much variance can be tolerated in a queue without detrimentally affecting throughput? And what about batch size: if we are not estimating, how do we know when to split?


Well, dear reader, I'm sure you will be glad to know that, thanks to Siddharta's recent post on variance in velocity (and what not to do about it), the alpha waves are humming away harmoniously inside my cranium once again.

I suggest, in the spirit of CALM Alpha, that this seeming conflict between multiple agile methodologies (or their proponents) can be resolved by recognising them as various approaches to the same problem. The trick is to think in ranges and limits and not in absolute numbers. In the Dumitriu example above, estimation was implicit in the agreed scope limit (we estimate that we can get this done in less than X days), Mark Levinson has blogged on related insights (stories < 8 to 13 pts), and Ron Jeffries has of late been proposing a similar approach (we estimate a story to be sufficiently small to complete in a few days). This is congruent with the complexity heuristic of fine-grained objects.

We can conclude, in agreement with Esther Derby, that estimation can be useful because of the communication that it engenders, but that estimates more precise than the likes of “needs to be split” and “small enough” have questionable worth. Such aggregated estimates are used for managing batch size and variance (and thus throughput) and not as a stick for beating developers. Thereafter, whether you use velocity or cycle time matters little - prediction remains precarious.
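
For anyone curious about what the cycle time alternative looks like in the concrete, here is a minimal sketch (in Python, with invented numbers) of the kind of bookkeeping it involves: record how long completed items actually took, quote a percentile as your service-level expectation, and only query (estimate) the rare item that threatens to blow past an agreed limit. Everything in it - the data, the limit, the chosen percentile - is illustrative, not taken from any of the projects mentioned above.

```python
# A minimal, illustrative sketch: forecasting from measured cycle times
# instead of up-front estimates. The data and thresholds are invented.

from statistics import quantiles

# Cycle times (in working days) of recently completed work items,
# measured from "committed" to "done".
cycle_times = [2, 3, 3, 4, 4, 5, 5, 6, 7, 9, 11, 16]

def percentile(data, p):
    """Return (approximately) the p-th percentile of the measured cycle times."""
    cut_points = quantiles(sorted(data), n=100, method="inclusive")
    return cut_points[max(0, min(98, int(p) - 1))]

# Instead of estimating each item, quote a service-level expectation:
# "85% of items are done within N days of being started."
sla_days = percentile(cycle_times, 85)
print(f"85th percentile cycle time: {sla_days:.1f} days")

# An item is only queried (and possibly split or estimated) if we suspect
# it will blow past the agreed limit -- the exceptional, outlier case.
agreed_limit = 15  # days; the kind of scope cap agreed with the customer
outliers = [t for t in cycle_times if t > agreed_limit]
print(f"{len(outliers)} of {len(cycle_times)} items exceeded the agreed limit")
```

The point being that the forecast emerges from measurement after the fact, rather than from anyone's judgement before it.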

Thursday, 13 October 2011

Lean and Kanban 2011 BE - A narrative


The day before the Lean & Kanban 2011 Benelux conference started, I was still undecided as to whether I should head for Antwerp that evening as planned or take an early train the next morning (my Sunday was proving to be busier than I had envisaged). I eventually plumped for the riskier option of the early train (departure 6 am), which was possible because my colleague had gone to Antwerp as planned and secured the room in our hotel. I had unwittingly set a marker for myself: early rises and mental gymnastics would define my week. I had also exercised real options.

I arrived in Antwerp as scheduled. Having had just enough time to catch up with my colleague, register and marvel at the industrial aesthetics of the revamped hangar where the conference was taking place, I settled in for the first keynote: “Rethinking Deming” by Don Reinertsen. Given the context, it seemed that someone was spoiling for a fight. Those of us who enjoy (intellectual) confrontation, especially when peppered with rhetorical flourish, were not disappointed. Not knowing Deming (or statistics) as well as I might, I took the whole thing at face value: food for thought. I did, however, enjoy his trashing of the red bead game (“an entertaining con”) and identified with some of his reservations about the deeper (“terrifying”) implications of some elements of systems thinking (the well-designed system aims to totally emasculate the individual). Reinertsen also introduced the idea of random drift – an accumulation of (random) events affects events in the future (by changing the system through incremental feedback) – and I'm sure that if this idea were developed, chaos, tipping points and complexity theory generally would quickly come into play.

For the rest of the morning I followed a case study. Three different speakers (or speaker pairs) took us through their experiences with specific transitions which they had guided or were busy guiding. This session delivered plenty of practical know-how and it was followed by a lively discussion. Although I had hoped to pick up some pointers on dealing with that most difficult element of change processes – the sceptic – we didn't get much farther than a version of “build it and they will come”. Which is fine, but it introduces a chicken-and-egg problem: how to demonstrate without investing?

After lunch on day one I followed the Real Options and BDD/Testing track. The notion of Real Options appeals to me, as do its cousins set-based design (which Michael Kennedy talked about on day 2) and BDD. Here again, however, I was left wondering how to convince the sceptical that set-based design and real options were worth a try as an economically sound way to realise value (when not given the opportunity to actually demonstrate).

As if in tune with my concerns generally about aids to persuasion, Mark Robinson brought the afternoon to a close with a refreshingly old-style presentation in which he used an ingeniously simple spreadsheet to demonstrate the effects of various WIP limits on throughput. Limiting WIP is anathema to many managers because they think it will reduce their efficiency, which it might, but I feel it is efficacy we should be after, and not total (and totally illusory) efficiency.
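
I don't have Robinson's spreadsheet to hand, so here is a rough, hypothetical little simulation of my own (in Python, all parameters invented) that reproduces the general shape of the argument: when everything is in flight at once, context switching smears the team's capacity across the lot, and both throughput and cycle time suffer; with a sensible WIP limit, work flows through a piece at a time.

```python
# A rough, hypothetical simulation of the effect of a WIP limit on flow.
# All parameters are invented; this only illustrates the general shape
# of the argument, not anyone's actual data or spreadsheet.

import random

def simulate(wip_limit, n_items=20, capacity_per_day=3.0, seed=1):
    """Return (throughput in items/day, average cycle time in days)."""
    random.seed(seed)
    backlog = [random.uniform(2.0, 8.0) for _ in range(n_items)]  # effort in person-days
    in_progress = {}   # item id -> remaining effort
    started_on = {}    # item id -> day the item was pulled
    finished = []      # cycle times of completed items
    day, next_id = 0, 0

    while backlog or in_progress:
        day += 1
        # Pull new work only while under the WIP limit.
        while backlog and len(in_progress) < wip_limit:
            in_progress[next_id] = backlog.pop(0)
            started_on[next_id] = day
            next_id += 1
        # Context switching eats a slice of capacity for every extra item in flight.
        switching_tax = 0.05 * max(0, len(in_progress) - 1)
        effective = capacity_per_day * max(0.3, 1.0 - switching_tax)
        # The day's effective capacity is spread evenly over everything in flight.
        share = effective / len(in_progress)
        for item in list(in_progress):
            in_progress[item] -= share
            if in_progress[item] <= 0:
                finished.append(day - started_on[item] + 1)
                del in_progress[item]

    return len(finished) / day, sum(finished) / len(finished)

for limit in (100, 5, 2):   # 100 is effectively "no limit" here
    throughput, avg_cycle = simulate(limit)
    print(f"WIP limit {limit:>3}: {throughput:.2f} items/day, "
          f"avg cycle time {avg_cycle:.1f} days")
```

The size of the switching tax is of course a guess; the interesting thing is how small a penalty is needed before the unlimited case starts to hurt.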

Dave Snowden delivered the afternoon keynote. The gist of his talk (and I do his erudition, learning and sense of humour no justice by summarising it thus) was that context should be instrumental in determining appropriate action, and that only well-understood theory can help us decide what to do when faced with varying contexts in practice. Snowden continued in the combative spirit displayed by Reinertsen in the morning keynote by energetically stepping on some absolutist toes, claiming for instance that systems thinking was Taylorism by another name. His message was that kanban is not Truth, and neither is Lean, nor Scrum, nor anything else for that matter; context determines truth. Whereby Snowden of course cleverly laid claim to the Truth (there is no Truth, only truths). I was enjoying myself thoroughly!

On day 2 Alan Shalloway's opening keynote was also informed by the tension between the individual and the system. For the sake of argument he had chosen to state that it was “all about the people” but the point that he was actually making was that it is all about the interplay of individuals and the system which they inhabit/cause to emerge. Complex systems and their constituents co-evolve. Matter is both particle and wave. Man is both nature and nurture. It's all about the feedback.

After lunch I attended Bob Marshall & Grant Rule's double session on understanding effectiveness and the possibilities that Rightshifting delivers for managing it. This was all so completely new to me that I won't trouble you with half-formulated thoughts on it. Suffice it to say that I am busy studying the Marshall model and hungrily consuming anything I can find on chaordic organisations.

And then there was Jim Benson, and what a delight this guy was! He's allergic to PowerPoint and delivered his utterly engaging and frequently funny talk strolling back and forth between two flip-charts; one was his kanban and the other was, so to speak, his whiteboard. As he spoke and wrote and drew, he moved sticky notes appropriately on his planning chart, thereby breezily guiding his impressive delivery while keeping his process and progress transparent to all. It was a virtuoso performance in the application of kanban. The subject matter was also substantively different from what had come before; we were talking about the psychology of kanban.

I liked Benson's idea of existential overload. As I mentioned, my week was to be one of short nights and heavy cerebral exercise - my 2 days at LKBE were followed by three days of CSD training with Ron Jeffries and Chet Hendrickson. At one point during that training Ron, in explaining one of the positive side effects of test-driven development, told a story of how TDD had affected his wife's (!) life for the better. Ron, as he was sure many of the programmers in the room were too, was smart enough to keep many things in his head at the one time, to be dealing with many issues simultaneously, and to be investigating a few options in different areas from various perspectives, all the while also wondering how he might approach that other problem - but if his wife came to ask him at that precise moment what he wanted to eat for dinner that evening, he would suddenly yell at her at the top of his voice that he was damn well THINKING, HOW COULD he know what he wanted to have for dinner?!?!?!? TDD allowed Ron to deal with things one at a time: start something, develop it, finish it. And so on. It helped him keep a clean, clear mind, with plenty of room for dealing with unexpected pleasantries like an invitation to dinner from his loving spouse.

On the other hand, I was troubled somewhat by another phenomenon that Benson mentioned: the Zeigarnik effect. According to studies conducted by Bluma Zeigarnik, we retain memories of unfinished work better than we retain memories of that which we have brought to completion. That strikes me as a slightly depressing finding. If completion is equivalent to success and that which is unfinished represents failure, and we remember failure at the expense of success, then we're doomed to be pretty miserable animals, aren't we? But if that's what the science says... Luckily there is some reason to doubt the veracity of Zeigarnik's findings, but that doesn't take away from the general guideline: getting stuff done feels good.

Most interestingly, in the light of the undercurrent of (largely good-natured?) tribal nettle throughout the conference, Benson spoke about methodologies as liberation theologies: our life sucks because we're oppressed by some methodology-or-other. But then some-NEW-methodology-or-other comes along and liberates us; we feel good because we are free, and we decide some-methodology-or-other is okay. But as time goes on, we get all caught up in the rules, aren't going anywhere and are feeling all oppressed again. He called it the cycle of Liberation, Redemption & Addiction, which he laid on top of the story arc: Birth, Life & Resolution, which he laid on top of the kanban lanes: Ready, In-Progress & Done. It was all getting very universal.

John Seddon - “It's the system, stupid!” - delivered the conference's closing keynote. He came to wipe the floor with everybody who had gone before and did so with an abrasive élan. Reinertsen had upset him, Snowden was full of it, other systems thinkers hated him. He really didn't care, because he knew nothing about software development except this one thing: if you did as he said you should, you would do it better! What he said was, "Study, study, study!"

Surely safe-to-fail probes, set-based design and real options, OODA loops, PDCA cycles, and/or study, study, study are all variations on the theme of “inspect & adapt”. In order to inspect meaningfully and adapt successfully, systems must be extended incrementally via short iterations - it's all about the feedback, stupid!

There will always be a better way to do things.