
Wednesday, 31 August 2011

The virtues (or otherwise) of feedback


Agile methodologies are successful because they are empirical, incremental and iterative; every action is examined and feedback is generated such that improvement is attained by taking small refining steps. Running scrum with XP generates feedback (TDD, pairing) as often as every few minutes!


It is abundantly clear to me why this results in great software better aligned with the customer's needs. However, in a recent conversation on the merits of frequent (structured) feedback I decided to play the devil's advocate and asked if it weren't possible that frequent feedback might sometimes in fact stymie some of the very best minds. This suggestion was met with appropriate derision (yes, it was a conversation among 'the converted').

I wasn't to be put off that easily, however. What about Van Gogh, I asked? He hardly sold a painting in his lifetime, and when he paired with Gauguin it ended in tears. If Van Gogh had heeded the feedback he received on the fruits of his artistic endeavours he would have given up long before he got to Arles, and the world would have been a poorer place for it. Art is not science was the rejoinder. There is a science to art though, and an art to science.

What about music then? It's still art, but art with an explicitly mathematical character; how do great musicians deal with feedback? Beethoven was stone deaf by the time he wrote his majestic 9th symphony, a symphony which to this day sets the standard for all other symphonies. When Stravinsky's Sacre du Printemps was first performed some ninety years later there was a riot! The composer (and the choreographer, who was also a touch too modern for the tastes of Paris' beau monde) were nearly run out of town. The press was almost unanimous in its condemnation of this newfangled noise and the weird dancing that went with it. These days, Sacre du Printemps is regarded as one of the masterpieces of twentieth-century music. As for Schoenberg...

I wasn't convincing, however; art was far too subjective (we hadn't even touched on literature yet), and that's exactly why creative geeks get into software: a bit is either on or off. Feedback is always good. And Schoenberg really is unadulterated noise!

However, new research would seem to indicate that I may not have been entirely misguided in my playful challenging of the virtues of feedback. Creative ideas tend to generate resistance, even among people explicitly seeking innovative breakthroughs, and regardless of objective evidence supporting the new idea!

So, when is negative feedback a sign that you need to change your ways and when is it a sign that you are most definitely on the right track? There is no simple answer to that question. As a rule I would err on the side of taking feedback at face value, but try to allow for the occasions when feedback says one thing but means another. In fact, this is a specific instance of a generic problem with all process guidelines, and a topic of hot debate in the scrum community:

  • When does a product owner challenging a team become a manager pressuring the team?
  • When does guarding the process become procedural nit-picking?
  • When does positive conflict become damaging mudslinging?
  • When does the avant garde represent the emperor’s new clothes?
  • Etc.

In all cases the answer is (a sometimes seriously unsatisfying) “it depends”. And that is the art to this science of software development.

Wednesday, 24 August 2011

Fixing to get flexible


When adopting scrum, one of the more subtly difficult issues that can arise derives from the varying expectations generated by the word agile. Customers who are used to long lead times and cumbersome procedures are delighted at the prospect of 'going agile' – they will get to call the shots and there will be no penalty for change. This lightweight interpretation of agile, inspired by a craving for flexibility, can cause problems when customers decide mid-sprint that they need something new, tomorrow.

When faced with the situation outlined above, the scrum master will of course state that the customer needs to talk to the PO and that, in all likelihood, (some portion of) the new functionality will be delivered no earlier than the end of the next sprint (which could be up to 4 weeks hence!). The customer's response will often range from disappointment to disillusion; what's so agile about that?

Or, what to do as a freshly minted scrum master when the sprint ends and the last story or two (!) on the sprint backlog are not done? The freshly minted PO, who is also still learning, and his customer base are pushing hard for an extension to the sprint – surely adding a day or two to the sprint is better than delaying delivery of the story in question for (at least) another whole sprint; where is the agility in such strict time-boxing?

Management might also react negatively when it transpires that agility does not involve people hopping nimbly from one team to another as the (perceived) need arises. How, then, to ensure efficient use of resources; what's so agile about all this rigidity?

So, how should a scrum master explain to his or her environment that a measure of intransigence is in fact conducive to flexibility?

They might point out that four weeks is still significantly shorter than 12 or 26 weeks, but I wouldn't recommend it.

They could delve into the theory of complex adaptive systems and explain how scrum is an astutely chosen set of boundaries (constraints: time-boxes, rules, etc.) imposed on the behaviour of a complex adaptive system (the team), catalysed by the probe of certain types of information (user stories, feedback).

Or they could point out that scrum's basic measure of progress and unit of prediction, velocity, requires fixed sprint length and fixed team composition to have any meaning.

Or they might pontificate on the magic of cadence and appeal to their listeners' sense of (universal) rhythm. Or they could point out that if the team does not have to constantly busy itself with a fluid agenda, it can concentrate on what really matters: delivering working software that the customer actually wants, with astonishing regularity.

For the PO, whose time is also usually at a premium, a stable sprint configuration means that he or she knows months in advance what the possible delivery dates are, and also when they need to be in the office: planning meetings, grooming sessions and reviews take place in the same locale, at the same time, for the same duration and on the same day of the week, sprint in, sprint out.
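To make that cadence point concrete, here is a minimal sketch (Python, purely illustrative; the function name and dates are invented) of how a fixed sprint length turns review and delivery dates into something that can be computed months in advance:

```python
from datetime import date, timedelta

def sprint_calendar(first_day, sprint_length_days, sprints):
    """Yield (sprint number, start date, review date) for a fixed cadence.

    Assumes every sprint has the same length and the review falls on the
    last day of the sprint - a sketch, not a real planning tool.
    """
    for n in range(sprints):
        start = first_day + timedelta(days=n * sprint_length_days)
        review = start + timedelta(days=sprint_length_days - 1)
        yield n + 1, start, review

# With a stable cadence the PO can see every review (and thus every possible
# delivery date) well ahead of time:
for number, start, review in sprint_calendar(date(2011, 9, 5), 14, 6):
    print(f"Sprint {number}: starts {start}, review on {review}")
```

The moment the sprint length starts to vary, this simple calculation (and the PO's calendar) falls apart.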

If all else fails, a story might help. It is often said that Einstein's wardrobe was filled with identical suits because he felt that (mental) energy expended on deciding what to wear was energy wasted. As with much of the myth of Einstein it is difficult to establish the veracity of this anecdote, but recent research indicates that decision fatigue is indeed a force to be reckoned with. Developers, scrum masters, POs, line managers and customers will surely all agree that a team's (decision) energy quotient is far better spent on coding questions than on deliberations as to the best time for this week's grooming session.

Wednesday, 17 August 2011

What scrum is (not) – a meme


Although it's difficult to test or otherwise corroborate in any way, I have the feeling that 'the agile community' has been trending negatively over the last couple of weeks. At the very least there has been much to-do in my personal infosphere about the evils of agile (aficionados). The debate as to what agile, and specifically scrum, is seems to have been revived via a couple of slightly bad-tempered posts by Ken Schwaber. Even well-loved fixtures of scrum lore like the chicken and the pig are coming under fire.

A framework that has inspect & adapt as one of its central tenets should obviously itself be under constant review, but what surprises me about some of the negative utterances I have come across recently is the vehemence with which they are delivered. One diatribe in particular caught my attention, not only because (in a discussion of scrum) it ranged – in a series of three articles – over politics, (social) history and religion (subjects close to my heart), but most especially because of its toxicity. This guy really has a bee in his bonnet. His central metaphor, of priests and followers, is not without its worth though, and the unthinking, uncomprehending dogmatists that seem to have riled him so are definitely not a figment of his imagination – I have met them too. As for the parasitic consultants; I don't doubt they're out there either.

However.

Lambasting scrum as a religion, when it is explicitly constructed on empiricism, is a bit like creationists claiming that the theory of evolution is just one more belief system.

One of the issues that came up in discussions engendered by said article (also re-posted to LinkedIn) was the Scrum Alliance and its certification (specifically the Certified Scrum Master). I do not want to get into a discussion on the Scrum Alliance but I will defend the CSM, at least as delivered by Jeff Sutherland; not that everybody who attends the course comes away a good scrum practitioner, but then again not everybody with a university degree in computer science is a good programmer.

Two days is enough time to get through the basics of scrum; scrum is simple and incomplete, and that's the beauty of it. More importantly, Sutherland starts by introducing scrum as, above all else, a frame of mind; the way. He then goes on to explain shu ha ri and states that although it is good to follow the rules to the letter initially, gaining understanding is the goal, and that once that goal has been attained the rules can be left for what they are: useful guidelines. He also explicitly includes XP and Lean as means of maximising the efficacy of scrum. Scrum as taught by Sutherland is therefore anything but an exclusive collection of dogma and rites.

Further: the belief that agile development equates to no documentation, no planning and no commitments abounds. I have met teams that claimed to be running scrum when sprints were not fixed-length, team composition was wildly variable, random managers could change the 'sprint backlog' whenever it suited them and, most importantly, delivery of working software was haphazard at best. I have known team leads and scrum masters who 'itched' at the idea that 'their' charges were not all fully allocated – that their team was thus inefficient – and therefore assumed responsibility for sprint planning. With so much confusion out there, some attempt at certification is absolutely necessary. If someone is a CSM then you know at least that they have been exposed to a version of scrum that will work if applied as learned.

But the CSM is not enough. Sutherland & Schwaber on the topic: certifications do not guarantee excellence.

That is to say: scrum is a meme. And just as with its physical cousin, the gene, it is as good as impossible to make a perfect copy of a meme. Everybody who has ever learned anything about scrum has their very own unique scrum meme, some of which will be instantly recognisable as incomplete or otherwise faulty, some of which will be more difficult to assess unless you test them. In that regard, it could be a good idea to view certification as a necessary, but not a sufficient, condition for recognising the owner of a good scrum meme. An agile hiring process could be even more useful – new scrum masters are hired initially on a one or two month contract; their performance (basic understanding, facilitation & communication skills, removal of impediments) can then be tested over a small number of sprints, whereupon the team can decide whether the new scrum master is the scrum master they have been looking for.

I sincerely hope that the agile community succeeds in keeping its conflicts positive.

Wednesday, 10 August 2011

Agile contracting (testing, testing, testing)


As you may have noticed, dear reader, I have been less regular of late. Holidays can do that to a body. Holidays or writer's block!

Recently I have embarked on a journey of discovery in the world of (automated) testing. A wonderful world it is too. I've subscribed to magazines and read my Crispin & Gregory, I've discovered James Bach and Cem Kaner and Ward Cunningham, I've learned about Dan North's BDD and Liz Keogh, and that people can be test-obsessed. I've even joined a testers' social network. My appetite has been whetted and I'm keen to find out about specification by example.

Along the way, I've also been getting down and dirty with Java JDKs, Eclipse this and Eclipse that, Subversion and Maven and Sonar and Hudson, JavaScript, GWT, Flex, and Selenium. Hopefully I can add FitNesse to that list shortly too. And then there's test theory: model testing, domain testing, permutations & combinations, set theory and so on... I, most recently and for many years now a project manager, am decidedly uncomfortable!

I thought all this would at least yield a couple of good blog posts (continuous improvement?). Fat chance. To date, at any rate.

In other news – our first assignment to coach a transition was confirmed today. Wouter and I are completely new to the acquisition side of things, but we thought agile and guess what: it works for contracting too!

Agile contracting

We sat down together to explore possibilities
we didn't send Sales to see Purchasing

We ran one of our workshops on-site
we didn't pitch

We investigated opportunities for mutual benefit
we didn't negotiate terms up-front

We agreed on a framework for co-operation
we didn't write statements of work


    -o-

The actual contract to come out of all of this will incorporate change-for-free & money-for-nothing type clauses. 

I'm sure I will be back to my usual prolix self the next time we meet!

Wednesday, 13 July 2011

Introduction to scrum (A retrospective)


A few weeks ago I wrote about how our first workshop came about and what we had learned from it. As luck would have it, the same day that we ran that first workshop, I received a message from an ex-colleague of mine, Leonie, who was working as a business analyst at Second Floor. This small company is dealing with the difficulties of success – growth is so strong that their existing processes, which have worked well up to now, are beginning to hinder progress. One of the things they were considering in an effort to manage their success was a transition to scrum (which some teams had already partially implemented), and they were looking for project managers versed in same.

Although I had to disappoint Leonie inasmuch as I am not for hire as a project manager, we quickly agreed that there was potential for mutual benefit in our respective situations: Qualogy could assist Second Floor in a transition to scrum, while Second Floor could help Qualogy through structured feedback on our expanding workshop portfolio.

A meeting was arranged to discuss possibilities. We agreed, with the delivery manager and technical lead at Second Floor, to kick-start possible further cooperation with an afternoon workshop as a general introduction to scrum. If that delivered enough learning and was generally well received we would talk about collaborating for the entire transition. At that point we had already agreed in broad terms on what a transition would look like; it would progress team by team (teams already comprised all the skills required to deliver working software) and would comprise a series of workshops and some targeted coaching.

First things first though; we needed to create a “Scrum basics” workshop that would leave a bunch of very smart, semi-initiated (in terms of scrum and agile) and fully dedicated professionals with an appetite for more. From our previous experience we were very aware of the need to run any workshop as a series of broadcast-interaction cycles, but we only had half a day for the session and we wanted to ensure that the basics were thoroughly covered. The urge to take the easy way out and create a presentation (broadcast) with a high wow-factor (assuming we could do such a thing) was strong!

Our discussion with the delivery manager and the technical lead at Second Floor had indicated that there were some obvious (and relatively inexpensive) opportunities for process improvement, quick wins if you like. We were considering the possibility of creating hooks in our presentation-to-be for these basic improvement experiments (e.g. keeping sprint length stable) when Wouter made the inspired suggestion of setting the workshop up along the lines of a retrospective. The fact remained that it would not be trivial to involve our audience actively enough while ensuring that we covered the basics, but a retrospective framework (we used Derby & Larsen's Set the Stage – Gather Data – Generate Insights – Decide What to Do – Close the Retrospective) provided an excellent starting point for attempting just that.

We set the scene in half an hour by introducing ourselves, relating how the workshop had come about and how things might develop thereafter. We explained how the agenda had been set up, how we intended to make the material covered as relevant as possible to Second Floor, and how we wanted to garner feedback throughout the afternoon. This was followed by a short broadcast including definitions and an extremely brief history of both scrum and agile.

To gather data, we used a (familiar) brainstorm; we asked everybody to think about the process at Second Floor and to organise their thoughts by writing down (on sticky notes) what they felt was good about their process, what was bad and what could be improved. Notes were added to a prepared flip chart in the relevant category. We then asked the group to collate and de-duplicate the notes in the various sections. Using dot voting we generated a shared view of what was most important to keep doing, and what needed changing most urgently.


Then followed three Generate Insights/Decide What to Do cycles. Each cycle was built around one of the scrum roles: a short broadcast was followed by some Q&A and, finally, an explicit effort to connect some of the material just covered with specific points raised via the brainstorm. Envisioning beforehand how exactly we would make that explicit connection was difficult; we obviously did not know what would come out of the brainstorm. On the other hand we knew that there were some quick wins to be had. At the same time, we wanted to avoid obviously guiding people to pre-defined conclusions using leading questions.

Eventually we decided to formulate a list of quick wins per role/section as a list of Try.../Avoid... pairs (à la Larman & Vodde) and play it by ear. If, during the discussion following a given broadcast, an obvious opportunity to make a link to the Try.../Avoid... pairs presented itself, we would capitalise on it. Alternatively, if discussion stalled we would introduce the appropriate Try.../Avoid... pairs as a catalyst for conversation. Organising the information according to the scrum roles worked well; it provided a natural framework for covering the required ground on artefacts, ceremonies and rules while simultaneously detailing varying perspectives. However, in spite of (or maybe precisely because of) energetic discussion with the group, we struggled to connect to the results of the brainstorm in the fashion that we had hoped.

We wound the workshop up with a retrospective, for which we had chosen a combination of the Happiness Metric and the Perfection Game as feedback mechanisms. Feedback from the Perfection Game was again very useful, if challenging. Positive feedback regarding the brainstorm and the general structure of the workshop was tempered by the fact that we had struggled to explicitly incorporate the results of the brainstorm into further proceedings. This was not lost on our audience, who complained via the Perfection Game of occasionally missing the relevance to their own situation.

Another interesting aspect of the feedback from this session was that it seemed to contradict itself at times. While some participants felt that we had spent too much time on certain basics, others complained that we had raced through concepts that were not yet familiar. The Dreyfus Model (a modern variant of Shu Ha Ri if you will) provides insight here, although not necessarily a neatly packaged solution! The Dreyfus Model works on the premise that a person's level of training or expertise in a given subject dictates the best way for them to gain further proficiency in that field. In the extreme that means that the kind of instruction best suited to beginners (context-free instruction to be followed to the letter) is useless, even potentially damaging, to experts honing their intuition. And vice versa. As yet, we have not decided how to deal with this insight.

This workshop was important for us on several levels. Particularly satisfying is the fact that we have added another increment (the Scrum Basics workshop) to our product (our Agile Coaching service) while refining (that is, iteratively improving on) already existing elements of same on the basis of (potential) customer feedback. We are hoping that our extremely collaborative approach will serve the double function of setting us apart in an increasingly competitive market while also ensuring that our customers get what they want, much as that may change along the way.





Friday, 8 July 2011

A question of measurement


As stated somewhat flippantly in last week's post, the mystery of quantum effects is, in my opinion, a question of measurement. Besides uncertainty, there is this problem with the observer, whereby the observation of a quantum system somehow resolves its probabilities into actualities. This implies that measuring a system determines the state of that same system. If the system-to-be-measured is not the system-once-measured, what do measurements mean?

It's a topsy-turvy world down there at the quantum level.

But all is not necessarily as it seems at the macro level either. Think of taking the temperature of a volume of liquid using a mercury thermometer – the thermometer generates a reading because there is heat exchange between the liquid being measured and the thermometer. The act of measuring has changed the system, however slightly. For many purposes the thermometer can be regarded as accurate; the change in temperature that the thermometer effects in the liquid is negligible compared to the smallest unit of measurement.
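As a rough illustration (a sketch assuming simple lumped heat capacities and no losses to the surroundings), the reading at equilibrium is a weighted average of the two starting temperatures, and the disturbance vanishes as the thermometer's heat capacity becomes small relative to the liquid's:

$$ T_{\text{final}} \;=\; \frac{C_{\text{liquid}}\,T_{\text{liquid}} + C_{\text{thermo}}\,T_{\text{thermo}}}{C_{\text{liquid}} + C_{\text{thermo}}} \;\approx\; T_{\text{liquid}} \quad \text{when } C_{\text{thermo}} \ll C_{\text{liquid}} $$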

At the quantum scale, however, particles are so small and travel at such high speeds (close to the speed of light) that even the infinitesimal is not negligible. This makes it basically impossible to take any measurement without significantly altering what is being measured.

Complex adaptive systems display the same resistance to objective observation; for instance, it's notoriously difficult to measure the behaviour of human beings without modifying the behaviour under observation – have you ever, dear reader, suffered a blackout when taking an exam? Another example is the stock market; measuring consumer confidence can lead directly to a dampening or buoying of the market. Then there is the stock price itself – the very fact that a stock is falling is often the reason that it continues to fall (or rise and rise, as the case may be).

But doesn't that make an empirical approach to anything inherently problematic? Well, yes, but it's like Churchill reputedly said of democracy: it's the best we've got! There are steps that can be taken to reduce the subjectivity of a given observation or measurement and its impact on the system being measured; experiments can be set up whereby measurements approach objective, non-invasive observations. This is an arduous process, which can be deceptively difficult to get right, and it breaks down quickly in the face of complexity.

How should we mere mortals effectively inspect (let alone adapt) then in the wonderfully complex arena that is software development?

We should proceed with care, measure only the bare minimum and always interpret metrics as approximations, avoiding false precision. We need to recognise that metrics will drive behaviour, and be on the lookout for how. We can make the measurements we do decide to take as trustworthy as possible by keeping as many other important variables as stable as possible. Measuring in relative quantities also helps, as does changing only one thing at a time. In short: apply a lightweight version of the scientific method, remember Occam's razor and be humble!

For example: to make velocity measurements meaningful and useful (for planning purposes), we should fix the sprint duration and keep the composition of the team stable. We should estimate in story points and include only the points for completed stories in our measurements. Even having done all that, we should allow for the imprecision of our measurements by predicting only on the basis of velocity ranges.
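A minimal sketch of what such range-based prediction can look like (Python; the numbers are invented, and the fixed sprint length and stable team described above are assumed):

```python
# Illustrative only: forecast the remaining sprints from a velocity *range*,
# counting only story points from stories that were actually completed.
completed_points_per_sprint = [21, 18, 25, 20]   # completed stories only
remaining_backlog_points = 130

low = min(completed_points_per_sprint)
high = max(completed_points_per_sprint)

# Optimistic and pessimistic sprint counts, rounded up to whole sprints.
best_case = -(-remaining_backlog_points // high)   # ceiling division
worst_case = -(-remaining_backlog_points // low)

print(f"Velocity range: {low}-{high} points per sprint")
print(f"Remaining backlog: between {best_case} and {worst_case} sprints")
```

Reporting the forecast as a range ('6 to 8 sprints') rather than as a single number is itself a guard against false precision.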

Further we should remember that, while measuring team velocity can have the side effect of driving performance, measuring individual 'velocity' will have a detrimental effect on the team's results – especially if it's a manager doing the measuring!

Wednesday, 29 June 2011

A tale of two uncertainty principles


Watts S. Humphrey formulated a Requirements Uncertainty Principle as follows: For a new software system, the requirements will not be completely known until after the users have used it.

When I first came across this idea, it reminded me of another famous uncertainty principle; Heisenberg's. Superficial research revealed that Humphrey was also a physicist (besides being an important thought leader in software engineering) and it occurred to me that he had not christened his principle randomly.

Heisenberg's uncertainty principle, a cornerstone of quantum theory, states that it is impossible to know both the position and the momentum (roughly, the speed) of a quantum particle with arbitrary precision at the same time; the more precisely you pin down the one, the less precisely you can determine the other.
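In its standard form the principle puts a hard lower bound on the product of the two uncertainties (Δx for position, Δp for momentum):

$$ \Delta x \, \Delta p \;\geq\; \frac{\hbar}{2} $$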

This apparent anomaly, and others like it, have served to make quantum theory, or at least its philosophical interpretation(s), a domain of heated, at times even hostile, debate. Uncertainty makes people, including scientists, uncomfortable. Einstein is famously misquoted as saying “God doesn't play dice” in a discussion of quantum theory. Einstein was a determinist, which means (very roughly) that he assumed the theoretical possibility of measuring the universe well enough and thoroughly enough to be able to predict future states accurately.

There were others, including Heisenberg himself, who employed the uncertainty principle to endow the quantum scale with almost magical properties; this has become known as the Copenhagen interpretation. This interpretation leads, for instance, to the many-worlds problem or the more famous problem of Schrödinger’s Cat, whereby a cat exists, dead and alive at the same moment, until 'observed'. The act of observing the cat will somehow send information backwards in time, determining the outcome of an event in the past, or, more fantastically, determine which of the cats that existed simultaneously up to the moment of observation remains – “collapsing the wave function of the alive-dead cat”, to state it correctly.

Personally, I'm with Popper: the 'mystery' of quantum effects is one of measurement. Even if every measurement were theoretically possible, it would remain theoretically impossible to measure everything objectively; the universe is indeterminate. But the alive-dead cat exists only as a mathematical construct, as a statement of probabilities (propensities, as Popper called them), not as an actual array of separate physical states waiting to be resolved by some act of observation. (Although there has been some progress made recently in trying to achieve superposed quantum states with photon pairs.)

Humphrey's requirements uncertainty principle has also generated much debate, some of it heated. How would the scientists above have reacted to Humphrey's insight?

If Einstein were a developer today he might insist that it should be possible to describe a software system fully before it is built, if the requirements gathering were just done properly. He might even be willing to accept the practical (if not theoretical) impossibility of collecting perfect requirements up front but still feel that aspiring to that theoretical goal was the best way to improve.

The Copenhagen developer, maybe Heisenberg himself, would be all for scrum or some kind of agile approach as a reaction to the requirements uncertainty principle, believing that he could then dispense with estimation, planning, meetings, documentation and all those other inconvenient barriers to his creative flow. Success would be random.

Had Popper been a developer he would probably have realised that Humphrey's requirements uncertainty principle could act as his guide in his search for great software. He would work in such a way that the customer would get to see, and use, working software at as early a juncture as possible and at regular intervals thereafter. Developer and customer would together discover the requirements as the system grew. In so doing they would embrace change (uncertainty's first cousin), and maximise learning and value on the way to successful product release.

---o---

Some eighty years after the publication of Heisenberg's uncertainty principle, quantum theory continues to provoke and engage our greatest minds, and it may yet prove to be the greatest theory of them all, capable of modelling physical existence from the quantum to the galactic scale: a so-called Grand Unified Theory.