
Thursday 23 February 2012

(My) CALM Alpha


The faculty of CALM Alpha had said little beforehand about what to expect: there were no tracks programmed and no speakers announced. The stated goal of the (un)conference was:

...to grow a community of thinkers, practitioners and researchers who further the application of Complexity Science in software development as well as in the larger organisations. We want to use the experience of the real-world application of principles based on validated theory to help both theory and practice co-evolve.

As I blogged beforehand, I was excited! My (ill-formulated) expectations turned out to be somewhat overblown.

Day 1

I arrived at Wokefield House just after 9 in the morning. I breezed into the conference space, still congratulating myself on how smoothly the journey had gone, and promptly shrivelled. I recognised a few faces, but (as I was aware beforehand) I didn't know anybody in the room except by reputation or as a virtual entity. I sat at the nearest table, took out my notepad and began to doodle. What to talk about with strangers?

I took in my surroundings. There was no obvious presentation space, no rollup screen, no projector. There were five or six breakout stations with flip charts, and a makeshift information radiator had been taped to a wall. I doodled some more. As the conference opening approached (ten o'clock), my table filled up and some introductions were exchanged; contact was progressively easier thereafter.

Simon Bennet opened proceedings. We were welcomed and told how the two days had been envisaged: day 1 would be dedicated to the justification and rationalisation (or otherwise) of agile and/or lean practices on the basis of (complexity) theory, and on day 2 some of the techniques developed for dealing with (socially) complex environments and/or problems would be elucidated.

Joseph Pelrine said a little about the inter-connectedness of those attending. A questionnaire had been circulated beforehand to this end and what had been remarkable to the faculty was how loosely connected the nascent network was.

Dave Snowden took the floor to introduce some basic complexity concepts and generally kick things off. I scribbled down notes as he spoke, as is my wont, and continued to do so throughout the conference. Unfortunately, I departed on Friday afternoon without my notebook. This means that my impressions as related here will be scant, and even more subjective and (subconsciously) selective than usual. The chronology as stated might also be suspect. Luckily there was a lively twitter stream.

Snowden ran through the main points made in “The new dynamics of strategy: Sense-making in a complex and complicated world”; this had been recommended reading before the conference, so the pace was merciless. The three basic assumptions prevalent in organisational decision support and strategy (order, rational choice, and intent) were explained and (contextually) debunked. Then the idea of contextual complexity was proposed, the basic claim being that complex systems composed of human beings are a breed apart. This claim is supported by three pillars: humans are 1) not restricted to one identity, 2) not limited to acting in accordance with predetermined rules, and 3) not limited to acting on local patterns.

Although I am not a greedy reductionist, I have a fundamental problem with the philosophical implications hereof, as I regard these observations as statements of degree. Humans may not be ants, but the interaction of several humans can better be compared to the interaction of multiple ant colonies, not the interaction of individual ants. At the same time, the stupidity of people acting locally in (very) large groups is well documented, so I would suggest that although 5 guys in a room may not be a flock of birds, tens of thousands of guys packed into a football stadium just might be.

I felt the same discomfort when later watching the overtly political 'All Watched Over by Machines of Loving Grace', as Snowden recommended in his first address to the assembled. It would turn out that this was a moot point, to a degree a question of semantics. I came to realise that what was important was that a system of nodes should often more correctly be viewed as a system of systems.

In this first broadcast, the notions of ontology, epistemology & phenomenology were also touched on.

Further, three heuristics for dealing with complex situations were introduced: fine-grained objects, dis-intermediation & cognitive spread. This resonated immediately with me and I was surprised that it did not strike a chord with many others. We had come to a mashup, and these heuristics were varieties of practices current in agile (short time-boxes & resultant strategies, transparency, team commitment) and lean (reducing batch size, gemba & kanban boards, swarming).

Then it was time for the first breakout. There was some dissatisfied mumbling at this point; some felt that our aim was unclear, and the faculty's response (actively avoiding premature convergence) did not succeed in addressing people's concerns. Acting in the present as opposed to aiming at a future state is difficult. Even when explicitly setting out to explore...

1st breakout

The room seemed to have been set up for open space, but we were quickly disabused of that idea; we were told that open space might be harmful, and that the law of two feet did not apply: if you started a conversation, for instance, it was up to you to keep it going. Everybody had one vote for this session. Individuals could propose topics and/or vote. If you proposed a topic, however, you had to vote for same. The top 6 topics would be discussed. I proposed a discussion labelled 'How to avoid emergent lock-in' and to my (not necessarily pleasant) surprise it got enough votes to claim a station.

When I wrote the word lock-in I was thinking specifically of Jaron Lanier's “You are not a Gadget” but when asked what lock-in was I had to face up to the fact that I couldn't actually name any of Lanier's examples. I instead used the weak example of the 'design' of the human breathing apparatus - no good engineer would knowingly (partially) combine the tubes for breathing and swallowing as observed in humans.
Having since had time to check my sources: Lanier states that lock-in “turns thoughts into facts, philosophy into reality” and cites examples such as MIDI representation, UNIX & even files (the first Macs were apparently file-less). According to Lanier, once an idea becomes reality it can be difficult to even see that there are (or were at one point) alternatives. It is difficult if not impossible to retrace the progression of a design space such that all the options that were available at one point become available again (co-evolution is irreversible). As design options diverge it might initially be simple to 'jump' from one evolutionary path to another, but as time goes on many options will expire.

We covered some interesting ground. I wasn't accused of dressing up a case for big design up front with fancy terminology (which might have been fair comment). Chris Matts suggested Real Options as the remedy, as he would repeatedly throughout the two days in response to a variety of problem propositions. Matts has seemingly turned Real Options into the proverbial hammer, and that's possibly why he didn't learn anything at CALM Alpha.

The problem remained. What if Robert Frost hadn't taken the road less travelled in that yellow wood? It made all the difference after all. 

In the loosely organised feedback session afterwards neither I nor anybody else involved in our discussion felt the need to report – although the other topics covered varied, much of what came out of them was relevant to our own discussion. I decided definitively against sharing when Snowden, responding to conclusions from another group's conversation, spoke about 'the need to maintain resilient capability'.

It had emerged that some of us were actually quite uncomfortable with emergence.

Given that TDD (as called for in XP) clearly facilitates emergent design, that the agile principles talk explicitly about emergence (“The best architectures, requirements, and designs emerge from self-organizing teams”), and that the currently popular Kanban method is subtitled “Successful Evolutionary Change for Your Technology Business”, this was a surprise.

After lunch Joseph Pelrine talked about multi-ontological sense-making, Stacey's model, the Cynefin model and so on. What stuck with me was: High agreement in the face of high uncertainty is religion, disagreement in the face of certainty (that something has to change) is politics. Assuming uncertainty, and that there is no universally applicable solution, is science.

2nd breakout

The second breakout was organised differently. People who had ideas for discussions were given a few minutes to prepare a pitch. Then these individuals had to pair off and present their ideas to a given table. Every table could assign a total of 7 points to the ideas presented together (4/3, 0/7 etc). After every pitch the presenters paired off again and repeated the process at another table. The ideas that gained the most points were to be the discussion topics for the breakout.

I took part in a truly enlightening, even inspiring, discussion on the '(Ab)use of metaphors from the natural sciences in (change) management'. Besides talking about metaphors that might be useful for illustrating the idea of weak signals, we also spoke about the need to test the metaphors themselves – how to set up a safe-fail probe for the use of a given metaphor? Dave Snowden had said earlier on that it was necessary to change language when trying to effect change in an organisation: when you started hearing your newly introduced language being bandied about, thrown back at you, you knew your ideas were gaining traction. We discussed the possibility of people parroting new language and how to discern what was actually being signalled by the use thereof. Somehow we ended up on stereotypes and archetypes.

Dinner that evening was in the luxurious setting of the old Mansion House. Talk was light-hearted and ranged from techniques for hiring quality personnel to male pattern baldness. Post-prandially we were joined by Simon Bennet and the conversation turned to emergence. In a slightly less constrained situation the talk was wild, and in no time we had progressed to the noosphere and other outlandish possibilities. When we were asked to leave the dining room I headed back to my room; I had been up since four that morning.

Day 2

Dave Snowden opened the second day with some reflections on multi-ontological sense-making and the Cynefin model. It was important to remember that the model was not a categorisation tool, it was about sense-making and thus in fact subjective & contextual. Now obviously talk like this could alienate anybody you might want to help with it so it was important to avoid definitions initially. Having shown us quickly how this might work, Snowden handed over to Pelrine who talked about techniques for generating social cohesion.

How does one seed a social network? By creating shared context, designing shared containers and introducing randomness. One of the examples offered in illustration was of a team which was asked to give its sprint demo in its own team room (as opposed to sending a representative to a review in some meeting room) to the full gamut of stakeholders (shared context, container) whereafter they all walked to lunch along a narrow corridor (container) such that the group became mixed and conversations sprang up between unlikely partners (randomness). Pelrine used the analogy of trying to approach someone of the opposite sex but backing down because of the enormity of the question “What could we talk about?”

Then we ran an exercise comprising the Social Network Simulation and Ritual Dissent methods. These were great fun to do and it was immediately clear how they could be very powerful – even though our execution of them was a little stilted. Trying to think up an experiment for the SNS was especially difficult given the time restrictions and the lack of a concrete context.

There was a real buzz in the room after the Ritual Dissent.

And then things turned ugly.

Simon Bennet spoke briefly about signifying and a technique that Cognitive Edge used for same. Snowden intervened, saying that the technique was patented and it was therefore probably not a good idea to talk about it. This incident left everybody, each for their own reasons, feeling somewhat uncomfortable. The reaction of some attendees was a little exaggerated and factually incorrect (this incident was the only mention of patents throughout). Snowden's reaction on Twitter to some of this (porky pies, Cynefin has no patents) was however also disingenuous. Cynefin's methods may not be patented, but neither are they open source as claimed. See Liz Keogh's excellent blog post on the subject.

After lunch we continued with a fishbowl exercise. I thought the mashup angle was finally going to come into play. Instead an initially entertaining cooking metaphor degenerated into pretty tiresome jockeying for position, the whole agile (specifically scrum, the mention of which seems noxious to some) vs lean thing. @ThirstyBear: “barely concealed rifts in the community”, @lunivore “We're applying labels like complex and complicated to make our method look better than the other guy's!”.

I took my place in the fishbowl at one point and suggested that the 3 heuristics delineated the day before were an excellent starting point in the search for common ground. This simply did not land. I left the circle, slightly dismayed, set-based design and Lean Startup as safe-fail probes unmentioned...

For the last session of the day we could choose from several options. I chose to take part in a Social Construction of Emergent Properties (archetype extraction) exercise run by Joseph Pelrine. Towards the end, I was only passively involved and literally saw the properties emerging in real time as the rest were busy. I hesitate to use words like epiphany, but this came close!


My very personal conclusion is that there is much in lean and agile that is already geared towards dealing with complexity (e.g. small steps & fast feedback, visualisation & transparency, teamwork & swarming, but also set-based design etc.) and that the difficulty lies in identifying which aspects of our work may lie in which domain in any given context. This is what Cynefin (and/or similar frameworks) can offer us. To my mind the work being done at Cognitive Edge on the development of collaboration techniques that are tailored to the (scientifically demonstrated) cognitive peculiarities of being human is just as exciting. The lack of clarity surrounding the use and/or extension of these methods (in a commercial setting) is unfortunate...

...a fevered mind could see an elaborate marketing ploy in CALM Alpha. Otherwise the event might yet prove a successful first step towards growing a community as envisioned.


Tuesday 14 February 2012

Fractals at the office


I have been struggling with the idea for this post since LKBE11 in October of last year. For most of that period, suffering as I was from a bulging disc in my lower back, I was on large doses of morphine and I attributed my lack of progress to my induced state of mind. The one post I did complete in the same period was meandering and maudlin, and I took that as proof that the problem was the author, not his subject matter.

Now, clear-headed for almost a fortnight, I have only succeeded, despite concerted effort and a sense of urgency (I wanted to post prior to CALM Alpha), in bloating my notes. I've had to accept that the basic problem is that I do not in fact know what it is that I want to say. That is, as long as I do not try to examine my thoughts directly, I 'know' intuitively what it is that I want to express but, as soon as I try to look my subject in the eye, examine it in detail so to speak, it seems to shimmer & fade and dissolve into a million tiny pieces. Each fragment may be open to investigation but it seems that I can only grasp the whole obliquely.

I have played with a number of titles such as “The magic of feedback; the roux to your secret sauce”, “The key to Emergence (Feedback)”, “On Fractals, Feedback and Emergence (Why IT is where it's at)”, “Connectedness; Complexity simplified” and so on. To no avail.

So I have decided to proceed as follows: I will revisit LKBE11 and the topics touched there that are close to my heart, propose some auxiliary reading - inspired by the reading list for Calm Alpha - and wrap up with a brief deliberation (and an outlandish extrapolation) on an emergent order that some feel is on its way, if it isn't in fact already nascent. In doing so I hope to give you, dear reader, some idea of why I think IT is such an exciting field to work in at the moment.

Probability

At LKBE11 Don Reinertsen & Alan Shalloway separately made a case against Deming's 95% (as related in an earlier post) while John Seddon wound the conference up with a stinging attack on the idea that individuals could win against the system. In fact this is a (useful) false dichotomy.

Stating that the probability of a person failing to change a system approaches 1, when he or she is a constituent of that system, would seem to preclude any individual successfully effecting change in any system. But we now know that small changes in initial conditions (of any system incorporating recursive feedback) can lead to large scale differences in output over time; the (by now slightly infamous because so widely misunderstood/frequently overstated) butterfly effect.
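That sensitivity to initial conditions is easy to demonstrate. Here is a minimal sketch (my own illustration; the logistic map is a textbook recursive-feedback system, not something discussed at LKBE11) in which two trajectories start a billionth apart and end up completely decorrelated:

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n), a minimal system with
# recursive feedback. In the chaotic regime (r = 4) a tiny perturbation
# of the initial condition is amplified at every iteration.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # perturbed by one part in a billion

# Early on the trajectories are indistinguishable...
early_gap = abs(a[5] - b[5])
# ...but after a few dozen turns of the feedback loop they bear no
# resemblance to each other: the butterfly has flapped its wings.
late_gap = max(abs(x - y) for x, y in zip(a[40:], b[40:]))
```

The point is not that any flap of the wings causes a storm, only that in a system with recursive feedback no perturbation is guaranteed to stay small.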

We humans are not good at probability. The Buddha (karma) and Jesus (Do unto others..., Live by the sword, die by the sword...) knew that the system (society in their case) was a function of the individuals composing it and their interactions. I'm pretty sure that they themselves also understood that their preaching was fundamentally flawed. Doing good acts may well increase the probability of meeting good acts, but it's not a direct relationship (certainly not in the asymmetrical power distributions of old). Living by the sword most definitely increases the chances of dying violently, and yet many a warrior has passed away peacefully in their sleep.

Both Christianity and Buddhism dealt with this problem by introducing a non-falsifiable reckoning. If a Christian lived a good life he or she would see paradise in the afterlife; if a Buddhist lived a less than exemplary life he or she would be punished by dint of reincarnation (as a lower form). Don't blame the ancients though, probability continues to trouble us - see for instance the Copenhagen interpretation of Quantum Mechanics (as parodied by Schrödinger's famous thought experiment).

I have found Popper's “Quantum Theory and the Schism in Physics” and, more recently, Nassim Nicholas Taleb's “Fooled by Randomness” to be very helpful in trying to avoid probability pitfalls in my thinking.

And back to LKBE11. Despite the rhetorical enmity, the three speakers mentioned proposed similar methods for improving systems – the way to create an effective and humane system is to give those governed by it the power to adjust same according to their (quantified) experiences working in it. What they might have agreed on was that it was in fact all about the feedback...

Feedback

I was first introduced to the power of feedback, self-similarity (fractals) and all that wonderful stuff by James Gleick's seminal “Chaos”. In the days and weeks after LKBE11 I realised that the principles of the various agile methodologies and lean, and the basic practices they imply, in fact represent a fractal! Deliver value from the start, expand incrementally, refine iteratively, create a trusting/trusted environment, seek out and incorporate feedback early and often – these edicts apply equally well at all levels, whether you are writing software, coaching or setting up a new business venture. They are loosely worded, ambiguous even, such that they must be adapted to any given concrete context or scale; no two “instances” of a fractal pattern are the same.

And it gets more exciting: TDD, for instance, is in many ways a fractal instance of the agile principles; hypothesise, test, reflect, repeat (or OODA, PDCA, do + inspect & adapt, whatever your flavour or context). Further, Kent Beck's 4 simple design rules are:

  1. Run all the tests (create a trusting/trusted environment)
  2. Contain no duplicate code (expand incrementally)
  3. Express all the ideas the author wants to express (seek out and incorporate feedback early and often)
  4. Minimize classes and methods (refine iteratively)

So, if, for instance, a team is working scrum + lean, with XP inside, guided by A-TDD, self-similar patterns can be observed throughout the system. Multiple levels of fast feedback are then built in, allowing magic to happen through the emergence of strange loops. Strange loops? I only began to truly understand the implications of chaos theory, fractals etc. through reading Douglas Hofstadter's “Gödel, Escher, Bach: An Eternal Golden Braid”. To do a whopping intellectual feat absolutely no justice, I will summarize by saying that Hofstadter makes the case in this book for emergence as a function of feedback.
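The smallest loop in that stack fits on a page. Here is a minimal sketch (my own toy example, not from the conference or from Beck) of one pass through the hypothesise, test, reflect, repeat cycle:

```python
# One turn of the TDD loop in miniature.
# Hypothesis: a leap year is any year divisible by 4, except century
# years that are not divisible by 400 (the Gregorian rule).

def is_leap(year):
    """The 'production' code that the cycle converges on."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The tests come first (red), the implementation follows until they
# pass (green), and the reflect step is where you refactor and sharpen
# the hypothesis before repeating at the next scale up.
assert is_leap(2012)       # a plain fourth year
assert not is_leap(2011)
assert not is_leap(1900)   # century year, not divisible by 400
assert is_leap(2000)       # century year, divisible by 400
```

The same shape recurs at sprint scale (plan, build, demo, retrospect) and at release scale, which is exactly the self-similarity the paragraph above is pointing at.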

Emergence, Evolutionary Change

At LKBE11 I was introduced to the Marshall Model. In his original paper, Marshall states that “The basic premise of Rightshifting is that organisations can improve their effectiveness incrementally (and with little turbulence / disruption), until such time as they approach a transition zone.” but also “At these transition zones, major upheaval – in the form of a shift of mindset – is required to proceed further to the right i.e. to continue to improve the effectiveness of the organisation.”



Personally, I suspect that this major upheaval is a symptom of the phase transition as much as an agent thereof. Things proceed incrementally or they do not. Think of heating water through its various physical states, from solid to liquid to gas. The change applied is the addition of heat. Heat may be applied constantly, but the behaviour of the system varies; it displays punctuated equilibrium. Ice remains ice even as its temperature creeps upwards, but as it approaches 0 °C strange things start to happen: some liquid water is produced, yet the rise in temperature stalls even though the heat applied remains constant. This is called latent heat (the heat supplied goes into loosening bonds, in this example freeing molecules from their crystal straitjacket, instead of raising the temperature) and it occurs at all phase transitions. There is increased turbulence too: the newly freed water molecules in our example are sliding around, banging into bonds, imparting energy and amplifying the instability of the remaining crystal. So, even though the system displays major disruption and/or 'sudden' state changes (large shifts), and allowing for significant qualitative difference on either side of a transition zone (liquid/gas, synergistic/chaordic mindset), it remains the case that the change process (heat supply, conscious adaptation of practice) evolves step by step.
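The plateau is easy to reproduce numerically. Here is a toy model (my own illustration; the physical constants are rough textbook values and boiling is ignored) that pumps constant heat into a block of ice and tracks its temperature:

```python
# Toy model of heating ice at constant power. While any ice remains,
# incoming heat goes into melting (latent heat of fusion) rather than
# raising the temperature, so the trace shows a plateau at 0 °C even
# though heat is supplied at a constant rate throughout.

C_ICE = 2.1       # J/(g*K), specific heat of ice (approximate)
C_WATER = 4.2     # J/(g*K), specific heat of liquid water (approximate)
L_FUSION = 334.0  # J/g, latent heat of fusion (approximate)

def heat_curve(mass_g=10.0, t0=-20.0, power=50.0, dt=1.0, steps=150):
    """Return the temperature after each step of constant heating."""
    temp, melted = t0, 0.0
    trace = []
    for _ in range(steps):
        q = power * dt
        if temp < 0.0:                 # warming the ice
            temp = min(0.0, temp + q / (mass_g * C_ICE))
        elif melted < mass_g:          # melting: the temperature stalls
            melted = min(mass_g, melted + q / L_FUSION)
        else:                          # warming the liquid
            temp += q / (mass_g * C_WATER)
        trace.append(temp)
    return trace

trace = heat_curve()
plateau = [t for t in trace if t == 0.0]
# The trace climbs, stalls at 0 °C for a long stretch while the ice
# melts, then climbs again: incremental input, punctuated behaviour.
```

Constant input, discontinuous-looking output: which is the whole point of the analogy above.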

For a truly enlightening discussion on (biological) evolution by increment, and much more, see Daniel Dennett's “Darwin's Dangerous Idea” and his “skyhooks & cranes” analogy. Further understanding may be gleaned from John D. Barrow's “Impossibility”, which is a delightfully accessible book on limits, tipping points and so on.

Another concept that I was introduced to at LKBE11, and which struck me as oddly familiar, was Real Options. I was sure I had come across it before. And I had. The central argument of Dennett's (loftily titled) “Consciousness Explained” is that mind is an emergent property of the brain. A model for consciousness is proposed which boils down to the self as “narrative centre of gravity”; consciousness as an assemblage of mechanistic tricks including a “multiple drafts” model for action. This has largely been borne out by neuroscience since, with one important caveat – we do not apply “best fit” when choosing from multiple drafts but “first fit”. For a less speculative discussion on the mechanics of mind, see the excellent and more recent “We are our Brains” by Dick Swaab.

Complexity theory

Dave Snowden also spoke at LKBE11. Cynefin and the sense-making model were an exotic beast to me, angry socialist Celts with a sense of humour less so as it happens. I did not understand everything, and am still quite vague on practical detail even having done pretty much all my reading homework for CALM Alpha, but I was immediately sold on the general framework. I undertook to re-read “Frontiers of Complexity” by Peter Coveney & Roger Highfield but haven't gotten round to it yet. What stuck with me from that book was the idea that complexity at one scale often yields to simplicity at another, and vice versa. It is possible to know something about attractors, to recognise patterns, but not necessarily when or where they will arise. On the other hand, setting simple boundaries on a system with a sufficiently high number of agents can give rise to complex patterns, emergent properties. Snowden cites the example of a flock of birds (simple rules: fly to the centre of the flock, match speed, avoid collision), saying that you can be sure they will avoid an oncoming mountain peak, just not whether they will go past it on the left or the right. For a spectacular example of flocking in action see the following video of a starling murmuration.
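Those three rules are simple enough to put in code. Here is a toy 2-D flock (my own sketch, loosely in the spirit of Reynolds' boids; the weighting constants are arbitrary) in which cohesion emerges from purely local rules:

```python
import random

# Each bird follows only the three local rules from Snowden's example:
# steer toward the flock centre, match the flock's velocity, and avoid
# collisions. No bird has a plan, yet a cohesive flock emerges.

def step(positions, velocities):
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    avx = sum(v[0] for v in velocities) / n
    avy = sum(v[1] for v in velocities) / n
    new_vel = []
    for (x, y), (vx, vy) in zip(positions, velocities):
        vx += 0.05 * (cx - x)      # rule 1: fly to the centre of the flock
        vy += 0.05 * (cy - y)
        vx += 0.1 * (avx - vx)     # rule 2: match the flock's velocity
        vy += 0.1 * (avy - vy)
        for ox, oy in positions:   # rule 3: avoid collision with neighbours
            if 0 < (ox - x) ** 2 + (oy - y) ** 2 < 1.0:
                vx -= 0.05 * (ox - x)
                vy -= 0.05 * (oy - y)
        new_vel.append((vx, vy))
    new_pos = [(x + vx, y + vy)
               for (x, y), (vx, vy) in zip(positions, new_vel)]
    return new_pos, new_vel

def spread(ps):
    """Mean distance of the birds from their centroid."""
    cx = sum(p[0] for p in ps) / len(ps)
    cy = sum(p[1] for p in ps) / len(ps)
    return sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in ps) / len(ps)

random.seed(1)
pos = [(random.uniform(-20, 20), random.uniform(-20, 20)) for _ in range(30)]
vel = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]

before = spread(pos)
for _ in range(100):
    pos, vel = step(pos, vel)
after = spread(pos)
# The scattered birds pull together into a flock: an emergent property
# that appears nowhere in the three rules themselves.
```

Where the flock ends up is, as Snowden says of the mountain peak, not something the rules let you predict; that the birds cohere is.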

Elsewhere Snowden talks informatively and entertainingly about order & chaos since antiquity, which got me thinking.

This idea of patterns which arise again and again also surfaces over and over throughout the history of human thought. From the Buddha (reincarnation) to French folklore (plus ça change, plus c’est la même chose) and beyond to Nietzsche’s “eternal recurrence” and Camus' search for meaning in the Myth of Sisyphus, humanity has seen this tendency to (almost) repetition as something negative, fateful, at the very least something to be faced down, overcome. Should we despair? According to Snowden there's no need; with the right tool set, the beast can be tamed! I look forward to learning more.

To conclude, at the risk of indulging my apophenia: if mind is an emergent property of the brain with its fifty to a hundred billion neurons and their trillion connections, constrained locally by a relatively small number of variables (potential barriers, neurotransmitters), has humanity not been building its superbrain(s) since the advent of language? Or at least since agriculture was first practised? Every advance in communications since, from domesticating the horse to the printing press to the telegraph, and on to the internet, has increased our interconnectedness. Our ability to share and exchange narrative. The internet may as yet have a relatively small number of physical nodes and display only loose connectedness but phenomena like Facebook and Twitter are adding another dimension.

If you were to view users as equivalent to neurons and the internet 'merely' as infrastructure, then that one facet of the human super brain alone already has 2 billion neurons... Supra-identities like nation, tribe or corporation (that may pre-date the internet) could get stronger and display more cohesion with ever more internet capacity and connectedness. I wouldn't go as far as those who talk of Gaia, or The Technium, or The Singularity. I'm not suggesting a global consciousness. For one individual human mind the self is already a much trickier idea than might appear at first glance. I'm thinking more in terms of a fragmented and multidimensional space where initially rough-hewn, badly bound and shifting personalities (consciousness as an extremely loose narrative centre of gravity) interact. But even at that scale (decidedly smaller than global) some problems already seem a lot less of an issue. Intergalactic travel, for example: if a consciousness can expect to exist coherently for centuries, what's a few light years' travel between friends?