I have been struggling with the idea for this post since LKBE11 in October of last year. For most of that period, suffering as I was from a bulging disc in my lower back, I was on large doses of morphine and I attributed my lack of progress to my induced state of mind. The one post I did complete in the same period was meandering and maudlin, and I took that as proof that the problem was the author, not his subject matter.
Now, clear-headed for almost a fortnight, I have only succeeded, despite concerted effort and a sense of urgency (I wanted to post prior to CALM Alpha), in bloating my notes. I've had to accept that the basic problem is that I do not in fact know what it is that I want to say. That is, as long as I do not try to examine my thoughts directly, I 'know' intuitively what it is that I want to express but, as soon as I try to look my subject in the eye, examine it in detail so to speak, it seems to shimmer & fade and dissolve into a million tiny pieces. Each fragment may be open to investigation but it seems that I can only grasp the whole obliquely.
I have played with a number of titles such as “The magic of feedback; the roux to your secret sauce”, “The key to Emergence (Feedback)”, “On Fractals, Feedback and Emergence (Why IT is where it's at)”, “Connectedness; Complexity simplified” and so on. To no avail.
So I have decided to proceed as follows: I will revisit LKBE11 and the topics touched on there that are close to my heart, propose some auxiliary reading - inspired by the reading list for CALM Alpha - and wrap up with a brief deliberation (and an outlandish extrapolation) on an emergent order that some feel is on its way, if it isn't in fact already nascent. In doing so I hope to give you, dear reader, some idea of why I think IT is such an exciting field to work in at the moment.
At LKBE11 Don Reinertsen & Alan Shalloway separately made a case against Deming's 95% (as related in an earlier post) while John Seddon wound the conference up with a stinging attack on the idea that individuals could win against the system. In fact this opposition between individual and system is a (useful) false dichotomy.
Stating that the probability of a person failing to change a system approaches 1 when he or she is a constituent of that system would seem to preclude any individual ever successfully effecting change in any system. But we now know that small changes in the initial conditions of any system incorporating recursive feedback can lead to large-scale differences in output over time: the (by now slightly infamous, because so widely misunderstood and frequently overstated) butterfly effect.
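A toy numerical sketch of this sensitivity (my own illustration, not from any of the speakers): the logistic map is about the simplest system whose output feeds straight back into its own input, and in its chaotic regime a microscopic nudge to the starting point yields a macroscopically different trajectory.

```python
# Sensitive dependence on initial conditions, sketched with the logistic
# map x -> r*x*(1-x), a textbook recursive-feedback system. With r = 4.0
# the map is in its chaotic regime.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # a 'butterfly'-sized perturbation

# Early on the two trajectories are indistinguishable; within a few
# dozen iterations they have diverged to macroscopic scale.
divergence = max(abs(x - y) for x, y in zip(a, b))
print(divergence)
```

The point is not the particular map but the mechanism: feedback amplifies the perturbation exponentially until it dominates the output.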
We humans are not good at probability. The Buddha (karma) and Jesus (Do unto others..., Live by the sword, die by the sword...) knew that the system (society in their case) was a function of the individuals composing it and their interactions. I'm pretty sure that they themselves also understood that their preaching was fundamentally flawed. Doing good acts may well increase the probability of meeting good acts, but it's not a direct relationship (certainly not under the asymmetrical power distributions of old). Living by the sword most definitely increases the chances of dying violently, and yet many a warrior has passed away peacefully in their sleep.
Both Christianity and Buddhism dealt with this problem by introducing a non-falsifiable reckoning. If a Christian lived a good life, they would see paradise in the afterlife; if a Buddhist lived a less than exemplary life, he or she would be punished by dint of reincarnation (as a lower form). Don't blame the ancients though; probability continues to trouble us - see for instance the Copenhagen interpretation of quantum mechanics (as parodied by Schrödinger's famous thought experiment).
I have found Popper's “Quantum Theory and the Schism in Physics” and, more recently, Nassim Nicholas Taleb's “Fooled by Randomness” to be very helpful in trying to avoid probability pitfalls in my thinking.
And back to LKBE11. Despite the rhetorical enmity, the three speakers mentioned proposed similar methods for improving systems – the way to create an effective and humane system is to give those governed by it the power to adjust it according to their (quantified) experiences of working in it. What they might have agreed on was that it was in fact all about the feedback...
I was first introduced to the power of feedback, self-similarity (fractals) and all that wonderful stuff by James Gleick's seminal “Chaos”. In the days and weeks after LKBE11 I realised that the principles of the various agile methodologies and lean, and the basic practices they imply, in fact represent a fractal! Deliver value from the start; expand incrementally; refine iteratively; create a trusting/trusted environment; seek out and incorporate feedback early and often – these edicts apply equally well at all levels, whether you are writing software, coaching or setting up a new business venture. They are loosely worded, ambiguous even, such that they must be adapted to any given concrete context or scale; no two “instances” of a fractal pattern are the same.
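To make the word “fractal” concrete, a toy sketch (purely my own illustration): the Sierpinski triangle, where the depth-n figure is literally three copies of the depth-(n-1) figure – one simple rule applied at every scale, much as the edicts above recur at every level of an organisation.

```python
# Self-similarity in its most literal form: each depth of the Sierpinski
# triangle is assembled from three copies of the previous depth, the same
# rule applied again and again at ever larger scales.

def sierpinski(n):
    """Return the depth-n Sierpinski triangle as a list of text rows."""
    if n == 0:
        return ["*"]
    prev = sierpinski(n - 1)
    pad = " " * (2 ** (n - 1))
    top = [pad + row + pad for row in prev]     # one copy on top
    bottom = [row + " " + row for row in prev]  # two copies below
    return top + bottom

print("\n".join(sierpinski(3)))
# Each level triples the number of stars: 1, 3, 9, 27, ...
```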
And it gets more exciting: TDD is in many ways a fractal instance of the agile principles – hypothesise, test, reflect, repeat (or OODA, PDCA, do + inspect & adapt, whatever your flavour or context). Further, Kent Beck's four simple design rules are:
- Run all the tests (create a trusting/trusted environment)
- Contain no duplicate code (expand incrementally)
- Express all the ideas the author wants to express (seek out and incorporate feedback early and often)
- Minimize classes and methods (refine iteratively)
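The hypothesise-test-reflect-repeat loop itself can be sketched in a few lines of Python (the function and its test cases are invented for illustration, not taken from Beck):

```python
# A minimal TDD-style loop in plain Python: state a hypothesis as a test,
# make the smallest change that satisfies it, then reflect and repeat.
# The function name 'slugify' is purely illustrative.

def slugify(title):
    # Step 2: the simplest implementation that makes the tests below pass.
    return title.strip().lower().replace(" ", "-")

# Step 1 (the hypothesis, written first in real TDD):
assert slugify("Fractals and Feedback") == "fractals-and-feedback"

# Step 3 (reflect, then extend the hypothesis -- and the loop repeats):
assert slugify("  Emergence  ") == "emergence"
```

The same shape – guess, probe, adjust – recurs whether the step takes seconds (a unit test) or months (a business experiment).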
So if, for instance, a team is working scrum + lean, with XP inside, guided by A-TDD, self-similar patterns can be observed throughout the system. Multiple levels of fast feedback are then built in, allowing magic to happen through the emergence of strange loops. Strange loops? I only began to truly understand the implications of chaos theory, fractals etc. through reading Douglas Hofstadter's “Gödel, Escher, Bach: An Eternal Golden Braid”. To do a whopping intellectual feat absolutely no justice, I will summarize by saying that Hofstadter makes the case in this book for emergence as a function of feedback.
Emergence, Evolutionary Change
At LKBE11 I was introduced to the Marshall Model. In his original paper, Marshall states that “The basic premise of Rightshifting is that organisations can improve their effectiveness incrementally (and with little turbulence / disruption), until such time as they approach a transition zone.” but also “At these transition zones, major upheaval – in the form of a shift of mindset – is required to proceed further to the right i.e. to continue to improve the effectiveness of the organisation.”
Personally, I suspect that this major upheaval is a symptom of the phase transition as much as an agent thereof. Things proceed incrementally or they do not. Think of heating water through its physical states, from solid to liquid to gas. The change applied is the addition of heat. Heat may be applied constantly, yet the behaviour of the system varies; it displays a kind of punctuated equilibrium. Ice remains ice even as its temperature creeps upwards, but as it approaches 0 °C strange things start to happen: some liquid water is produced, yet the rise in temperature stalls even though the heat applied remains constant. This is called latent heat (the heat supplied goes into loosening bonds – in this example, freeing molecules from their crystal straitjacket – instead of raising the temperature) and it occurs at all phase transitions. There is increased turbulence too: the newly freed water molecules are sliding around, banging into bonds, imparting energy and amplifying the instability of the remaining crystal. So, even though the system displays major disruption and/or 'sudden' state changes (large shifts), and allowing for significant qualitative difference on either side of a transition zone (liquid/gas, synergistic/chaordic mindset), it remains the case that the change process (heat supply, conscious adaptation of practice) evolves step by step.
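The water analogy is easy to simulate. A minimal sketch (textbook constants, one gram of ice, heat supplied at a constant rate): the temperature climbs, goes flat at 0 °C for hundreds of steps while the latent heat of fusion is absorbed, then climbs again – incremental input, punctuated output.

```python
# Heat 1 g of ice at a constant rate and watch the temperature stall at
# 0 degrees C while the latent heat of fusion is absorbed. Constants are
# standard textbook values in J per gram (per kelvin for the specific heats).

C_ICE, C_WATER, L_FUSION = 2.1, 4.18, 334.0

def heat_water(t_start=-20.0, joules_per_step=1.0, steps=500):
    """Return the temperature curve of 1 g of ice under constant heating."""
    temp, melted = t_start, 0.0  # melted = latent heat absorbed so far
    curve = []
    for _ in range(steps):
        if temp < 0.0:                   # warming ice
            temp = min(temp + joules_per_step / C_ICE, 0.0)
        elif melted < L_FUSION:          # melting: temperature stalls
            melted += joules_per_step
        else:                            # warming liquid water
            temp += joules_per_step / C_WATER
        curve.append(temp)
    return curve

curve = heat_water()
plateau = sum(1 for t in curve if t == 0.0)
print(plateau)  # hundreds of steps spent flat at 0 C: the transition zone
```

The input (one joule per step) never changes; the *response* of the system is what lurches between regimes.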
For a truly enlightening discussion on (biological) evolution by increment, and much more, see Daniel Dennett's “Darwin's Dangerous Idea” and his “skyhooks & cranes” analogy. Further understanding may be gleaned from John D. Barrow's “Impossibility”, a delightfully accessible book on limits, tipping points and so on.
Another concept that I was introduced to at LKBE11, and which struck me as oddly familiar, was Real Options. I was sure I had come across it before. And I had. The central argument of Dennett's (loftily titled) “Consciousness Explained” is that mind is an emergent property of the brain. A model for consciousness is proposed which boils down to the self as “narrative centre of gravity”; consciousness as an assemblage of mechanistic tricks, including a “multiple drafts” model for action. This has largely been borne out by neuroscience since, with one important caveat – we do not apply “best fit” when choosing from multiple drafts but “first fit”. For a less speculative discussion of the mechanics of mind, see the excellent and more recent “We Are Our Brains” by Dick Swaab.
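The “first fit” versus “best fit” distinction is familiar from computing (memory allocators face the same trade-off). A toy sketch of the difference, with entirely made-up drafts and scores – an illustration of the terminology, not a model of any neural process:

```python
# 'Best fit' vs 'first fit' over competing drafts. Each draft is a
# (candidate_action, score) pair; drafts arrive in order. All names,
# scores and the threshold are invented for illustration.

drafts = [("duck", 0.62), ("freeze", 0.55), ("dodge left", 0.91)]
THRESHOLD = 0.5  # hypothetical 'good enough' bar

def best_fit(candidates):
    """Wait for every draft, then pick the highest-scoring one."""
    return max(candidates, key=lambda d: d[1])[0]

def first_fit(candidates):
    """Commit to the first draft that clears the bar -- faster, not optimal."""
    for action, score in candidates:
        if score >= THRESHOLD:
            return action
    return None

print(best_fit(drafts))   # 'dodge left' -- optimal, but needs every draft
print(first_fit(drafts))  # 'duck' -- merely good enough, available sooner
```

When the tiger is already mid-leap, an answer now beats a better answer later – which is presumably why evolution favoured first fit.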
Dave Snowden also spoke at LKBE11. Cynefin and the sense-making model were an exotic beast to me; angry socialist Celts with a sense of humour less so, as it happens. I did not understand everything, and am still quite vague on practical detail even having done pretty much all my reading homework for CALM Alpha, but I was immediately sold on the general framework. I undertook to re-read “Frontiers of Complexity” by Peter Coveney & Roger Highfield but haven't gotten round to it yet. What stuck with me from that book was the idea that complexity at one scale often yields to simplicity at another, and vice versa. It is possible to know something about attractors, to recognise patterns, but not necessarily when or where they will arise. On the other hand, setting simple boundaries on a system with a sufficiently high number of agents can give rise to complex patterns, emergent properties. Snowden cites the example of a flock of birds (simple rules: fly to the centre of the flock, match speed, avoid collision), saying that you can be sure they will avoid an oncoming mountain peak, just not whether they will pass it on the left or the right. For a spectacular example of flocking in action, see the following video of a starling murmuration.
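Those three rules are essentially Craig Reynolds' classic “boids” model, and they fit in a screenful of code. A minimal sketch (weights, counts and the random layout are arbitrary illustrative choices): thirty scattered birds, each nudged toward the flock's centre, toward the average heading, and away from close neighbours; within a few dozen steps the scatter pulls together into a flock.

```python
import random

# The three flocking rules -- fly to the centre, match speed, avoid
# collision -- as a minimal boids-style update. Positions and velocities
# are complex numbers (real = x, imaginary = y) purely for brevity.

random.seed(1)
N = 30
pos = [complex(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N)]
vel = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]

def step(pos, vel):
    centre = sum(pos) / len(pos)
    avg_vel = sum(vel) / len(vel)
    new_pos, new_vel = [], []
    for p, v in zip(pos, vel):
        v = v + 0.05 * (centre - p)        # rule 1: fly toward the centre
        v = v + 0.10 * (avg_vel - v)       # rule 2: match speed/heading
        for q in pos:                      # rule 3: avoid collision
            if q is not p and abs(p - q) < 0.5:
                v = v + 0.10 * (p - q)
        new_vel.append(v)
        new_pos.append(p + v)
    return new_pos, new_vel

spread_before = max(abs(p - sum(pos) / N) for p in pos)
for _ in range(50):
    pos, vel = step(pos, vel)
spread_after = max(abs(p - sum(pos) / N) for p in pos)
print(spread_before, spread_after)  # the flock has visibly tightened
```

Note what you can and cannot predict, exactly as Snowden says: the rules guarantee a cohesive flock, but where it ends up depends sensitively on the starting positions.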
Elsewhere Snowden talks informatively and entertainingly about order & chaos since antiquity, which got me thinking.
This idea of patterns which arise again and again also surfaces over and over throughout the history of human thought. From the Buddha (reincarnation) to French folklore (plus ça change, plus c’est la même chose) and beyond to Nietzsche’s “eternal recurrence” and Camus' search for meaning in the Myth of Sisyphus, humanity has seen this tendency to (almost-)repetition as something negative, fateful, at the very least something to be faced down, overcome. Should we despair? According to Snowden there's no need; with the right tool set, the beast can be tamed! I look forward to learning more.
To conclude, at the risk of indulging my apophenia: if mind is an emergent property of the brain with its fifty to a hundred billion neurons and their trillion connections, constrained locally by a relatively small number of variables (potential barriers, neurotransmitters), has humanity not been building its superbrain(s) since the advent of language? Or at least since agriculture was first practised? Every advance in communications since, from domesticating the horse to the printing press to the telegraph, and on to the internet, has increased our interconnectedness. Our ability to share and exchange narrative. The internet may as yet have a relatively small number of physical nodes and display only loose connectedness but phenomena like Facebook and Twitter are adding another dimension.
If you were to view users as equivalent to neurons and the internet 'merely' as infrastructure, then that one facet of the human superbrain alone already has 2 billion neurons... Supra-identities like nation, tribe or corporation (which may pre-date the internet) could get stronger and display more cohesion with ever more internet capacity and connectedness. I wouldn't go as far as those who talk of Gaia, or The Technium, or The Singularity. I'm not suggesting a global consciousness. For one individual human mind, the self is already a much trickier idea than might appear at first glance. I'm thinking more in terms of a fragmented and multidimensional space where initially rough-hewn, badly bound and shifting personalities (consciousness as an extremely loose narrative centre of gravity) interact. But even at that scale (decidedly smaller than global) some problems already seem a lot less of an issue. Intergalactic travel, for example: if a consciousness can expect to exist coherently for centuries, what's a few light years' travel between friends?