As stated somewhat flippantly in last week's post, the mystery of quantum effects is, in my opinion, a question of measurement. Besides uncertainty, there is the problem of the observer, whereby the observation of a quantum system somehow resolves its probabilities into actualities. This implies that measuring a system determines the state of that same system. If the system-to-be-measured is not the system-once-measured, what do measurements mean?
It's a topsy-turvy world down there at the quantum level.
But all is not necessarily as it
seems at the macro level either. Think of taking the temperature of a
volume of liquid using a mercury thermometer – the thermometer
generates a reading because there is heat exchange between the liquid
being measured and the thermometer. The act of measuring has changed
the system, however slightly. For many purposes the thermometer can
be regarded as accurate; the change in temperature that the
thermometer effects in the liquid is negligible compared to the
smallest unit of measurement.
At the quantum scale, however, particles are so small and travel at such high speeds (close to the speed of light) that even the infinitesimal is not negligible. This makes it practically impossible to take any measurement without significantly altering what is being measured.
Complex adaptive systems display the same resistance to objective observation; for instance, it's notoriously difficult to measure the behaviour of human beings without modifying the behaviour under observation – have you, dear reader, ever suffered a blackout when taking an exam? Another example is the stock market: measuring consumer confidence can lead directly to a dampening or buoying of the market. Then there is the stock price itself – the very fact that a stock is falling is often the reason that it continues to fall (or rise and rise, as the case may be).
But doesn't that make an empirical approach to anything inherently problematic? Well, yes, but as Churchill reputedly said of democracy: it's the best we've got! There are steps that can be taken to reduce the subjectivity of a given observation or measurement and its impact on the system being measured; experiments can be set up whereby measurements approach objective, non-invasive observations. This is an arduous process, which can be deceptively difficult to get right, and it breaks down quickly in the face of complexity.
How, then, should we mere mortals effectively inspect (let alone adapt) in the wonderfully complex arena that is software development?
We should proceed with care, measure only the bare minimum, and always interpret metrics as approximations, avoiding false precision. We need to recognise that metrics will drive behaviour, and be on the lookout for how. The measurements we do decide to take, we can make as trustworthy as possible by keeping as many other important variables as stable as possible. Measuring in relative quantities also helps, as does changing only one thing at a time. In short: apply a lightweight version of the scientific method, remember Occam's razor and be humble!
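To make that concrete, here is a minimal sketch (in Python, with invented names and numbers) of reporting a metric as a rounded range and a relative trend, rather than as a single, spuriously precise figure:

```python
# Minimal sketch: treat a metric as an approximation.
# The metric and the numbers below are hypothetical.

def as_range(samples):
    """Summarise recent samples as a rounded low-high range, not a point value."""
    return round(min(samples)), round(max(samples))

def relative_change(previous, current):
    """Compare periods in relative terms rather than absolute ones."""
    return (current - previous) / previous

cycle_times = [4.2, 3.8, 5.1, 4.6]  # days per story, recent sample
low, high = as_range(cycle_times)
print(f"Cycle time: roughly {low} to {high} days")
print(f"Change vs last period: {relative_change(4.8, 4.6):+.0%}")
```

The point is less the arithmetic than the presentation: a range and a direction of travel invite interpretation, whereas a figure quoted to two decimal places invites false confidence.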
For example: to make velocity measurements meaningful and useful for planning purposes, we should fix the sprint duration and keep the configuration of the team stable. We should estimate in story points and include only the points for completed stories in our measurements. Even having done all that, we should allow for the imprecision of our measurements by predicting only on the basis of velocity ranges.
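As a sketch of what that might look like (the sprint data here is invented, and only points from completed stories are counted):

```python
import math

# Hypothetical data: story points from completed stories only, per fixed-length sprint.
completed_points_per_sprint = [21, 34, 27, 30]
remaining_points = 120

slowest = min(completed_points_per_sprint)   # pessimistic velocity
fastest = max(completed_points_per_sprint)   # optimistic velocity

# Predict a range of sprints, never a single date.
worst_case = math.ceil(remaining_points / slowest)
best_case = math.ceil(remaining_points / fastest)

print(f"Velocity range: {slowest}-{fastest} points per sprint")
print(f"Forecast: {best_case} to {worst_case} more sprints for the remaining {remaining_points} points")
```

Anything more precise than that forecast range is precision the underlying measurements simply don't support.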
Further, we should remember that, while measuring team velocity can have the side effect of driving performance, measuring individual 'velocity' will have a detrimental effect on the team's results – especially if it's a manager doing the measuring!