In July of 2016 I attended a four-day workshop with Dave Snowden on complexity and the Cynefin sensemaking framework. Cynefin was something that had floated around the periphery of my interest for a number of years, but whilst it seemed like a nice concept, I didn’t see how it was applicable to my day-to-day work at all. So it continued to be one of those things that was kinda cool, but ultimately not so useful to me. That was until Martin Hynie did the workshop I described and told me that my attendance at the next one was compulsory. Martin is the kind of guy who tends not to rave about stuff, so the enthusiasm with which he described how meaningful this material was made it something I could not ignore.
So armed with my tester’s skepticism, but with my curiosity well and truly piqued, I booked in and off I went. Four days later, my brain was a broken pile of loose gears and uncoiled springs. It is not inaccurate to say that I spent the next three months or so putting the pieces back together and reevaluating how I thought about pretty much everything.
@snowded You have broken my brain, sir. I feel compelled to thank you for it.
— benjaminkelly (@benjaminkelly) August 7, 2016
I won’t say that I left the workshop understanding immediately how I would go and apply a set of newfound techniques (though there were some). Far from it. What I can say is that I now have a far richer set of resources with which to think about and describe the space I’m working in, and a number of different lenses to select from when viewing the world.
If you have been living under a rock and are not familiar with Cynefin, here’s a gentle intro. With the luxury of hindsight, I can tell you that this is the tip of the iceberg:
I did have some epiphanies about testing whilst listening to Dave speak.
It occurred to me that traditional waterfall and V-model software development styles, and particularly the testing in these paradigms, try to treat a project like it belongs entirely in the Obvious domain. The various metrics, traceability matrices, test cases and so on attempt to define and capture project details so they can be known and managed. There is some recognition of the Complicated domain in the employment of subject matter experts and specialists, but it is safe to say the majority of the effort treats software development like something that can be known, predicted and managed accordingly.
What we tend to refer to as context-driven testing recognises that in actual fact a very tiny amount of what we deal with as testers exists in the Obvious space (and the bits that we do deal with there should really be automated to whatever extent is appropriate). CDT spends far more time in the Complicated space with the occasional nod to the Complex space, but without really distinguishing the two. CDT refers to heuristics, but the popular heuristic mnemonics tend to be applied like a sort of rules engine, and it’s down to the skill of the practitioner to know what to use and when. Like anything though, these mnemonics can be misused or misapplied if they become simply a checklist of things to tick off: ‘Is there anything you can think of that might go wrong? <commence checklist>’.
What about situations where we don’t know enough to say with any confidence what might go wrong, or what the consequences of a failure might be? I wonder how adept we are as software testers at recognising when the question we need to be asking is ‘what experiments should I run to tell me more about these areas?’.
It’s worth noting at this point that whilst Cynefin is not a ‘quadrants’ based framework, there is significance to the left and right sides of the map. On the right (Obvious and Complicated), you have ordered systems. Think of them as ‘machinery’: there are rules and, provided you have sufficient expertise, those rules are predictable. On the left (Complex and Chaos) you have unordered systems. Think of these rather as an ecology: a large number of interconnected relationships and constraints, few if any of which are understood except in hindsight, where changing one can potentially change them all. It is largely in the Complex space that we begin software development projects, no matter how much your Big Design Up Front documentation might try to convince you otherwise.
In software development, and particularly in Agile where (ideally) iterations are small and feedback loops are tight, what you’re really trying to do is identify what seems to be of value, explore and define that piece of value, and move it to a point where that value can be realised (or discarded, if the perceived value turns out not to be valuable after all). On the Cynefin framework, that cycle looks a bit like the blue circle in this image:
Essentially, you’re taking something that is not yet understood, and indeed can only be understood in hindsight after having explored it (think rapid prototyping, spikes), and moving it into a space where, even if the moving parts require some expertise to comprehend, they can be understood and predicted.
The thing I find fascinating about Cynefin is that it is a framework through which to view things that you can apply at any level of granularity. It is as applicable to a story as it is to an epic, a project, or a portfolio of projects. It is largely for this reason that I think it is so important for software testers, and indeed other software development professionals, to have an understanding of Cynefin.