Jonathan Kohl beat me to the punch on this blog entry by a good margin, but since I was already working on this thing, I’d like to go ahead and basically say exactly what he said, in probably a far less cogent fashion. A lot has been written about the good and bad of exploratory testing and the writing and (mis)use of test cases.
I don’t see a lot written about the harmonious use of both. I’m sure people do it. I know I do. Is it like airing dirty laundry? Do context-driven testers not want to admit to using test cases in case James Bach reads their blog and decides to tear them a new one? Do factory testers (sorry – what do you BDUF testers call yourselves anyway? Factory tester brings to mind a grey-clad downtrodden peon, which I suppose is appropriate, but not very flattering) – anyway, do you lot not want to admit to doing exploratory testing in case your overseers beat you for not following the script? Am I missing something? Is this just not a big deal?
Jonathan advocates using a blend of both and uses the terms prescriptive and descriptive testing, which quite appeals to me. Michael Bolton’s recent highlighting of the difference between testing and checking as something we should be more conscious of helped me to frame this better in my mind.
There are things that you can plan ahead for. There are risks you can identify and tests that you can design to mitigate them. I see no reason why you shouldn’t use test cases for these if that is what you are comfortable with.
One of the issues I have with test cases is that some people see them as a magic artefact that covers all possible problems in a given area, because one or more tests have been written that mention it. They fail to see that many test cases are highly focused on verifying a very specific item somewhere, and just because we go through a bunch of steps to get there does not mean that those steps have also been verified.
This leads me to wonder if I have been thinking about scripts all wrong.
I recently read Michael Hunter’s work on setting up test frameworks. One thing that was a lightbulb moment for me was separating the verification from the action. Very, very obvious in hindsight, but if you think about it – this is basically a use case and a checklist.
We have a series of steps from point A to point B, and we have a list of things that we want to make sure are as we expect them. Why do we need to mash them together? Why do we need to repeat the list of steps over and over and over so that we can list a different verification point next to it?
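To make the separation concrete, here is a minimal sketch of the idea. This is not from Michael Hunter’s framework or any real tool – all the names and the session dictionary are illustrative assumptions – but it shows the shape: the steps from A to B are written once, and the verifications live in their own checklist that runs against the end state.

```python
# Illustrative sketch: separate the "use case" (actions) from the
# "checklist" (verifications). All names here are made up for the example.

def navigate_to_profile(session):
    """The steps: walk from point A (login) to point B (profile page).

    In a real framework these would drive the application; here we
    just record the resulting state in a dictionary.
    """
    session["logged_in"] = True
    session["page"] = "profile"
    return session

# The checklist: independent verification points, kept out of the steps.
# Adding a new check does not require repeating the steps above.
CHECKLIST = [
    ("user is logged in", lambda s: s.get("logged_in") is True),
    ("profile page is shown", lambda s: s.get("page") == "profile"),
]

def run_checks(state, checklist):
    """Run every check against the final state and report each result."""
    return [(name, check(state)) for name, check in checklist]

if __name__ == "__main__":
    session = navigate_to_profile({})
    for name, passed in run_checks(session, CHECKLIST):
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
```

The point of the split is that the checklist can grow independently of the steps: a new verification point is one more entry in the list, not another copy of the whole script.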
An argument I hear for test cases is that if you write them well enough, then anyone can execute them. I really hate where that thinking goes, mostly because I read it as ‘anyone can do testing, you just follow the scripts’. That said, a clear set of instructions and a checklist could mean that you could get a non-tester to do checking, even of mission-critical information. That, I might hold with.
Checklists and Check scripts? I think this is something worth exploring further.