…Unless you want bad testing.
Claire Moss writes:
I am surprised that you say that scripted testing is harder for novice
testers. I would have expected that having so much structure around
the tests would make getting into testing easier for someone with less
experience and that the scripted instructions would make up for a lack
of discipline on the part of the tester.
Structure != “being told what to do”
First, you are misusing the word “structure.” All testing is structured. If what you mean by structure is “externally imposed structure” then say that. But even if you are not aware of a structure in your testing, it is there. When I tell a novice tester to test, and don’t tell him how to test, he will be dominated by certain structures he is largely unaware of– or if aware he cannot verbalize or control them much. For instance: the user interface look and feel is a guiding structure for novice testers. They test what they see.
Cognitive science offers plenty of ideas and insights about the structures that guide our thinking and behavior. See the book Predictably Irrational by Dan Ariely for more on this.
Scripted testing always has at least two distinct parts: test design and test execution. They must be considered independently.
Scripted test execution is quite a bit more difficult than exploratory testing, unless you are assuming that the tester following the script has exactly the same knowledge and skill as the test designer (even then it is a qualitatively different sort of cognitive process than designing). An exploratory tester is following (indeed forming as he goes) his own intentions and ideas. But, a scripted tester, to do well, must apprehend the intent of the one who wrote the script. Moreover, the scripted tester must go beyond the stated intent and honor the tacit intent, as well– otherwise it’s just shallow, bad testing.
Try using a script to guide a 10 year-old to drive a car safely on a busy city street. I don’t believe it can be done. You can’t overcome lack of basic skills with written instructions.
And sure, yeah, there is also the discipline issue, but that’s a minor thing compared to the other things.
As for scripted test design, that also is a special skill. I can ask my son to put together a computer. He knows how to do that. But if I were to ask him for a comprehensive step-by-step set of instructions to allow me to do it, I doubt the result would help me much. Writing a script requires patience, judgment, and lots of empathy for the person who will execute it. He doesn’t yet have those qualities.
Most people don’t like to write. They aren’t motivated. Now give them a task that requires excellent writing. Bad work generally results.
Both on the design side and the execution side, scripted testing done adequately is harder than exploratory testing done adequately. It’s hard to separate an integrated cognitive activity into two pieces and still make it work.
The reason managers assume it’s simpler and easier is that they have low standards for the quality of testing and yet a strong desire for the appearances of order and productivity.
When I am training a new tester, I begin with highly exploratory testing. Eventually, I will introduce elements of scripting. All skilled testers must feel comfortable with scripted testing, for those rare times when it’s quite important.
1. Start browser
2. Go to CNN.com
3. Test CNN.com and report any problems you find.
This looks like a script, and it is sort of a script, but the interesting details of the testing are left unspecified. One of the elements of good test scripting is to match the instructions to the level of the tester as well as to the design goal of the test. In this case, no design goal is apparent.
This script does not necessarily represent bad testing– because it doesn’t represent any testing whatsoever.
1. Open Notepad
2. Type “hello”
3. Verify that “hello” appears on the screen.
This script has the opposite problem. It specifies what is completely unnecessary to specify. If the tester follows this script, he is probably dumbing himself down. There may be some really good reason for these steps, but again, the design goal is not apparent. The tester’s mind is therefore not being effectively engaged. Congratulations, designer, you’ve managed to treat a sophisticated miracle of human procreation, gestation, mothering, socializing, educating, etc. as if he were the equivalent of an animated poking stick. That’s like buying an iPad, then using it as a serving tray for a platter of cheese.
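Were those three steps worth running at all, they would amount to a mechanical check that a machine could perform without any tester at all. A minimal runnable sketch of that idea, hedged accordingly: the class below is an invented stand-in for the Notepad edit area, since driving the real GUI is beside the point.

```python
class TextBuffer:
    """Invented stand-in for the Notepad edit area."""
    def __init__(self):
        self.contents = ""

    def type_text(self, text):
        self.contents += text


def scripted_check(app):
    # Step 2: type "hello"
    app.type_text("hello")
    # Step 3: "verify" -- a mechanical comparison, no human judgment involved
    return app.contents == "hello"


buffer = TextBuffer()          # Step 1: "open Notepad"
print(scripted_check(buffer))  # True
```

Everything the script specifies fits in a few lines of code; whatever the tester might have noticed beyond those lines is exactly what the script never asked for.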
In my early days as a tester and before I knew any better this is exactly what we used to do with our “novice testers”, who were business users seconded for a week or two of testing. I would spend days (if not weeks) preparing step by step scripts for them to follow. We would dutifully tick them off as they were completed but we still managed to miss important bugs.
“The reason managers assume it’s simpler and easier is that they have low standards for the quality of testing and yet a strong desire for the appearances of order and productivity.”
In retrospect I can see that is exactly what I was dealing with, although I’m pretty sure that management wouldn’t have even known that to be true. It is also apparent that those business users who deviated from the scripts and were confident enough to “just try stuff” were a lot more effective than those who were following the scripts exactly. That should have been an early signal that the scripted approach was not the most effective. Unfortunately at the time I simply didn’t know any better so things were done “as they always had been”.
I’m no longer with that company but can thankfully say that by the time I’d left, their testing had matured well beyond the approach I’ve just described. It wasn’t due to any individual but simply a result of the improved knowledge of the fledgling test team over time.
If you have an experienced tester (which I wasn’t at the time) in place overseeing the novice then I see total benefit in starting with exploratory testing. It provides knowledge and confidence to the novice, allowing them to engage themselves in the process and thereby contribute and participate rather than simply fill in a space and tick some boxes (i.e. sign off completed scripts).
I wonder if the advent of easily accessible commercial tools has contributed to this problem?
Before word-processing tools, writing was expensive and you kept it to a minimum. You documented only what you had to, and you made sure you put careful thought into writing a test script. With the advent of MS Word it’s easy to write really bad test scripts. Write a test, copy and paste, change a few words and bingo: mass production of worthless information!
I see automation going down the exact same path. The tools for automation are cheap and accessible, and so the quality of the scripts goes down. Instead of dozens of documents choking up our computers, we now have lines of ineffective and badly written code.
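The copy-paste degradation described above can be sketched in code. Everything here is invented for illustration: two near-duplicate "scripts" of the kind mass production yields, versus the same checks written once and driven by a table of cases.

```python
# Copy-paste style: each "script" is a near-duplicate with a few words changed.
def check_login_title(get_title):
    return get_title("login") == "Login"

def check_search_title(get_title):
    return get_title("search") == "Search"

# Parameterized style: one check plus a table of cases -- cheaper to maintain,
# and harder to let rot into ineffective, badly written code.
CASES = [("login", "Login"), ("search", "Search"), ("help", "Help")]

def check_titles(get_title):
    return all(get_title(page) == expected for page, expected in CASES)

# A fake title lookup stands in for a real application under test.
fake_titles = {"login": "Login", "search": "Search", "help": "Help"}
print(check_titles(fake_titles.get))  # True
```

Neither version is testing in the sense this post means; the point is only that cheap tooling makes the duplicated form effortless to produce in bulk.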
I generally agree with this post. Scripted testing is tough for new testers to do well, though I think some initial paired testing with a more experienced QA can solve some of that. However, like everything in our industry, much of it depends on the specific project. I work on mobile applications. They tend to be fairly easy to navigate, and most testers (and most teenagers, for that matter) have experience with mobile applications, which makes reading a script a bit more straightforward.
[James’ Reply: You can’t describe testing with a script. What you are talking about is checking. In the act of writing your checker scripts, you are either A) dumbing down your testers, who even as novices have capabilities far exceeding anything scriptable, or B) not dumbing them down, in which case they are doing things that are actually exploratory testing, only they aren’t getting credit for it. Unless your scripts are very carefully designed and your testers are well-trained, procedurally scripted testing is an impediment and a waste of valuable resources.]
I think the first example above is almost along the lines of exploratory testing with a charter, which I think is a good place to start new testers, though that particular charter is a bit wider than I might normally use.
[James’ Reply: Yes, that’s what I meant when I said that it doesn’t describe testing at all. It’s just delegating the test design to the tester. In other words– ET!]
Jesper L Ottosen says
“This script does not necessarily represent bad testing– because it doesn’t represent any testing whatsoever.”
1. Type stuff into the comment field on the blog
2. Click submit
3. VERIFY that the comment was submitted
[James’ Reply: This is not anything worth saying at all. If you think you need to tell this to a tester, I have news for you: that’s an incompetent tester. He should not be working by himself. You need to put someone in charge of him, train him up, or else fire him.]
Whether it’s checking or testing is a matter of context.
[James’ Reply: Not really a matter of context. It’s more a matter of your not having defined what your “test script” means. So, here’s what’s going to happen… The tester, who didn’t need to be told any of this, will probably not engage his brain as he otherwise would have. He will probably make a set of simple obvious-to-him assumptions and do some basic thing. Of course, if stupid simplistic testing is what you wanted, you could have gotten that without writing anything down.
Except here’s the weird thing. If you don’t write anything down, or perhaps just offer a feature list, and you ask a motivated tester to test and find bugs before it’s too late, you’ll probably get a lot more mental engagement than you will with your silly scripts.]
Recall the if-statement of RST that was part of a space-thing. Consider in what context the above would be a test/check of business value to some stakeholder. It’s not testing to you, but it might well provide valuable info to someone else.
[James’ Reply: The if statement you are talking about was part of a demonstration about how testing is never a matter of asking testers simple silly questions. Testers who think that way do bad testing.]
This is all true enough, particularly for a new project. But what about application regression 2-3 years later? Any documentation of the original features is outdated, and the original development team (including testers and designers) are long gone. A nice body of test cases helps communicate to the future tester all of the deep thinking and exploring you did the first time around. I’ve yet to find a better way to communicate this knowledge (though I’m open to new ideas).
[James’ Reply: Oh for crying out loud, Carlos. You mean you’ve yet to TRY any other way. Are you telling me that you have found no better way of communicating than to write detailed procedures for every little thing? You haven’t tried making lists of ideas? You haven’t tried video? You haven’t tried just reinventing the tests? If I were bequeathed a bunch of old test scripts I would probably throw them away. You speak as if testers are timorous, stupid little baby birds, desperately hoping someone else will take care of them.
People have such fantasies about how well past testers documented their thinking, and how well future testers will pay attention to it. Generally, it doesn’t work that way. Testing is a HIGHLY cognitive activity. It is not communicated well in prose.]
An example: Once while testing, I discovered that addresses for outgoing letters to Tonga were different than the addresses originally input. I could only reproduce it with Tonga addresses. After a day of digging, we discovered that this behavior came from a feature added to address a problem with undeliverable addresses. It turned out a handful of countries had needed such a feature, and they no longer did. Yeah, bug! For 6+ months the outgoing correspondence for a couple of countries had carried the wrong (per business rules) addresses.
This bug likely would have been found pre-release, if the original tester/checker had written a couple of test cases for future testers to check this functionality. These scenarios are what keep me writing explicit tests.
[James’ Reply: Perhaps the bug would have been found pre-release if you were a better tester. Perhaps the original tester used his time to find a lot more bugs that he wouldn’t have found if he was more worried about coddling you.]
I want them to know/remember that you use a totally separate set of rules if husband and wife have different citizenship. I want to give them some SQL code that will take a common user from the US and turn them into the once-a-year case of a user from Pakistan.
I’ve done deep digging. If I can make it easy for them to check those holes, they won’t get lost and they can spend time digging new ones.
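A minimal sketch of the kind of data-setup SQL being described, run here through Python’s sqlite3 against an in-memory database so it’s self-contained. The schema, table, and column names are all invented for illustration; a real system’s script would of course target its own schema.

```python
import sqlite3

# Invented schema standing in for the real user database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT,
    citizenship TEXT,
    spouse_citizenship TEXT
)""")
conn.execute("INSERT INTO users VALUES (1, 'Common Case', 'US', 'US')")

# The data-setup helper: flip one ordinary user into the once-a-year
# mixed-citizenship case so a future tester can reach it in seconds.
conn.execute("""
    UPDATE users
    SET citizenship = 'PK', spouse_citizenship = 'US'
    WHERE id = 1
""")

row = conn.execute(
    "SELECT citizenship, spouse_citizenship FROM users WHERE id = 1"
).fetchone()
print(row)  # ('PK', 'US')
```

The value of such a helper is exactly what the commenter says: it makes a hard-to-reach state cheap to reach, without scripting what the tester should think once they get there.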
[James’ Reply: I admire a tester who lays a foundation for better work in the future. The idea that the ONLY way to do that is to invest your precious time in arbitrarily detailed test documentation is absurd. The idea that you can make such an investment without impoverishing your testing here and now is dangerous.
That’s why I minimize test documentation. I write concisely. I use lists and tables. I expect future testers to study the product and be as familiar with it as I am. I can’t and I won’t compensate for the ignorance of other testers by squandering my own resources that should be used for finding bugs right now.]
I like the idea of a feature list, and use something similar for regression tests. It’s a handy mental jog when you haven’t looked at the application for a few months, but anything more and it too easily induces “switch brain off” mode.
However, UAT is a different matter. Scripted test cases are the norm for this business and what they want.
[James’ Reply: People can want bad testing. That doesn’t mean you have to give it to them.]
It’s a long road to move them away from this, but with the last release steps were taken to begin this (i.e. moving from highly-scripted “click here, check this” to “look for this”), and even then, a few issues snuck through because the business testers didn’t deviate / explore any of the functionality off the critical paths (but training business users to think as a tester is a subject for another day, and one that’s probably been covered already…).
Ultimately, it’s trying to find the mix between something that can be easily documented (yes, “easily” != “effectively”) and can be ticked off/reviewed/whatever by the business that still encourages the testers (whoever they are) to engage with the testing. Come to think of it, it really is more a training issue than perhaps what I first thought…
Mark Tomlinson says
Once upon a time I spent a good 2 months reviewing test cases and procedures (scripts) and our team sought to document anything that was unseen, or presumed by the end user for each step in the process. Such as:
Action 1) Open Notepad
Presumption 1) The end user uses the menu, command line, or a double-click of an icon; a process called notepad.exe is then instantiated, allocating memory and bootstrapping processes to display a graphical user interface to the end user
Action 2) Type “hello world” into Notepad
Presumption 2) The end user knows how to type using a keyboard or voice command inputs, and the keystrokes sent mechanically into the computer’s keyboard buffer are picked up and translated into characters displayed on the GUI, so the end user might see the results (or evidence) that the typing is producing activity in the Notepad application
Action 3) Verify that “hello world” shows on the screen
Presumption 3) The end user knows where on the screen to look (or, if blind, can verify this by some other assistive device), the text is not presented in binary or hex characters, and it is written in a language that the end user speaks
The act of engaging in the thinking and writing of all the presumptions we wrote…was testing, IMHO. The two things I wish we had done differently – engage real end users in the process and engage the original testers (who were unavailable). That would have made it a collaborative testing experience – far more valuable in my mind.
[James’ Reply: I see that you did not document the biggest presumption of all: that this sort of thing is worth doing.
It’s worth learning how to do, so that knowledge of tacit assumptions can inform your testing. But making every tacit truth explicit is impossible and unnecessary.]
Mark Tomlinson says
I exaggerated in my examples there, but to your point that “the biggest presumption of all is that this sort of thing is worth doing”: that answer is absent from the pre-sales process; ergo, “if [the customer’s] money is worth spending on test consulting, then anything you want to have done in the name of testing is worth doing.” This was perpetually frustrating in my past roles as a test consultant, where my own job has been threatened by sales management who were pissed off about my advising the customer to “not do something,” because that was the best advice I could give.
One outcome from documented test steps would be to use them to prove how valuable (or not) they are to the testing effort, or to the customer. But without knowing that in advance, it’s a hard sell. 😉
Malcolm Chambers says
I have been thinking about this post for a couple of days, and feel that there is another highly important reason that ‘scripted tests’ are not for novice testers; for that matter, I’m not sure they are particularly useful to anyone except as memory joggers.
What happens with scripted tests is that when they fail, the tester tends to report that Test XYZ has failed at step 123. This is neither useful nor complete. Teaching our newest members of the profession how to write an effective and understandable defect report is one of the first skills I have been trying to develop in my team of testers.
This was brought home this morning when the creative director and I wasted about an hour trying to recreate a defect in order to determine how we were going to fix it. In the end it was necessary to refer it back to the tester (working in another timezone and country) for an explanation.
The result was that the described defect was not the defect he was reporting, but a related one. It was a useful find, but it was less than useful until the defect report was corrected.
Much has recently been made in this blog of how we should all become more effective, but I feel that we as a community need to reinforce that unless a defect is reported clearly and understandably, we as testers have not done our job for the client, the developer, or the project manager.
Brian Osman says
Thanks for the great post James!
Unfortunately, this is among the top 3-4 common excuses that I’ve heard for scripted tests: to make it easier for the junior testers to understand what is going on!
In my current project, I have a tester who is BRAND NEW to testing. In order to quickly erase the fallacies he learnt from an ISTQB certification course, I sat with him and reviewed the oracles available to us (mostly the documentation, which in the first instance was nearly worthless and out of date). However, the most significant learning came from touring, learning, and documenting the product as we went through it. I didn’t have him write/execute test scripts because that would be nowhere near as effective as learning by exploring/discovery.
As a result of having him more engaged than not, he quickly demonstrated a knack for exploration and found a number of bugs through his approach. He is known on the project as the ‘bug magnet’!
I have serious doubts that this skill would’ve been cultivated as quickly if he had been told to “just follow the test scripts”. He most likely would’ve become bored and left the industry altogether, and we may have lost a potentially skilled tester.
@Dean – “However, UAT is a different matter. Scripted tests cases are the norm for this business and what they want.”
Well, no – that’s what i would call possum testing (‘testing you don’t value motivated by fear on some level’).
Using my current project as an example, I sold the business on the value of having engaged testing from the end users. I was able to *sell* the idea that detailed, *precise* scripts were not the most effective way to go. Rather, what we utilised were scenarios outlining what we wanted the end users to test (using their own knowledge, understanding, and experience) without constraining them too much. I encouraged them as much as possible to explore within the scenario (and outside it if they wished). This led to a ton of very interesting bugs (including a rather nifty BSOD on a timeout) that *most likely* would not have been found if the end users had followed scripts based on requirements documents of some sort.
I used to write detailed test scripts when I first started testing, because that’s how I was taught and that was the extent of my understanding. It was when I discovered a wider world of testing (actually through stickyminds.com and an article on ET written by yourself, James) that I began to challenge the fallacies of scripting and my own misunderstandings. Professionally, this was the best thing to have happened to me, because I now have a variety of approaches to design tests (including scripts, IF they are relevant for some reason). Again using my current project as an example, we use scenarios (high-level test cases), test charters, a mindmap of features and threads, checklists (to check off requirements, data, set-ups), and freestyle ET. Yes, we document, but we document where it is relevant, required, necessary, or all three.
Thanks again for the post and look forward to meeting up again in the new year!
Anand Badurkar says
A new tester (fresh out of college) joined my team. He started playing with the application and found very interesting defects. He was given the freedom to test the application with his own style and ideas, which resulted in finding good bugs. I agree with the post.
Hi, I am a new tester on a simulation project team, and I find exploratory testing more interesting as well as informative; it develops out-of-the-box thinking. It all depends upon how creative you are (I do not mean to say I am proud of myself).
How would I convince a manager that exploratory testing is way better than step-by-step scripted testing, when he is about to bring in an application to maintain test cases and gather bugs on pass/fail criteria?
[James’ Reply: Does your manager consider you competent? If so, why doesn’t he listen to you? But aside from that, if your manager is the sort of person who uses logic and reason to arrive at a decision, then you can point out that there is no evidence that writing “test cases” finds more bugs more quickly than simply sitting with the product and systematically interacting with it. Meanwhile it’s a great deal more expensive to create and maintain all those little artifacts than it is to settle down and just test the thing.
But the best way is to build your own credibility so that silly people don’t think they can force their silly ideas about how you should work onto you. If you don’t have control over your own process, then someone thinks you are a junior tester.]
Interesting discussion. I was a linguist by trade who worked for a software company way back. Not much training was given to me, but it turned out I had a knack for finding and replicating bugs in applications. I behaved as a user on a tired day, or as a human trying to push the boundaries of the software. I loved the developers and shared my findings with them as constructively as I could. A background in coding would probably, at that stage, have led to my being a poorer tester. As a trainer tasked with delivering end-user scenario training, I frequently found bugs not found in scripted UAT testing. What I did detest were atmospheres of “them and us”: where does that get all of us, and most of all our customers? Do we really live in a dualist world?
I left IT because I felt unsupported. I miss the challenge of the puzzle of testing and of the learning. I was just exploratory by nature and still am. Maybe it would be a good time to dip a toe in again, as long as the tester’s role is valued.