Talking about software testing is not easy. It’s not natural! Testing is a “meta” activity. It’s not just a task, but a task that generates new tasks (by finding bugs that should be fixed or finding new risks that must be examined). It’s a task that can never be “completed” yet must get “done.”
Confusion about testing leads to ineffective conversations that focus on unimportant issues while ignoring the things that matter. Here are some specific ways that testing conversations fail:
- When people care about how many test cases they have instead of what their testing actually does. The number of test cases (e.g. 500, 257, or 39,345) tells no one anything about “how much testing” you are doing. The reason that developers don’t brag about how many files they created today while developing their product is that everyone knows it’s silly to count files, or keystrokes, or anything like that. For the same reasons, it is silly to count test cases. The same test activity can be represented as one test case or one million test cases. What if a tester writes software that automatically creates 100,000 variations of a single test case? Is that really “100,000” test cases, or is it one big test case, or is it no test case at all? The next time someone gives you a test case count, practice saying to yourself “that tells me nothing at all.” Then ask a question about what the tests actually do: What do they cover? What bugs can they detect? What risks are they motivated by?
- When people speak of a test as an object rather than an event. A test is not a physical object, although physical things such as documentation, data, and code can be a part of tests. A test is a performance; an activity; it’s something that you do. By speaking of a test as an object rather than a performance, you skip right over the most important part of a test: the attention, motivation, integrity, and skill of the tester. No two different testers ever perform the “same test” in the “same way” in all the ways that matter. Technically, you can’t take a test case and give it to someone else without changing the resulting test in some way (just as no quarterback or baseball player will execute the same play in the same way twice) although the changes don’t necessarily matter.
- When people can’t describe their test strategy as it evolves. Test strategy is the set of ideas that guide your choices about what tests to design and what tests to perform in any given situation. Test strategy could also be called the reasoning behind the actions that comprise each test. Test strategy is the answer to questions such as “why are these tests worth doing?” “why not do different tests instead?” “what could we change if we wanted to test more deeply?” “what would we change if we wanted to test more quickly?” “why are we doing testing this way?” These questions arise not just after the testing, but right at the start of the process. The ability to design and discuss test strategy is a hallmark of professional testing. Otherwise, testing would just be a matter of habit and intuition.
- When people talk as if automation does testing instead of humans. If developers spoke of development the way that so many people speak of testing, they would say that their compiler created their product, and that all they do is operate the compiler. They would say that the product was created “automatically” rather than by particular people who worked hard and smart to write the code. And management would become obsessed with “automating development” by getting ever better tools instead of hiring and training excellent developers. A better way to speak about testing is the same way we speak about development: it’s something that people do, not tools. Tools help, but tools do not do testing. There is no such thing as an automated test. The most a tool can do is operate a product according to a script and check for specific output according to a script. That would not be a test, but rather a fact check about the product. Tools can do fact checking very well. But testing is more than fact checking, because testers must use technical judgment and ingenuity to create the checks, evaluate them, and maintain and improve them. The name for that entire human process (supported by tools) is testing. When you focus on “automated tests” you usually defocus from the skills, judgment, problem-solving, and motivation that actually control the quality of the testing. And then you are not dealing with the important factors that control the quality of testing.
- When people talk as if there is only one kind of test coverage. There are many ways you can cover the product when you test it. Each method of assessing coverage is different and has its own dynamics. No one way of talking about it (e.g. code coverage) gives you enough of the story. Just as one example, if you test a page that provides search results for a query, you have covered the functionality represented by the kind of query that you just did (function coverage), and you have covered it with the particular data set of items that existed at that time (data coverage). If you change the query to invoke a different kind of search, you will get new functional coverage. If you change the data set, you will get new data coverage. Either way, you may find a new bug with that new coverage. Functions interact with data; therefore good testing involves covering not just one or the other but both together in different combinations.
- When people talk as if testing is a static task that is easily formalized. Testing is a learning task; it is fundamentally about learning. If you tell me you are testing, but not learning anything, I say you are not testing at all. And the nature of any true learning is that you can’t know what you will discover next; it is an exploratory enterprise. It’s the same way with many things we do in life, from driving a car to managing a company. There are indeed things that we can predict will happen and patterns we might use to organize our actions, but none of that means you can sleepwalk through it by putting your head down and following a script. To test is to continually question what you are doing and seeing.
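The “fact check” idea from the list above can be sketched in code. This is a hypothetical, minimal example (the add() function and its expected values are invented for illustration, not taken from any real product): a tool can verify scripted facts, but it cannot choose which facts matter or notice anything it was not told to look for.

```python
# A minimal automated "check": it verifies scripted facts about a
# hypothetical add() function. The tool only compares output to
# pre-decided expectations; the noticing, judging, and maintaining
# are human work.
def add(a, b):
    return a + b

def run_checks():
    # Each assertion encodes one expectation a human chose in advance.
    assert add(2, 2) == 4
    assert add(-1, 1) == 0
    return "no problem found among the facts checked"

print(run_checks())
```

A green run here means only that the scripted facts held; choosing which facts to check, interpreting surprises, and improving the checks over time remain human testing work.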
The process of professional testing is not “design test cases, then follow the test cases.” No responsible tester works this way. Responsible testing is a constant process of investigation and experiment design. This may involve designing procedures and automation that systematically collect data about the product, but all of that must be done with the understanding that we respond to the situation in front of us as it unfolds. We deviate frequently from the procedures we establish because software is complicated and surprising; because the organization has shifting needs; and because we learn of better ways to test as we go.
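The earlier point about combining function coverage and data coverage can be made concrete with a toy enumeration (the query kinds and data set names are invented for illustration):

```python
import itertools

# Invented names: three kinds of query (function coverage) and three
# data sets (data coverage) for a hypothetical search feature.
query_kinds = ["exact", "wildcard", "phrase"]
data_sets = ["empty_catalog", "small_catalog", "unicode_catalog"]

# Each dimension alone can be touched in as few as three sessions,
# but covering every pairing takes 3 * 3 = 9 -- and some bugs live
# only in particular function/data combinations.
pairs = list(itertools.product(query_kinds, data_sets))
print(len(pairs))  # 9
```

Code coverage alone would say nothing about which of these nine pairings were ever exercised; that is one reason a single coverage number never tells the whole story.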
Through these and other failures in testing conversations, people persist in the belief that good testing is just a matter of writing ever more “test cases” (regardless of what they do); automating them (regardless of what automation can’t do); passing them from one untrained tester to another; all the while fetishizing the files and scripts themselves instead of looking at what the testers are doing with them from day to day.
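The hollowness of test case counts can also be shown with a sketch (the search() function and its whitespace variations are invented): one test idea mechanically inflated into a thousand scripted “test cases.”

```python
# One test idea ("search tolerates surrounding whitespace") expanded
# into an arbitrary number of scripted variations of a hypothetical search().
def search(query, data):
    return [item for item in data if query.strip() in item]

def variations(base_query, n):
    # Pad the query with differing amounts of whitespace:
    # n "test cases" manufactured from a single idea.
    return [(" " * i) + base_query + (" " * (n - i)) for i in range(n)]

cases = variations("widget", 1000)
hits = [search(q, ["widget-a", "gadget-b"]) for q in cases]

# Every variation exercises the same behavior; whether we report this as
# one test case or 1,000 is bookkeeping, not a measure of testing done.
print(len(cases))
```

Nothing about the number 1,000 (or 100,000, by changing one argument) says what was covered, what bugs could be detected, or what risks motivated the exercise.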
Erik J says
Nice list! I would like to add:
7. When people talk as if knowledge can be certain. This leads to all sorts of confusions that interfere with how most good testers work. All knowledge is fallible (following Popper) and progress therefore is, in a sense, unlimited.
What we then have to act on is not “When have we infallibly learned all there is to know about this thing we are testing?” Instead, we have to try to balance risks and uncertainty while using the time and resources available as best we can to try to gain the knowledge we think matters the most, however inaccurate.
[James’ Reply: I like this.]
Simon Godfrey says
Amen, James. #4 particularly resonates, as the demand from management to “automate everything” has grown and grown in the last 6-8 years.
P.S. Where are the sharing buttons on this blog? 😉
[James’ Reply: Sharing buttons? I suppose there is some plug-in for that. Let me check.]
Chris George says
Possibly a level up from this: when discussions happen that are not about testing (but include everything else!), or when people talk about testing without including anyone who actually knows about testing.
Interesting question – what’s worse: talking about testing in the wrong way, or not talking about testing?
[James’ Reply: I guess that depends on the substance of the conversation, and the substance of the testing.]
Stephen Brown says
Great list and I particularly like the item about a test as an event rather than an object. Recently, I’ve been testing much nearer to the customer/business end and they see tests as events. They want to know what works and not how many tick boxes we’ve ticked on a test schedule.
Insightful, as always, thanks.
Tester R says
A great list, and I also like #7 from Erik J.
I tried to create a graphic that conveyed the same point:
[James’ Reply: I like the graphic style… Now can you diagram something about HOW we decide the status of the things we test?]
Paul Broom says
Great post, James, thank you. I have currently been tasked with writing scripts for an upgraded mobile app before I have been able to have a release of the updates in my hand. Sure, I have seen the requirements, etc., but I cannot really know what I am going to do until I start playing with it and learning its behavior and the order in which functions occur. I have written the scripts just to please the project manager, but I know deep down that I will probably not follow them and will end up re-writing them as I am actually testing.
[James’ Reply: Yes. The next step is just to be up front with your project manager and teach him to be pleased with real work instead of fake work.]
Henry Petard says
I would like to extend your point #3 – isn’t it a common problem that talk and discussion around testing is disconnected from the project, program, team, or purpose it is intended to serve?
I mean, if the effort and expectations around the testers and the testing (and every other person) within a product development effort are not aligned, then there will (often or inevitably) be a mismatch/disappointment between outcomes and wishes.
So, I guess my question would be: what would the strategy for a product, or project, or team effort be called – and do that strategy and the testing strategy align?
So, if I were to extend your point, it would be: “3. When people can’t describe the test strategy.” Then I’d add in the description that the whole team (or project) should be able to understand the test strategy in their current context. The expectation is then on someone (a tester) being able to describe the details and others having an aligned view. I think this goes to the issue of “did someone understand me the way I’d intended,” which I think you write a lot about.
Does this make sense?
[James’ Reply: If you are saying that it’s important for everyone, not just responsible testers, to understand the test strategy for the project, then I don’t think I agree with that. It’s true that we want the team to be in alignment with the test strategy, but they can be in alignment without being able to explain it. In fact, a tester can be in alignment with his own test strategy even if he doesn’t know what a “strategy” is and when pressed to describe it has no words at all! This is true in development, too. I can use someone’s class hierarchy and understand how it works without knowing anything about the design principles that led its developer to architect it one way instead of a different way.
The reason I’m saying responsible testers should know how to talk about test strategy is so they can deal with objections to what they are doing as testers, should any objections arise. Also, so that they can be better able to optimize their strategies.]
Henry Petard says
I agree with your point: “The reason I’m saying responsible testers should know how to talk about test strategy is so they can deal with objections to what they are doing as testers, should any objections arise. Also, so that they can be better able to optimize their strategies.”
My point (maybe not so well explained) was to be able to do that up-front, or earlier, as well and not rely on it being reactive – and I was wondering how the description of that might look (in your point #3) if the emphasis was on continually doing this as a tester’s approach evolved.
If a responsible tester was describing their ideas and thoughts around their test strategy – as best they could – testing ideas against the rest of the team (or project), then I intuitively think (as I haven’t thought of a way to prove it) that this would bring some form of alignment early on. At least it might help to optimize their strategies.
With this feedback I might re-state the headline for point #3 as: “When people can’t describe their test strategy as it evolves, during and after execution”
Maybe I lose half (or more of) the audience now…
[James’ Reply: Okay, I get it. I can agree to that.]
G’day James, excellent post as always. I have a few questions on explaining the test strategy – just wondering if you have any pointers on how to elevate testing in the Agile world?
What I am finding is that, as the sole tester in a product team, I am having issues getting my point across that testing is still mandatory in Agile.
There is a big push for “Test Automation”. I divert this in my team into the broad category of automation being great for regression, which takes the focus off trying to script what I do, for a brief moment.
Any pointers would be greatly appreciated!
[James’ Reply: That’s a sad situation, which is unfortunately playing out in a lot of companies, right now. Too many people need to learn the hard way about the value of testing.]
Rick Hough says
I’ve been doing similar stuff in the UK for the last (too many) years and I’ve bumped into the same problem moving from client to client, mostly Dot Coms.
My approach, when discussing automation, is to ask the client to detail a specification for the automation: write me a user story. What exactly, within a page of HTML, do they need me to cover? Why is that feature important? How many ways can it break? And what does it cost us when it does break?
This gets them thinking about what they are really asking for and helps me to help them create an approach to test automation that is not onerous and doesn’t stop me testing. It also helps them to understand that they are asking for products to be delivered alongside the software they initially asked for, and that it requires maintenance, care, attention, time and nurturing.
I actually really enjoy the puzzle of making (e.g.) Selenium do the checks, especially when freed from the constant questioning about “what percentage of our tests are automated?”
I also point them at the numerous works by Mr Bach, who appears to have a pragmatic handle on testing, and has a knack for explaining the subject and its intricacies far better than I can.
I am too old and too grumpy to be good at that!
Isaak Mendoza says
Points #1 and #4 really hit home for me, because at my current place of employment they believe metrics are good when it comes to test cases, and that automation can replace manual testing. This is a great article, and thank you for it.
Matt De Young says
James, I have shared this post with my team, hoping to get them to think of things differently than our current process of simply fact checking. The other challenge we face is that the CTO wants to see more test cases written. After reading this, I think that instead of creating more numbers, we are going to clean up what we have, so that the mandate of creating new tests will be met, as well as making our tests freer and better at fact checking.
I’m just testing your comment box.
[James’ Reply: If you were really testing it, wouldn’t you want to know the outcome?]
Tyler Gray says
Hi there, Thank you for your professional work. This is one of the best software testing articles I’ve ever read. Very thorough and complete.
Fred Steenbergen says
How do you know that it is complete?
I read your book “Lessons Learned in Software Testing”. Maybe I have a different opinion about automation.
[James’ Reply: I’m wondering how much of my book you read.]
I would say that the process of testing is an endless tree. You have a start state and endless end states, which are more likely nodes than leaves, because actions could continue. A test is an execution of steps, where each test step is an edge of the tree, and you have a start and an end state. The test succeeded if your end state is the expected state, which in most cases is an unbroken system. Often you don’t know the number of possible test steps. The user could always plug a USB device into the device under test in any state, for instance. So testing is searching for the most important paths.
[James’ Reply: Certainly there are aspects of testing that are like this, but there are also aspects which are not tree-like so much as cyclic or web-like. I’m concerned that the image you depict, here, sounds rather mechanistic and simplistic. I am not limited to “steps,” nor am I always sure, when I perform a “step,” that I have performed only that step and not some other step or steps. Maybe what I do cannot be reduced to the concept of steps at all. The software cranks away whether or not the user takes an action. How many steps are being performed when I witness the analysis of data set that results in a report? Is it 4,345,322,398 steps or is that one step?]
When I test, I try to create an environment where I am god 😉 So I want to control the NTP server, the DHCP server, the power supply, IP connections, etc. With all inputs under control, I can automate a lot of test steps and search for errors in the tree.
[James’ Reply: You have said nothing about test oracles, which are not merely explicit but also tacit and unencodeable. In other words, when you perform your steps, how do you know that there are bugs? That discovery is not something you can expect to entrust to automation, although we might do a lot with automation in some cases.]
I work in an agile environment, so automated checks are very helpful. My main requirements for automated “tests/checks” are that they are very easy to understand, maintain, and change. So in practice they are a text file of instructions, executed by a program which hasn’t changed much in the last months.
I mostly like strategy and simulation games, and often play the same scenario again and again. I like to see what happens if I change some actions (build this and that unit). So in testing I often do the same: give some device a lot of load and see what happens. Hmm, the device works well with short data; try a little bit bigger data. Automation helps me to work faster without spending too much time on trivial things.
[James’ Reply: No competent tester eschews automation. So this is not news. The more important thing to discuss is what part of the testing process automation can serve, and in what ways it serves. I would like to hear evidence that you are applying automation with a healthy understanding of your responsibility as a tester. This responsibility cannot be given over to automation, which is why I say that testing cannot be automated.]
All the automation is mostly there to create exactly the I-am-god environment that I want to have. Often I notice some errors, but in an unstable environment I cannot reproduce them, so having control over the environment is my absolute priority.
[James’ Reply: Controlling my environment is one role that automation may serve pretty well.]
I don’t like documentation very much. Often the documentation is just a complicated description of the environment. So if the automation is easy to understand, it is much easier to read than hundreds of pages of description of the test scenario.
[James’ Reply: Maybe, but of course, it may be a dumb idea to document test scenarios in detail in the first place. Documentation, such as it is, must serve its purpose or be deleted.]
I therefore often find bugs when I generate a lot of test data and check that the device behaves as expected.
[James’ Reply: This is not exactly a testerly thing to say. I urge you to speak in a different way. When we test, we are not checking that things behave as expected– instead we are searching for problems. Something can behave in exactly the way you expect and yet contain important problems. When something behaves as expected, I therefore simply say “I have not yet seen a problem.” This reminds me and others that testing is open-ended. It’s not merely a fact-checking process, although it does involve fact-checking.]
Our code is written in C++, so the compiler has a lot of easy checking mechanisms to detect possible errors. I also experimented with libclang, which gives a good overview of how code works. So I appreciate automated tools. I would not buy an automated test tool suite from company Y. But using as much automation as possible (when it is easy to understand) works very well for us.
[James’ Reply: I appreciate tools, too. So in that we are not different. However, I am aware of the danger of becoming absorbed in tools and losing sight of my testing mission. Testing cannot be automated, and my responsibility as a tester cannot be given over to machines.]
I always try to reduce complexity, so a tree is my abstraction for testing, even if the tree exists only in my mind.
[James’ Reply: Sure, we all do that. What I’m suggesting is that your complexity reduction process is ignoring some very important things.]
Probably there are scenarios which cannot be represented like that; then I will find another model. For me that model is similar to chess, which is very simplistic, too.
[James’ Reply: Your model is all about paths, but seems to ignore interpretations. This is a problem because a tester’s job is not just to visit states, but also to evaluate them. That is not and cannot be a purely mechanistic process.]
But it is therefore very difficult to find a good strategy for chess, because the search area is exponentially big. That is where testing by humans comes to mind. Which paths are valuable and which are not has to be decided by the tester. With automation I can visit more paths of the tree, which of course is not as valuable as a visit done by the tester, but it can find bugs if the program crashes or the results are strange (e.g. results plotted to a graph, a memory leak, or the time duration).
[James’ Reply: Yes, this is an ordinary and reasonable use of automation (assuming it doesn’t absorb all your energy). It has important limitations though, and you can’t just ignore those limitations… well, you CAN ignore them, but then you aren’t being a good tester– and this blog is about being a good tester.]
The tester can investigate that part of the graph in more detail. That kind of investigation need not prevent the tester from exploring those paths again himself. Of course, the test oracle is always the tester.
[James’ Reply: We use automated oracles, too. The human is a special oracle because the human has access to tacit knowledge. Tacit knowledge, by definition, is that which is not and probably cannot be encoded.]
In my automation strategy I want to keep the automation simple and traceable. I have some small tools, some of which are replaceable, and I am flexible enough to create an environment where I might expect some errors. For me, automation is the written-down execution of my testing idea, which serves me as documentation (for reproducibility, too) and speeds up the testing progress by replacing trivial tasks so I can focus on real problems. Often, if I stay with manually executed tests, there is not enough time to cover all the stuff I wanted to test.
[James’ Reply: A “manually executed” test is the only kind of test there is. Whatever you automate is not in and of itself testing. In other words, when you automate a manual process of any kind, you are not automating and cannot automate the aspects of that process which are not algorithmic. Much of testing, and all the most important parts of testing, are non-algorithmic.]
Maybe I am misinterpreting your position. I do not believe anyone thinks of the absence of testers when he says “automated testing”, but I am only working in one company and not as an external.
[James’ Reply: Dude, you just have to read your own words about automation, which reduce it to tree traversal and say nothing about sensemaking, modeling, hypothesizing, literate observation, etc. If you think people matter in testing, why not talk about what testers do? I think you have misinterpreted my position, because my position is that I use tools in lots and lots of ways– so why are you writing as if you think I am afraid of tools. I’ve been a programmer for 34 years– I just don’t confuse what tools do with what testers do.]
“The more important thing to discuss is what part of the testing process automation can serve, and in what ways it serves.”
What is your point on that topic? I only know your message that “automated testing” is not done by machines, which is true, but that does not help me in my agile development world.
[James’ Reply: What kind of question is that? What is my POINT? I suppose if you don’t understand what testing is, and you are unable to distinguish a testing process from a code execution process, my words will seem strange. But you said you read my book. I would urge you to read the first two chapters again. This time, read more carefully.]
Therefore I found an interesting article:
Nash Stewart says
Great post James! I would like to add
#7 When people talk about test coverage percentage rather than which features need to be tested and how much. There is never time to test everything because, like you’ve said, by testing you increase development speed and thus introduce more bugs, etc.
#8 When people talk about every automated test as a test. An “automated test” is not a test; it is a repetition of a manual test, so it’s actually more a validation than a test, unless you are performing something like stress testing or complex combination testing.
I stumbled upon your post amid a troubled mind, after I went through a scene with my programme managers yesterday on some of these points. It’s really belittling when they throw numbers around and estimate the velocity of the testing by a mere division of the total against the size of the team. I still hold the belief that people’s knowledge will be better when there are countless lessons learned available in public forums on the mistakes people commit in testing estimation. Still staying positive, to help people gain better knowledge and avoid known pitfalls.
Wendye Peckham says
An insightful article, as always! This will clear up many of the confusions that people have about testing. I like the way you refer to testing as a task that cannot be “completed” yet must get “done”!
Thomas Klein says
A wholehearted “amen” to all of this, Michael.
This perfectly goes along with what I hate most about “testing” talks: When people mix up QA with Testing. Like
- when they say “QA” needs to be involved right from the beginning and then wait for test case numbers to go up
- when they say “testing” and pay for a ridiculous amount of testing at the latest possible moment in time, but expect holistic QA to happen along the entire project lifetime