Six Things That Go Wrong With Discussions About Testing

Talking about software testing is not easy. It’s not natural! Testing is a “meta” activity. It’s not just a task, but a task that generates new tasks (by finding bugs that should be fixed or finding new risks that must be examined). It’s a task that can never be “completed” yet must get “done.”

Confusion about testing leads to ineffective conversations that focus on unimportant issues while ignoring the things that matter. Here are some specific ways that testing conversations fail:

  1. When people care about how many test cases they have instead of what their testing actually does. The number of test cases (e.g. 500, 257, 39345) tells you nothing about “how much testing” you are doing. The reason that developers don’t brag about how many files they created today while developing their product is that everyone knows it’s silly to count files, or keystrokes, or anything like that. For the same reasons, it is silly to count test cases. The same test activity can be represented as one test case or one million test cases. What if a tester writes software that automatically creates 100,000 variations of a single test case? Is that really “100,000” test cases, or is it one big test case, or is it no test case at all? (A sketch of that situation appears as the first example after this list.) The next time someone gives you a test case count, practice saying to yourself “that tells me nothing at all.” Then ask a question about what the tests actually do: What do they cover? What bugs can they detect? What risks are they motivated by?
  2. When people speak of a test as an object rather than an event. A test is not a physical object, although physical things such as documentation, data, and code can be a part of tests. A test is a performance; an activity; it’s something that you do. By speaking of a test as an object rather than a performance, you skip right over the most important part of a test: the attention, motivation, integrity, and skill of the tester. No two different testers ever perform the “same test” in the “same way” in all the ways that matter. Technically, you can’t take a test case and give it to someone else without changing the resulting test in some way (just as no quarterback or baseball player will execute the same play in the same way twice) although the changes don’t necessarily matter.
  3. When people can’t describe their test strategy as it evolves. Test strategy is the set of ideas that guide your choices about what tests to design and what tests to perform in any given situation. Test strategy could also be called the reasoning behind the actions that comprise each test. Test strategy is the answer to questions such as “why are these tests worth doing?” “why not do different tests instead?” “what could we change if we wanted to test more deeply?” “what would we change if we wanted to test more quickly?” “why are we doing testing this way?” These questions arise not just after the testing, but right at the start of the process. The ability to design and discuss test strategy is a hallmark of professional testing. Otherwise, testing would just be a matter of habit and intuition.
  4. When people talk as if automation does testing instead of humans. If developers spoke of development the way that so many people speak of testing, they would say that their compiler created their product, and that all they do is operate the compiler. They would say that the product was created “automatically” rather than by particular people who worked hard and smart to write the code. And management would become obsessed with “automating development” by getting ever better tools instead of hiring and training excellent developers. A better way to speak about testing is the same way we speak about development: it’s something that people do, not tools. Tools help, but tools do not do testing. There is no such thing as an automated test. The most a tool can do is operate a product according to a script and check for specific output according to a script (the second example after this list sketches what that looks like). That would not be a test, but rather a fact check about the product. Tools can do fact checking very well. But testing is more than fact checking, because testers must use technical judgment and ingenuity to create the checks, evaluate them, and maintain and improve them. The name for that entire human process (supported by tools) is testing. When you focus on “automated tests,” you usually defocus from the skills, judgment, problem-solving, and motivation that actually control the quality of the testing, and then you are not dealing with the factors that matter most.
  5. When people talk as if there is only one kind of test coverage. There are many ways you can cover the product when you test it. Each method of assessing coverage is different and has its own dynamics. No one way of talking about it (e.g. code coverage) gives you enough of the story. Just as one example, if you test a page that provides search results for a query, you have covered the functionality represented by the kind of query that you just did (function coverage), and you have covered it with the particular data set of items that existed at that time (data coverage). If you change the query to invoke a different kind of search, you will get new functional coverage. If you change the data set, you will get new data coverage. Either way, you may find a new bug with that new coverage. Functions interact with data; therefore good testing involves covering not just one or the other but both together in different combinations (the third example after this list sketches this).
  6. When people talk as if testing is a static task that is easily formalized. Testing is a learning task; it is fundamentally about learning. If you tell me you are testing, but not learning anything, I say you are not testing at all. And the nature of any true learning is that you can’t know what you will discover next; it is an exploratory enterprise. It’s the same way with many things we do in life, from driving a car to managing a company. There are indeed things that we can predict will happen and patterns we might use to organize our actions, but none of that means you can sleepwalk through it by putting your head down and following a script. To test is to continually question what you are doing and seeing.

    The process of professional testing is not “design test cases, then follow the test cases.” No responsible tester works this way. Responsible testing is a constant process of investigation and experiment design. This may involve designing procedures and automation that systematically collect data about the product, but all of that must be done with the understanding that we respond to the situation in front of us as it unfolds. We deviate frequently from the procedures we establish because software is complicated and surprising; because the organization has shifting needs; and because we learn of better ways to test as we go.
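
To make the 100,000-variation scenario from item 1 concrete, here is a minimal sketch in Python using pytest. The catalog, the search stand-in, and the test idea are all hypothetical; the point is only that one idea can be mechanically expanded into any number of “test cases,” so the count reveals nothing about what the testing covers or what bugs it could find.

```python
import pytest

# One test idea: "searching for a term that exists in the catalog should
# return at least one hit." The parametrize decorator mechanically expands
# that single idea into 10,000 "test cases" (it could just as easily be
# 100,000). Is that one test, ten thousand tests, or no test at all?
TERMS = [f"item-{i}" for i in range(10_000)]
CATALOG = set(TERMS)  # hypothetical stand-in for the product's data

def search(catalog, term):
    # Hypothetical stand-in for the product's search feature.
    return [term] if term in catalog else []

@pytest.mark.parametrize("term", TERMS)
def test_known_term_is_found(term):
    assert search(CATALOG, term), f"expected at least one hit for {term!r}"
```

A report that says “10,000 test cases passed” sounds impressive, yet every one of them exercises the same narrow idea.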
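
The second sketch, also hypothetical Python (the apply_discount function and its expected values are invented for illustration), shows what a tool can actually do on its own: operate the product and compare specific outputs against specific expectations. Everything around that (deciding which facts are worth checking, maintaining the checks, and judging what a surprise means) is the human work called testing.

```python
# Hypothetical stand-in for the product code under check.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# The facts a human decided were worth checking. A tool can evaluate these
# tirelessly and precisely...
CHECKS = [
    ((100.00, 10), 90.00),
    ((19.99, 0), 19.99),
    ((50.00, 100), 0.00),
]

def run_checks():
    for (price, percent), expected in CHECKS:
        actual = apply_discount(price, percent)
        assert actual == expected, (
            f"price={price}, percent={percent}: got {actual}, expected {expected}"
        )

# ...but choosing these checks (what about negative percentages? rounding
# rules? currency edge cases?), noticing what they miss, and deciding whether
# an unexpected result is a bug are acts of judgment the tool cannot perform.
if __name__ == "__main__":
    run_checks()
    print("All checks passed (which is not the same as saying the product was tested).")
```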
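
The third sketch illustrates the coverage point from item 5: function coverage and data coverage are different dimensions, and bugs can hide in particular combinations of the two. The query kinds, data sets, and run_search stand-in below are hypothetical.

```python
from itertools import product

# Two different coverage dimensions for a hypothetical search feature.
QUERY_KINDS = ["exact", "wildcard", "phrase", "fuzzy"]
DATA_SETS = ["empty_catalog", "single_item", "unicode_names", "million_rows"]

def run_search(kind, dataset):
    # Stand-in for exercising the product with one query kind against one data set.
    return f"{kind} query against {dataset}"

# Touching each query kind once and each data set once takes only 4 runs
# (pairing them up), but covering every combination takes 4 x 4 = 16 runs,
# and each combination is a chance to find a bug that neither dimension
# reveals on its own.
for kind, dataset in product(QUERY_KINDS, DATA_SETS):
    print("covered:", run_search(kind, dataset))
```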

Through these and other failures in testing conversations, people persist in the belief that good testing is just a matter of writing ever more “test cases” (regardless of what they do); automating them (regardless of what automation can’t do); passing them from one untrained tester to another; all the while fetishizing the files and scripts themselves instead of looking at what the testers are doing with them from day to day.

12 thoughts on “Six Things That Go Wrong With Discussions About Testing”

  1. Nice list! I would like to add:

    7. When people talk as if knowledge can be certain. This leads to all sorts of confusion that interferes with how most good testers work. All knowledge is fallible (following Popper), and progress therefore is, in a sense, unlimited.
    What we then have to act on is not “When have we infallibly learned all there is to know about this thing we are testing?” Instead, we have to try to balance risks and uncertainty while using the time and resources available as best we can to gain the knowledge we think matters the most, however inaccurate it may be.

    [James’ Reply: I like this.]

  2. Amen, James. #4 particularly resonates, as the demand from management to “automate everything” has grown and grown in the last 6-8 years.

    P.S. Where are the sharing buttons on this blog? 😉

    [James’ Reply: Sharing buttons? I suppose there is some plug-in for that. Let me check.]

  3. Possibly a level up from this, but: when discussions happen that are not actually about testing (but include everything else!), or when people talk about testing without including anyone who actually knows about testing.

    Interesting question: what’s worse, talking about testing in the wrong way, or not talking about testing at all?

    [James’ Reply: I guess that depends on the substance of the conversation, and the substance of the testing.]

  4. Great list and I particularly like the item about a test as an event rather than an object. Recently, I’ve been testing much nearer to the customer/business end and they see tests as events. They want to know what works and not how many tick boxes we’ve ticked on a test schedule.
    Insightful, as always, thanks.

  5. Great post, James, thank you. I have currently been tasked with writing scripts for an upgraded mobile app before I have been able to get a release of the updates in my hand. Sure, I have seen the requirements, etc., but I cannot really know what I am going to do until I start playing with it and learning its behavior and the order in which functions occur. I have written the scripts just to please the project manager, but I know deep down that I will probably not follow them and will end up re-writing them as I am actually testing.

    [James’ Reply: Yes. The next step is just to be up front with your project manager and teach him to be pleased with real work instead of fake work.]

  6. Hello James,

    I would like to extend your point #3: isn’t a common problem that talk and discussion around testing is disconnected from the project, program, team, or purpose that it is intended to serve?

    I mean, if the effort and expectations around the testers and the testing (and every other person) within a product development effort are not aligned, then there will (often or inevitably) be a mismatch, and disappointment, between outcomes and wishes.

    So, I guess my question would be: what would the strategy for a product, project, or team effort be called, and does that align with the test strategy?

    So, if I were to extend your point, it would read: “3. When people can’t describe the test strategy.” Then I’d add in the description that the whole team (or project) should be able to understand the test strategy in their current context. The expectation is then on someone (a tester) being able to describe the details, and on others having an aligned view. I think this goes to the issue of “did someone understand me the way I’d intended,” which I think you write a lot about.

    Does this make sense?

    /H

    [James’ Reply: If you are saying that it’s important for everyone, not just responsible testers, to understand the test strategy for the project, then I don’t think I agree with that. It’s true that we want the team to be in alignment with the test strategy, but they can be in alignment without being able to explain it. In fact, a tester can be in alignment with his own test strategy even if he doesn’t know what a “strategy” is and when pressed to describe it has no words at all! This is true in development, too. I can use someone’s class hierarchy and understand how it works without knowing anything about the design principles that led its developer to architect it one way instead of a different way.

    The reason I’m saying responsible testers should know how to talk about test strategy is so they can deal with objections to what they are doing as testers, should any objections arise. Also, so that they can be better able to optimize their strategies.]

  7. Hello James,

    I agree with your point: “The reason I’m saying responsible testers should know how to talk about test strategy is so they can deal with objections to what they are doing as testers, should any objections arise. Also, so that they can be better able to optimize their strategies.”

    My point (maybe not so well explained) was to be able to do that up front, or earlier, as well, and not rely on it being reactive. I was wondering how the description of that might look (in your point #3) if the emphasis was on continually doing this as a tester’s approach evolved.

    If a responsible tester were describing their ideas and thoughts around their test strategy, as best they could, testing those ideas against the rest of the team (or project), then I intuitively think (as I haven’t thought of a way to prove it) that this would bring some form of alignment early on. At least it might help to optimize their strategies.

    With this feedback I might re-state the headline for point #3 as: “When people can’t describe their test strategy as it evolves, during and after execution”

    Maybe I lose half (or more) of the audience now…

    [James’ Reply: Okay, I get it. I can agree to that.]

    /H

  8. G’day James, excellent post as always. I have a few questions on explaining the test strategy; just wondering if you have any pointers on how to elevate testing in the Agile world?

    What I am finding is that, as the sole tester in a product team, I am having issues getting my point across that testing is still needed in Agile.

    There is a big push for “Test Automation.” I divert this in my team into the broad category of automation being great for regression, which takes the focus off trying to script what I do, for a brief moment.

    Any pointers would be greatly appreciated!

    [James’ Reply: That’s a sad situation, which is unfortunately playing out in a lot of companies, right now. Too many people need to learn the hard way about the value of testing.]

  9. Points #1 and #4 really hit home for me, because at my current place of employment they believe that test case metrics are good, and that automation can replace manual testing. This is a great article; thank you for it.

  10. James, I have shared this post with my team, hoping to get them to start thinking about things differently than our current process of simply fact checking. The other challenge we face is that the CTO wants to see more test cases written. After reading this, I think that instead of just creating bigger numbers, we are going to clean up what we have, so that the mandate of creating new tests will be met while also making our testing more free, as well as fact checking.
