This was on my old site as an HTML file for a long time. I’ve re-edited and corrected it for modern times.
A classic question asked about test strategy is “How much testing is enough?” If you’re testing strictly from pre-scripted procedures or automation, the answer may seem obvious: You’ve done enough testing when you’ve run all of that. But that answer is not worthy of a thoughtful tester. A thoughtful tester answers the question in a way that addresses the mission of testing, not merely the buttons that get pushed along the way. All the test procedures that currently exist might not be enough to satisfy the mission… or they may be more than needed.
Our mission is not to perform a certain set of actions. For most of us, our mission is to learn enough important information about the product so that our clients (developers and managers, mainly) can make informed decisions about it.
Testing as Storytelling
When you test, you are doing something much like composing an investigative news story. It’s a story about what you know about the product, why you think you know it, and what implications that knowledge has for the project. Everything you do in testing either comprises the story or helps you discover the story. You’ve done enough testing when you can tell a compelling story about your product that sufficiently addresses the things that matter to your clients. Since your compelling story amounts to a prediction about how the product will be valued by its users, another way of saying this is that your testing is finished when you believe you can make a test report that will hold true over time—so try to write a classic.
For instance, I once tested the install process of a complex product. My mission was to assess and catalog all the changes that this product made to systems on which it was installed. So my first step was to analyze the install process. Then I diagrammed it, decided how to test the important parts of that process, and found ways to do that under reasonably controlled conditions. I came to a conclusion about this product that flowed logically from the testing, and then I checked the conclusion to be sure that each aspect of it was indeed corroborated and supported by the tests I performed. I needed this to be a good, compelling story, so I tried to anticipate how it could be criticized by my audience. Where is my story weak? How might my story turn out to be false? I ran additional tests to rule out alternative hypotheses. I ran tests multiple times to improve my confidence that the results I was seeing were related to the processes and variables I was controlling, and not coincidental events.
When I exhausted the concerns of my internal critic (and external critics I asked to review my work), I decided it was good enough.
A Short Story Can Be Just as Complete as a Novel
Perfect testing is potentially an infinite process. If complete testing means you have to run all possible tests, you will never finish. But you can say you’re done when you have a testing story with all the major plot points, and you can make the case that additional tests will probably not significantly change your story. Here’s the thing: Although you never know for sure if you have reached that point of diminishing returns, you don’t need to know for sure! All that’s required, all that anyone can expect of you, is that you have a compelling story for why a thoughtful and responsible tester like you might come to the judgment that you know enough about the product under test. In some situations, that will be months of testing; in other situations, only hours.
And maybe you don’t yet know how much testing that could be, because you are still in the middle of all that learning. You may have to walk the rest of the Yellow Brick Road, Dorothy, before you get to click your heels and go home.
Plot Points for a Testing Story
A complete testing story answers the questions: What is the status of the product (bugs, etc.)? How do you know (test strategy, including information about test coverage and oracles)? How good is that testing?
The testing usually unfolds in a complicated way. There are false starts. I report bugs that turn out not to be bugs. I investigate automation that might help. I try to secure the test environments I need, and often am only partly successful. I develop rich test data. Real testing is a complicated story, so I need to find ways to simplify. One way is by using Session-Based or Thread-Based test management. Another way is to simply not tell the whole story. I have to take care, though, because when I hide details of the testing, other people on the project may think that there isn’t much to testing.
One way that a lot of testers simplify the testing story is to hide it all behind test cases. Their story becomes “I wrote test cases. I ran test cases. The test cases passed.” There is no content to that story. It is generic and, I believe, vapid and irresponsible. You can do better! Talk about what those test cases mean. What do they cover? What risks do you investigate using them?
The concept of the testing story is not only about reporting; it also helps you manage yourself. It helps you decide when enough is enough. For this reason, the testing story has a central place in the Rapid Software Testing Framework.