This diagram is a roadmap of the major issues and elements of the Rapid Software Testing methodology. I use it for managing and navigating the testing story. In Rapid Testing, the testing story is a dominant heuristic for keeping things on track. When I speak of the testing story, I'm talking about a narrative structure that represents the logic of the test process. I'm speaking literally of a story. In prospect, it's the general form of our plan. In retrospect, it maps to the true story of what happened, if what happened was good testing.
Diagram created by James Bach and Michael Bolton
This diagram lays out all the major issues and elements of a good test estimation done in the Rapid Software Testing style. When I am solving an estimation problem, I walk myself through it. There are some elements here that are far from self-explanatory. In the upcoming guide to Rapid Software Testing, all will be explained.
Diagram created by James Bach, Michael Bolton and thanks to Mary Alton for visual design consulting.
Diagram from Freedom's Triumph: The Why, When and Where of the European Conflict. Published by The Magazine Circulation Company, Inc., 1919.
This is an unordered repository of a few of the test methodology documents that exemplify our approach to testing.
This is my description and list of ideas for what makes a product more testable (completely revised and expanded as of November 2013). It can help testers and developers improve the testability of a product so that testing goes faster and takes less effort.
Exploratory testing (sometimes referred to as "ad hoc" testing) is a creative, intuitive process. Everything testers do is optimized to find bugs fast, so plans often change as testers learn more about the product and its weaknesses. Session-based test management is one method to organize and direct exploratory testing. It allows us to provide meaningful reports to management while preserving the creativity that makes exploratory testing work. This page includes an explanation of the method as well as sample session reports, and a tool we developed that produces metrics from those reports.
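The idea of deriving management-facing metrics from session reports can be sketched as follows. This is a hypothetical, simplified report structure for illustration only; the field names, percentages, and aggregation here are assumptions, not the actual session report format or the tool mentioned above.

```python
# Hypothetical, simplified session-based test management metrics.
# The real session-report format and metrics tool differ; this only
# illustrates turning per-session data into summary numbers.
from dataclasses import dataclass

@dataclass
class Session:
    charter: str    # mission statement for the session
    minutes: int    # total session duration
    pct_test: int   # % of time on test design and execution
    pct_bug: int    # % of time on bug investigation and reporting
    pct_setup: int  # % of time on session setup
    bugs: int       # bugs reported during the session

def summarize(sessions):
    """Aggregate a few metrics management typically asks about."""
    total = sum(s.minutes for s in sessions)
    test_minutes = sum(s.minutes * s.pct_test / 100 for s in sessions)
    bugs = sum(s.bugs for s in sessions)
    return {
        "sessions": len(sessions),
        "total_hours": round(total / 60, 1),
        "on_charter_test_hours": round(test_minutes / 60, 1),
        "bugs_per_session": round(bugs / len(sessions), 1),
    }

reports = [
    Session("Explore the install wizard", 90, 60, 25, 15, 3),
    Session("Stress the file import", 120, 70, 20, 10, 5),
]
print(summarize(reports))
```

The point of reporting time breakdowns rather than test-case counts is that it preserves the tester's freedom to follow up on what the product reveals, while still showing management where the effort went.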
When you're doing a 1.0 product, you can't rely much on experience to guide your SQA process. You also don't have regression test suites or any other specialized test materials that you can reuse. Meanwhile, the product itself is probably changing at a high rate. It's poorly documented and you may not be the first to know about major changes.
This document is a set of ideas for dealing with that situation. It begins with the idea that you have to change your thinking from a task orientation to a risk orientation.
(for Microsoft Windows 2000 Application Certification)
I produced this procedure for Microsoft to help them do a better job of assuring that applications that claim to be Windows 2000 compatible really are compatible. The procedure itself is documented in 6 pages. As far as I know, it is the first published exploratory testing procedure. It's used along with a second, non-exploratory procedure (which is 400 pages long!) to perform the certification test. What's interesting is that my 6 pages represent about one third of the total test effort.
This is version 1.0 of a model I use to help me analyze software test projects. It depicts the major elements in the context of testing that should influence choices about test strategy, test logistics, and testing products.
This model is a comprehensive set of lists that help a tester think through test strategy. I use this model to organize my thoughts about all the elements of test design. By referring to this model, I am able to rapidly generate lots of ideas for how to test anything. This is a classic example of "guideword heuristics."
This is a model I use when I'm reviewing and critiquing a test plan. It lays out what are, in my opinion, all the interesting issues to consider concerning a test plan and associated documents. One of the interesting features of this model is a set of test project heuristics.
This is an experimental process for evolving a good test plan. I'm still experimenting with it. It's an example of a "forward-backward" process, where you proceed concurrently on all tasks, rather than linearly through each task in a predefined order. It's also yet another example of a heuristic approach to testing. This procedure doesn't tell you what to do, so much as suggest what to think about.
It seems to me most large automation efforts come to nothing, or if they come to something, it's at a ridiculous cost compared to the modest value of the testing that eventually happens. This paper has been published on my website for some years. It's based on a study I did for a Big Famous ISP that asked me to come in and review its test automation strategy. I found the typical nonsense: a programmer who had spent nine months trying to turn manual scripts into programmatic checks through the GUI. He was able to show me a couple thousand little test programs that couldn't execute due to changes in the product and changes to the tools. Meanwhile, all around him there were many small opportunities to use tools in ways that would have helped the testers right away.
This is an analysis of what one particular team actually did to investigate bugs. Their stated process did not match their actual process, a well-known phenomenon in social science. This is why just writing down what people say they do and calling that a "process" is a bad idea that has led to an amazing amount of waste. Instead, we must learn to use the methods of participant-observer studies, as I demonstrate in this article.