Test Methodology

Rapid Testing Framework

This diagram is a roadmap of the major issues and elements of the Rapid Software Testing methodology. I use it for managing and navigating the testing story. In Rapid Testing, the testing story is a dominant heuristic for keeping things on track. When I speak of the testing story, I mean a narrative structure that represents the logic of the test process. I'm speaking literally of a story. In prospect, it's the general form of our plan. In retrospect, it maps to the true story of what happened, if what happened was good testing.

Diagram created by James Bach and Michael Bolton

Test Estimation Landscape

This diagram lays out the major issues and elements of good test estimation done in the Rapid Software Testing style. When I am solving an estimation problem, I walk myself through it. Some elements here are far from self-explanatory; in the upcoming guide to Rapid Software Testing, all will be explained.

Diagram created by James Bach and Michael Bolton, with thanks to Mary Alton for visual design consulting.

Military Scouting as Exploratory Testing

This diagram comes from a book about World War I. It shows the progress of a scout who starts in the lower right corner and moves into enemy territory. Notice that the scout makes ad hoc decisions about where to go and what to do. Notice how the scout must be ready to pick up on any little sign and draw inferences from it as he goes. This is software testing, friends. This is what good testers do.

Diagram from Freedom's Triumph: The Why, When and Where of the European Conflict, published by The Magazine Circulation Company, Inc., 1919.

This is an unordered repository of a few of the test methodology documents that exemplify our approach to testing.

Heuristics of Testability V2.2

This is my description and list of ideas for what makes a product more testable (completely revised and expanded as of November 2013). It can help testers and developers improve the testability of a product so that testing goes faster and takes less effort.

Session-Based Test Management

Exploratory testing (sometimes referred to as "ad hoc" testing) is a creative, intuitive process. Everything testers do is optimized to find bugs fast, so plans often change as testers learn more about the product and its weaknesses. Session-based test management is one method to organize and direct exploratory testing. It allows us to provide meaningful reports to management while preserving the creativity that makes exploratory testing work. This page includes an explanation of the method as well as sample session reports, and a tool we developed that produces metrics from those reports.
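The page itself explains the method and the actual tool. As a rough illustration of the metrics idea, here is a minimal Python sketch that averages the task-breakdown percentages recorded in session reports (test design and execution, bug investigation and reporting, session setup) across a set of sessions. The report format shown is a simplified stand-in for illustration, not the real format the tool consumes.

```python
# A minimal sketch of producing metrics from session reports.
# The three task-breakdown categories come from session-based test
# management; the report layout here is a hypothetical, simplified format.

BREAKDOWN_KEYS = (
    "TEST DESIGN AND EXECUTION",
    "BUG INVESTIGATION AND REPORTING",
    "SESSION SETUP",
)

def parse_breakdown(report_text):
    """Extract the task-breakdown percentages from one session report."""
    metrics = {}
    for line in report_text.splitlines():
        line = line.strip()
        for key in BREAKDOWN_KEYS:
            if line.startswith(key):
                # e.g. "SESSION SETUP: 20"
                metrics[key] = int(line.split(":")[1])
    return metrics

def summarize(reports):
    """Average the breakdown across sessions, so a manager can see
    where the testing time actually went."""
    totals = {}
    for text in reports:
        for key, value in parse_breakdown(text).items():
            totals[key] = totals.get(key, 0) + value
    count = len(reports)
    return {key: value / count for key, value in totals.items()}

example = """CHARTER
Explore the report module.
TEST DESIGN AND EXECUTION: 60
BUG INVESTIGATION AND REPORTING: 20
SESSION SETUP: 20
"""

print(summarize([example]))
```

The point of such a tool is not precision but visibility: aggregating even coarse self-reported percentages lets management see the shape of the testing effort without constraining what testers do inside a session.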

SQA for New Technology Projects

When you're doing a 1.0 product, you can't rely much on experience to guide your SQA process. You also don't have regression test suites or any other specialized test materials that you can reuse. Meanwhile, the product itself is probably changing at a high rate. It's poorly documented and you may not be the first to know about major changes.

This document is a set of ideas for dealing with that situation. It begins with the idea that you have to change your thinking from a task orientation to a risk orientation.

General Functionality and Stability Test Procedure

(for Microsoft Windows 2000 Application Certification)

I produced this procedure for Microsoft to help them do a better job of assuring that applications claiming to be Windows 2000 compatible really are compatible. The procedure itself is documented in six pages. As far as I know, it is the first published exploratory testing procedure. It's used along with a second, non-exploratory procedure (which is 400 pages long!) to perform the certification test. What's interesting is that my six pages represent about one third of the total test effort.

Satisfice Test Context Model

This is version 1.0 of a model I use to help me analyze software test projects. It depicts the major elements in the context of testing that should influence choices about test strategy, test logistics, and testing products.

Satisfice Heuristic Test Strategy Model

This model is a comprehensive set of lists that help a tester think through test strategy. I use this model to organize my thoughts about all the elements of test design. By referring to this model, I am able to rapidly generate lots of ideas for how to test anything. This is a classic example of "guideword heuristics."

Test Plan Evaluation Model

This is a model I use when I'm reviewing and critiquing a test plan. It lays out what are, in my opinion, all the interesting issues to consider concerning a test plan and associated documents. One of the interesting features of this model is a set of test project heuristics.

Test Plan Building Process

This is an experimental process for evolving a good test plan. I'm still experimenting with it. It's an example of a "forward-backward" process, where you proceed concurrently on all tasks, rather than linearly through each task in a predefined order. It's also yet another example of a heuristic approach to testing. This procedure doesn't tell you what to do, so much as suggest what to think about.

Agile Test Automation

It seems to me that most large automation efforts come to nothing, or if they come to something, it's at a ridiculous cost compared to the modest value of the testing that eventually happens. This paper has been published on my website for some years. It's based on a study I did for a Big Famous ISP that asked me to come in and review its test automation strategy. I found the typical nonsense: a programmer who had spent nine months trying to turn manual scripts into programmatic checks through the GUI. He was able to show me a couple of thousand little test programs that couldn't execute due to changes in the product and changes to the tools. Meanwhile, all around him there were many small opportunities to use tools in ways that would help the testers now.

Investigating Bugs: A Testing Skills Study

This is an analysis of what one particular team actually did to investigate bugs. Their stated process did not match their actual process, which is a well-known phenomenon in social science. This is why just writing down what people say they do and calling that a "process" is a bad idea that has led to an amazing amount of waste. Instead, we must learn to use the methods of participant-observer studies, as I demonstrate in this article.

Upcoming Events

2014  

September 01
Orcas Island, Washington
Interview with Joerg Droege, via Skype

September 03-05
Broadcast from Orcas Island, Washington, USA
WEBINAR: Rapid Testing Intensive ONLINE with James Bach

September 15-17
Fairmont Resort Blue Mountains near Sydney, Australia
Conference: LET'S TEST OZ, Context-Driven Software Testing

September 18
Sydney, Australia
Sydney Testers Meet-UP

September 15-19
The Bronx, New York, New York
On-Site Training: Rapid Testing Intensive at Per Scholas, instructed by Michael Bolton

September 22-24
The Bronx, New York, New York
On-Site Training: Rapid Software Testing at Per Scholas, instructed by Michael Bolton

October 06-08
Mequon, Wisconsin
Corporate Training: Rapid Software Testing

October 09-10
Mequon, Wisconsin
Corporate Consulting

October 27-29
Brighton, England
Public Class: Rapid Software Testing, organized by Rosie Sherry of Software Testing Club

October 31
Brighton, England
Public Class: Rapid Software Testing for Programmers, organized by Rosie Sherry of Software Testing Club

(click here to see the whole schedule...)