Session-Based Test Management

A method for measuring and managing exploratory testing


Exploratory testing is unscripted, unrehearsed testing. Its effectiveness depends on several intangibles: the skill of the tester, their intuition, their experience, and their ability to follow hunches. But it's these intangibles that often confound test managers when it comes to being accountable for the results. For example, at the end of the day, when the manager asks for status from an exploratory tester, they may get an answer like "Oh, y'know... I tested some functions here and there, just looking around." And even though the tester may have filed several bugs, the manager may have no idea what the tester did to find them. Even if the manager is skilled enough to ask the right questions about what the tester did, the tester may have forgotten the details or may not be able to describe their thinking out loud, on the fly.

We had this problem when doing exploratory testing for a client. We wanted to be accountable for our work. We wanted to give status reports that reflected what we actually did. We wanted to show that we could be creative, skilled explorers, yet produce a detailed map of our travels.

How it's done

We invented Session-Based Test Management as a way to make those intangibles more tangible. It can be thought of as structured exploratory testing, which may seem like a contradiction in terms, but "structure" does not mean the testing is pre-scripted. It means we have a set of expectations for what kind of work will be done and how it will be reported. As in a recording studio, this work is done in "sessions." Sessions range from 45 minutes to several hours, but no matter the length, each is time spent testing against a charter for that session. The nuts and bolts of sessions are described in further detail in an article Jonathan Bach wrote for STQE magazine.

At the end of a session, the tester hands in a session report, tagged with important information about what they did.
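The original page linked to a sample report. As an illustrative sketch (the field names follow the published SBTM session sheet layout, but the charter, times, and findings here are hypothetical), a tagged session report might look like this:

```
CHARTER
-----------------------------------------------
Analyze the application's file-import function
and report on areas of potential risk.

START
-----------------------------------------------
6/14/00 10:30am

TESTER
-----------------------------------------------
Jonathan Bach

TASK BREAKDOWN
-----------------------------------------------
#TEST DESIGN AND EXECUTION
65

#BUG INVESTIGATION AND REPORTING
25

#SESSION SETUP
10

#CHARTER VS. OPPORTUNITY
90/10

BUGS
-----------------------------------------------
#BUG
Import dialog accepts a zero-byte file without
any warning.

ISSUES
-----------------------------------------------
#ISSUE
How do I know what correct import behavior is?
```

The tagged headings are what make the report machine-readable: a tool can find every #BUG or #ISSUE line without having to understand the free-form test notes around them.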

Session metrics

The session metrics are the primary means to express the status of the exploratory test process: they summarize, across sessions, how testing time was spent and what was found.

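The original page showed the metrics as a screenshot. As an illustrative sketch (the numbers are hypothetical; the categories follow the SBTM session sheet), the aggregated output might look like:

```
Sessions completed .................... 28
Bugs filed ............................ 41
Issues raised .......................... 9

Session time spent on:
  Test design and execution ......... 61%
  Bug investigation and reporting ... 24%
  Session setup ..................... 15%
  Charter vs. opportunity ....... 86%/14%
```

A breakdown like this lets a manager see at a glance not just how much testing happened, but how much of the time went to actual test execution versus setup and bug writeup.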

To create the metrics, a tool we wrote in Perl scans the information in the session files.


At the end of each session, the tester and manager get together to talk about it. We've discovered that the value of SBTM relies on the test manager's ability to talk with the tester about the work that was done. To help the tester and manager make the most of that meeting (which takes about 15 to 20 minutes), we've compiled a checklist of questions.

The Scan Tool

Our tool "scans" session reports by looking at the tagged headings within them. Scans are cumulative, which means information from all of the tags in all of the sessions is collected every time a scan is run.
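The original tool was written in Perl. As a minimal sketch of the same idea in Python (the tag names and report layout are assumptions for illustration, not the tool's actual format), a scanner might pull tag lines out of each session report and accumulate totals across all of them:

```python
import re
from collections import Counter

# Assumed layout (an illustration, not the real tool's format): each
# session report is plain text in which a line like "#BUG" or "#ISSUE"
# tags the start of a record of that kind.
TAG = re.compile(r"^#(BUG|ISSUE)\b")

def scan(reports):
    """Cumulative scan: every tag in every session counts toward the totals."""
    totals = Counter(sessions=0, bugs=0, issues=0)
    for text in reports:
        totals["sessions"] += 1
        for line in text.splitlines():
            m = TAG.match(line.strip())
            if m:
                totals["bugs" if m.group(1) == "BUG" else "issues"] += 1
    return totals

reports = [
    "CHARTER\nExplore file import\n#BUG\nZero-byte file accepted silently",
    "#BUG\nCrash on oversized file\n#ISSUE\nWhat is the correct behavior?",
]
print(scan(reports))  # sessions=2, bugs=2, issues=1
```

Because the scan re-reads every report each time, adding a new session file and re-running the tool is all it takes to refresh the metrics.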

The prototype is available as a self-extracting executable, but you'll need to install Perl first, which is available for free.

Manager's Guide

We've discovered that this methodology relies on the skill of the test manager, so we're working on a Manager's Guide, which will discuss session protocols, the benefits, and the problems we encountered when using SBTM. When that guide is ready, we'll post it here.


Jonathan Bach first presented Session-Based Test Management at STAR West 2000, in a talk titled "How to Measure Ad Hoc Testing."