Articles

This is a repository of some interesting articles (most, but not all, written by James). Introductions written by James Bach.

Recently Released

Omega Tester

First published: STP, June 2010

This article is about what to do if you are the only tester surrounded by people who are not testers-- a common situation on Agile projects.

Dinner with a Test Manager

This article is about my brother Jon's innovative ways to motivate and train a test team.

The Good Enough Approach

The Challenge of Good Enough Software

First published: American Programmer, October 1995

In 1995, I was hot on the topic of agile software development. Only no one called it that, back then...

Good Enough Quality: Beyond the Buzzword

First published: Computer, 8/97

This article contains a heuristic model that describes exactly what I mean when I say that when we ship software we should strive for good enough quality. What I hope will happen is that you'll read this article and never again be able to utter the words "good enough" without thinking of the subtleties and challenges of doing a truly good enough job.

A Framework for Good Enough Testing

First published: Computer, 10/98

This article applies the Good Enough model I developed to the problem of software testing, and suggests a set of leading questions we should be asking about our software testing process.

Exploratory Testing

Exploratory Testing Explained

First published: as a chapter in The Test Practitioner, 2002.

This is an introduction to exploratory testing. It's the latest and greatest I have to say on it, outside of my Rapid Testing Class.

The essays below are earlier attempts to explain it:

What is Exploratory Testing?

Exploratory Testing and the Planning Myth

Where Does Exploratory Testing Fit?

Session-Based Test Management

First published: STQE magazine, 11/00

(Jonathan Bach wrote this article)

My Review: Is there a way to make exploratory testing just as accountable as pre-scripted testing without sacrificing the creativity of the exploratory approach? My brother Jonathan and I set this challenge for ourselves on a project in the summer of 2000. We wanted to give our client more than just a list of bugs and an invoice-- we wanted to provide an audit trail of our work along with meaningful productivity metrics. So, instead of counting test cases, we counted blocks of test design and execution time (test sessions) and recorded test results in a tagged text format that was then parsed to produce the metrics. This approach worked well, but proved to be quite challenging to manage, as Jonathan explains in the article. We think a lot of the difficulty is a by-product of the learning curve. We've stumbled into a new field here: test accounting. We're continuing to explore it.
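
To make the test-accounting idea concrete, here is a minimal sketch, in Python, of the kind of parsing involved. The tag names and sheet layout below are hypothetical stand-ins of my own, not the actual session sheet format Jonathan used; the point is only to show how tagged text blocks can be rolled up into simple productivity metrics.

    # Toy parser for tagged session sheets (hypothetical format, for illustration).
    from collections import defaultdict

    def parse_session(text):
        """Split a tagged session sheet into {TAG: [lines]}."""
        sections = defaultdict(list)
        tag = None
        for raw in text.splitlines():
            line = raw.strip()
            if line.startswith("#"):
                tag = line.strip("# ").upper()   # e.g. "#DURATION" -> "DURATION"
            elif tag and line:
                sections[tag].append(line)
        return sections

    def metrics(sheets):
        """Roll per-session data up into simple totals."""
        minutes = sum(int(s["DURATION"][0]) for s in sheets)
        bugs = sum(len(s["BUGS"]) for s in sheets)
        return {"sessions": len(sheets), "minutes": minutes,
                "bugs": bugs, "bugs_per_hour": bugs / (minutes / 60)}

    sheet = """#CHARTER
    Explore the import dialog with malformed files.
    #DURATION
    90
    #BUGS
    Crash when filename contains unicode.
    Progress bar runs past 100%."""

    print(metrics([parse_session(sheet)]))
    # -> {'sessions': 1, 'minutes': 90, 'bugs': 2, 'bugs_per_hour': 1.33...}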

General Testing

Reasons to Repeat Tests

There is one important reason to run new tests instead of old tests: you probably don't have enough tests yet. A new test may find a bug that no old test can find. But there are at least nine reasons why it might be better to repeat a test that already exists. In this article, I explore the minefield analogy of testing. Thinking through the minefield helps you refine your test strategy so that you strike an appropriate balance between new and old tests.
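
One way to see the minefield clearly is as a toy simulation. The sketch below, in Python, is my own illustration, not from the article: bugs are mines on paths through the product, old tests re-walk paths they have already cleared (so they can catch only regressions), and new tests walk fresh ground. All the numbers are invented; only the trade-off they exhibit matters.

    import random

    random.seed(1)
    PATHS = 1000            # distinct ways through the product
    BUILDS = 20             # successive releases under test
    BUDGET = 50             # tests we can run per build
    REGRESSION_RATE = 0.01  # chance a cleared path breaks again on a new build

    def bugs_found(repeat_fraction):
        """Total bugs found when repeat_fraction of the budget reruns old tests."""
        mined = set(random.sample(range(PATHS), 200))  # paths hiding a bug at the start
        covered = []                                   # paths some existing test walks
        found = 0
        for _ in range(BUILDS):
            # each build may plant new mines on previously cleared paths (regressions)
            for p in covered:
                if random.random() < REGRESSION_RATE:
                    mined.add(p)
            repeats = int(BUDGET * repeat_fraction)
            old = random.sample(covered, min(repeats, len(covered)))
            new = [p for p in range(PATHS) if p not in covered][:BUDGET - repeats]
            for p in old + new:
                if p in mined:
                    found += 1
                    mined.discard(p)   # stepping on a mine clears it (bug gets fixed)
            covered.extend(new)
        return found

    for frac in (0.0, 0.5, 0.9):
        print(f"{frac:.0%} repeated tests -> {bugs_found(frac)} bugs found")

Vary REGRESSION_RATE and watch the balance shift: when cleared paths rarely break again, repetition buys little; when they break often, repetition starts to pay.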

How Much Is Enough?

First published: www.stickyminds.com as column feature, 9/12/01

How do you know when you're done testing? There are many answers to this question. In many projects, the question is not seriously asked or answered-- time just runs out. But whenever management is serious about good testing, then whatever else they do to decide they've tested enough, it involves the test manager telling a compelling story.

How Do You Spell Testing: a Mnemonic to Jumpstart Testing

First published: www.stickyminds.com as column feature, 5/8/01

Learn SFDPO, "San Francisco Depot" (Structure, Function, Data, Platform, Operations), and use it to analyze products and plan testing from five different angles.

Boost Your Testing Superpowers

First published: www.stickyminds.com as column feature, 11/21/01

This is an article about cheap test tools.

Heuristic Risk-Based Testing

First published: Software Testing and Quality Engineering, 11/99

This article describes a specific process for doing heuristic risk-based software testing. I've been practicing these techniques for years, but until this article I had never published them. As far as I know, at the time I wrote it, it was the only article on this subject. While writing it, I had the good fortune to get detailed reviews from Brian Marick, Brian Lawrence, and Cem Kaner. They helped me make this the article I'm most proud of having written.

Troubleshooting Risk-Based Testing

First published: Software Testing and Quality Engineering, 5/03

This article is a follow-on to my Heuristic Risk-Based Testing article. It discusses four common problems with risk-based testing and how to deal with them. I focus especially on how to involve a team in the risk-based test planning process.

Risk and Requirements-Based Testing

First published: Computer, 6/99

This article is a follow-up to the Reframing Requirements Analysis piece. I try to show how introducing the concept of product risk into the standard truisms about requirements-based testing leads to a substantially different approach to the process.

Explaining Testing to Them

First published: STQE Magazine, Nov/Dec 2001

Testers are sometimes ambassadors of the craft. We have to deal with other roles on each project: writers, programmers, tech support, product managers, etc. Often we are challenged about what we do when we test, or why it takes so long. That means you not only need to learn testing, you need to learn how to explain testing.

Test Automation Snake Oil

First published: Windows Tech Journal, 10/96

This article is my attack on the dangerously simplistic way that some GUI test tool companies peddle their wares. They lead their clients down a red carpet to waste and frustration by making reckless claims about the wonders of test automation, while downplaying all the problems associated with it. They give test automation a bad name.

I believe in responsible and useful test automation, so this article debunks the common "rah rah" surrounding the subject in the hope that you won't be played for a chump.

One example: Contrary to the implication of typical marketing literature for GUI test tools, automated testing is not the same as automatic manual testing. What observant humans do when they go through a test process is in no way duplicated or replaced by test automation, because automation cannot be aware of all the hundreds of different failure modes that a human can spot easily. I have to explicitly program automation to look for suspicious flickers and performance problems, but with humans I can say "be alert for anything strange." To paraphrase Yogi Berra, people can observe lots of problems just by looking. My automation typically finds few problems (I like automation for reasons other than finding problems, but the fact remains that somehow we need to find those problems, and that somehow is generally not through GUI test automation).
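
The point about explicit programming can be shown in a few lines. This sketch (in Python, with hypothetical function names) is my own illustration: the automated check verifies exactly the two properties it was programmed to verify, and anything else-- garbled text, an ugly layout, a flicker-- passes straight through.

    import time

    def render_page():
        """A hypothetical page that would alarm any human observer."""
        return {"status": 200, "body": "W3lcome!! <<rendering glitch>> (but the status is fine)"}

    def automated_check():
        """An automated test knows only the failure modes it was programmed to know."""
        start = time.monotonic()
        page = render_page()
        elapsed = time.monotonic() - start

        assert page["status"] == 200, "wrong status"   # explicit check #1
        assert elapsed < 2.0, "too slow"               # explicit check #2
        # Anything not asserted above is invisible to this test:
        # the garbled body text, a confusing message, a flickering repaint...
        return "PASS"

    print(automated_check())  # PASS-- the automation sees nothing strange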

Value Without Numbers

First published: Software Testing and Quality Engineering Magazine, 7/02

This article addresses the question of how to explain or justify the value of testing without having a direct numerical way of measuring it.

Software Development Process

What Software Reality is Really About

First published: Computer, 12/99

This was my final article for the Software Realities column in Computer magazine. I tried to sum up my entire position on software processes in 1400 words, so it's pretty terse. If you haven't read any other articles of mine, and you want to know what my philosophy is and what I write about, this would be a good place to start.

Process Evolution in a Mad World

First published: Software Quality Week, 1994

This is probably my most famous article. I sure worked like hell on it. I wrote it out of my experiences at Borland International from '91 to '94, as a QA manager on the C++ product line. In this article, I contrast risk-oriented process management with control-oriented process management. Although I have traveled far and wide since I first published this, and seen many things, I still think Process Evolution in a Mad World is a strong statement of how software development processes work.

The Hard Road from Methods to Practice

First published: Computer, 2/97

This was the inaugural article for my Software Realities column in IEEE Computer magazine. I edit the column, but I also write most of the articles (I am always looking for other contributors, though).

This article is about methodology gaps: gaps between what people say they do and what they really do on software projects. I'm interested in methodology gaps because of what they indicate: not a lack of discipline in this industry, but places where we need better ways of thinking about ourselves and our work.

Good Practice Hunting

First published: Cutter IT Journal, 2/99

This article is about the struggle to identify good practices. I think most of our practices evolve and become favored through a mythological, rather than scientific, process. That's okay; we can still grope toward better practices. Even so, there are some traps we can avoid, and fields of study that we can mine for insight into a better mythology of software process.

Microdynamics of Process Evolution

First published: Computer, 2/98

This article is about how software process improvement happens in the small: how tiny events shape organizations. I find that software projects, when they are good, are good because people have made them good. If process helps, it helps because people are in relationship with process, and because the process solves a problem that we perceive as important. The central role people play in making software also means that grandiose schemes of process improvement usually fail-- we're too busy and worried about doing a good job to allow ourselves to be distracted by improvement projects. For me, opportunistic improvement, solving problems that are right in front of us, is a surer way to evolve better processes.

Plans, Lies, and Videotape

First published: Computer, 6/98

Why is it so hard to make and keep commitments in software projects? Why don't people deliver on schedule? This article is about that puzzle. Here we examine a situation, caught on videotape, that occurred during a project simulation in Jerry Weinberg's Software Engineering Management workshop. We learn that there are answers to these questions other than "people are stupid" and "people don't care".

Highs and Lows of Change Control

First published: Computer, 8/98

This is partly an article about how one Silicon Valley company does change control. It's also an article where I take you behind the curtain of the official process to see what really happens in the hallway.

Reframing Requirements Analysis

First published: Computer, 2/99

In this article I relate an incident that happened to me at Apple Computer, in 1987. The incident transformed how I think of the mission of QA and the role of requirements documents in software projects. In the article I show how the requirements development process can be usefully reframed from a process of analysis and recording to one of ongoing exploratory dialog.

Enough About Process: What We Need are Heroes

First published: IEEE Software, 3/95

I'm fascinated by the fact that people who take initiative to solve ambiguous problems are key to the success of every software project. I call this software process heroism, based on Joseph Campbell's description of the hero's journey. Unfortunately, it's a subject that provokes a lot of strong feelings and confusion. There seems to be a lot of fear surrounding the simple idea that people are not easily controlled, influenced, or interchanged with one another.

I think this controversy is the single biggest indicator of how immature our field is. We can't even agree on what we are doing or what we mean, as people, in the midst of our process creations. I believe we will never be able to control software projects well, unless we give up on treating software development as a mechanical task, and start embracing it as a human system. Anyway, that's what works for me.

Gray Rebuts Bach

First published: Computer, 4/98

(Lewis Gray wrote this article; I tacked a rebuttal onto the end of it.)

My Review: This article is Lewis Gray's rebuttal of some of the points in my Microdynamics article. Lewis is a good writer, and he raises some genuinely important issues. It's only his reasoning and conclusions that I dispute. My rebuttal is included.

Lewis believes in standards. I believe in people. Lewis believes in people, too, but he thinks they perform better when they use standards. I believe in standards, too, but only so long as they are good standards, relevant to the situation, and subordinate to human judgment. I suspect Lewis and I differ mainly in what we trust and distrust. Certainly, our work experiences are very different.

I challenge you, on your software project, to look around once in a while and notice what people are doing with their time. Notice how rarely skilled people refer to process standards. Notice how they use their minds, their technology, and their relationships to get things done. It's not enough to ask them what they do (often we tell each other that we do things even when we really don't do much of them); you have to observe personally. Observe yourself, too.

Sure, we all must establish certain ground rules and agreements by which we will work. All I'm saying is hey, let's keep perspective: software projects are 99% people. Let's look to our own skills and collaborations for salvation from failing projects, not to some corporate process standard.

The Essence of the CMM

First published: Computer, 6/97

(Judy Bamberger wrote this article. I merely edited.)

My Review: I am no fan of the Capability Maturity Model. I think it will ultimately go down as an embarrassing dead-end in the annals of software process engineering.

But I am a fan of Judy Bamberger's way of thinking about the CMM. As long as I can have Judy (or another CMM tamer, Judah Mogilensky) on my team, I wouldn't mind implementing the CMM. That's because Judy maintains perspective on how people figure in the maturity equation. Here are her suggestions for approaching CMM compliance.

Of Crazy Numbers and Release Criteria

First published: Computer, 12/98

(Johanna Rothman wrote this article. I merely edited.)

My Review: This is a great article! Johanna takes us behind the scenes of a curious project objective: "no more than 36 open high-priority bugs." She responds to my initial shock and consternation at such an arbitrary number by explaining the interesting social dynamics that guided its selection.

Someday I would like to write another article about how Johanna wrote this one. Our dialog began as sort of a "'tis and 'tisn't" argument about numeric release criteria (I'm opposed to them, as a general rule; Johanna recommends them, as a general rule). As we examined each other's premises, especially the ones that started as unspoken assumptions, we discovered that our thought processes had a lot more in common than we first realized. For both of us, general rules of software project management are entry points for exploring solutions, not sacred beliefs. Once she revealed the hidden reasons for choosing 36, I found myself agreeing with her. In fact, I think it was darn clever to do what she did, in the way that she did it.

So, the story of this article is partly a story about reconciling apparently irreconcilable points of view, through mutual respect and a passion to understand. I think the software methodology industry needs a lot more of that.

Playing the Expert Game

First published: Computer, 8/99

(Jonathan Bach wrote this article. I edited it.)

My Review: This article is the very condensed story (we had to cut about 2000 words) of my brother's technical education at Microsoft. I love my brother, but that's not a good enough reason for you to read this. You should read it because, odds are, his story has a lot to do with the experience you had when you first started working in the technical business (or will have, if you're a newbie). It also has a lot to do with how all of us struggle to keep up with technological trends.

The title refers to a phenomenon we call "The Expert Game": our willingness to overlook a lack of technical knowledge in our co-workers as long as they can do a good simulation of someone who knows what they are doing. We are willing to do this because we know that nobody knows everything we'd like them to know, and because we ourselves don't know everything we should.

Software Engineering and UCITA

First published: Journal of Computer and Information Law, Vol. 18, #2, Winter 1999/2000

(by Cem Kaner, J.D., Ph.D.)

My Review: A comprehensive opinion about the dangers of the Uniform Computer Information Transactions Act -- a stand-alone draft law with many profound restrictions for consumers, small software companies, and licensees of software. Previously known as Article 2B (a proposed amendment to the Uniform Commercial Code), the law has disturbing implications -- one of which is a provision prohibiting licensees from publishing criticism of the licensed software. Cem was present at meetings held by the American Law Institute and the National Conference of Commissioners on Uniform State Laws to discuss and draft the article before it became UCITA, and he presents here a detailed and well-researched account of the law's implications.
