Exploratory Testing Skaters

When Cem Kaner introduced the term “exploratory testing” in the mid-’80s, everyone ignored it. When I picked up the term and ran with it, I was mostly ignored. But slowly, it spread through the little community that would become the Context-Driven School. I began talking about it in 1990, and created the first ET class in 1996. It wasn’t until 1999 that Cem and I looked around and noticed that people who were not part of our school had begun to speak and write about it, too.

When we looked at what some of those people were saying, yikes! There was a lot of misunderstanding out there. So, we just kept plugging along and running our peer conferences and hoping that the good would outweigh the bad. I still think that will happen in the long run.

But sometimes it’s hard to stomach how the idea gets twisted. Case in point: James Whittaker, an academic who has not been part of the ET leadership group, and who has little or no experience as a tester or test manager on an industrial software project, has published a book called Exploratory Software Testing.

Whatever Whittaker means when he talks about exploratory testing, it is NOT what those of us who’ve spent the last 20 years nurturing and developing ET mean by it. As far as I can tell, he has not made more than a shallow study of it. I will probably not write a detailed review (though his publisher asked me to look at it before it was published), because I get too angry when I talk about it, and I would rather not be angry. But Adam Goucher has published his review here.

Another guy who shows up at the conferences, B.J. Rollison, also gets ET wrong. He’s done what he calls “empirical research” into ET at Microsoft. Since he, again, has not engaged the community that first developed the concept and practices of ET, it’s not altogether surprising that his “research” is based on a poor understanding of ET (for instance, he insists that it’s a technique rather than an approach, which is like confusing the institution of democracy with the mechanics of voting) and was apparently carried out with untrained subjects, since Rollison himself is not trained in what I recognize as exploratory testing skills.

Experimental research into ET can be done, but of course any such work is in the realm of social science, not computer science, because ET is a social and psychological phenomenon. (See the book Exploring Science for an example of what such research looks like.)

Now even within the group of us who’ve been sharing notes, debating, and discovering the roots of professional exploratory thinking in the fields of epistemology, cognitive psychology, and the philosophy and study of scientific practice, there are strong differences of opinion. There are people I disagree with (or who just dislike me) whom I still recognize as thoughtful leaders in the realm of exploratory testing (James Lyndsay and Elisabeth Hendrickson are two examples). Perhaps Whittaker and Rollison will become rivals who make interesting discoveries and contributions at some point. Time will tell. Right now, in my opinion, they are just skating on the surface of this subject.

Sapience and Blowing People’s Minds

I told a rival that I don’t use the term “manual testing” and that I prefer the term “sapient testing” because it’s more to the point. This is evident in the first definition of the word “manual” in the Oxford English Dictionary: “1. a. Of work, an action, a skill, etc.: of or relating to the hand or hands; done or performed with the hands; involving physical rather than mental exertion. Freq. in manual labour.” Sapient, on the other hand, means “wise.”

He laughed and said, “Bach, you are always making words up.” Then he told me that, in his opinion, manual testing did not evoke the concept of unskilled manual labor. Now, other than establishing that the guy doesn’t have an online account with the O.E.D. (definition of “sweeeeet!”: “an online O.E.D. account”), or perhaps doesn’t consider dictionaries to be useful sources of information about what words mean, I see something else in his reaction: I blew his mind. What I said doesn’t intersect with anything in his education.

To understand me, the man will have to use his sapience, rather than responding manually (i.e. with his hands).

In other words, I notice that some of my rivals in the testing industry don’t merely disagree with me; they apparently don’t comprehend what I’m saying. Example: after some ten hours of solid debate with me, over several sittings, Stuart Reid (who is working on a software testing standard, of all preposterous things) told a colleague of mine that he believed I don’t truly mean the things I said in those debates, but merely said them to “be provocative.” Huh. That’s some serious cognitive dissonance you got going, Stu, when the only way you can process what I’m saying is to declare, essentially, that it was all just a dream.

Of course, I don’t think this is an intelligence problem. I think this is a lack-of-effort-to-use-intelligence problem. It’s not convenient for certain consultants to tear up their out-of-date books and classes and respond to the challenge of, um, the last 30 years of development of the craft. So they continue to teach and preach ideas from the seventies (or create testing standards based on them, because they believe not enough people appreciate testing disco).

Anyway, in the Context-Driven community’s latest attempt to explain the ins and outs and vital subtleties of testing, Michael Bolton has come up with a promising tack. Maybe this will help. He’s drawing a distinction between testing and checking.

Brace yourselves for insight. A lot of what people call testing is actually mere checking. But even checking requires testing intelligence to design and do well. This adds specificity to my concept of sapient testing. Here are Michael’s seminal posts on the subject:

  1. http://www.developsense.com/2009/08/testing-vs-checking.html
  2. http://www.developsense.com/2009/09/transpection-and-three-elements-of.html
  3. http://www.developsense.com/2009/09/pass-vs-fail-vs-is-there-problem-here.html
  4. http://www.developsense.com/2009/09/elements-of-testing-and-checking.html

When Michael first made the distinction between testing and checking, I was annoyed. Truly. It blew my mind in that bad way. I thought he was manufacturing a distinction that we didn’t need. I decided to ignore it. Then he called me and asked “So what do you think of my checking vs. testing article?” I had to say I didn’t like it at all. We argued…

…and he convinced me that it was a good idea. Thank you, dialectical learning! Debate has saved me again!

I now agree that it’s a practical distinction that can be used as a lens to focus on the quality of a test process. I do have to get used to the words, though. I now see a difference between automated testing and automated checking, for instance: automated testing means testing supported by tools; automated checking means specific operations, observations, and verifications that are carried out entirely with tools. Automated testing may INCLUDE automated checking as one component; automated checking, however, does NOT include testing.
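
To make the words concrete, here’s a minimal sketch of my own (the function, names, and values are hypothetical illustrations, not anything from Michael’s posts). An automated check boils down to a specific operation, an observation, and a decision rule, all executed by the machine:

    # A hypothetical function under test.
    def apply_discount(price, rate):
        return price * (1 - rate)

    # An automated check: one specific operation, observation, and
    # verification, carried out entirely by the tool.
    def check_discount():
        result = apply_discount(100.00, 0.15)  # operation
        observed = round(result, 2)            # observation
        return observed == 85.00               # decision rule: pass or fail

    print("PASS" if check_discount() else "FAIL")

The machine can run that check a million times, but it took a sapient tester to decide that this price, this rate, and this expectation were worth checking in the first place, and only a sapient tester will notice the problems the check was never designed to look for.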

Making this distinction is exactly like distinguishing between a programmer and a compiler. We do not speak of a compiler “writing a program” in assembly language when it compiles C++ code. We do not think that we can fire the programmers because the compiler provides “automated programming.” The same thing goes for testing. Or… does that blow your mind?