Why Labels For Test Techniques?

Steve Swanson is very new to testing. I predict he has a great future. He has already noticed that the common idea of boundary testing is almost content-free. Michael Bolton and I do a whole session on how to turn boundary testing into a much more intellectually engaging activity. At the end of his post, he identifies one of the major weaknesses of the classic notion of boundary testing. This confirmed for me that he is a mind to watch.

Steve questions the idea of naming test techniques:

What’s the point of having names for things? To me having a name limits what you see and limits creativity. If you feel that certain things are not to be considered boundary tests, then maybe you won’t do them. Maybe you are pigeon-holing yourself into constraints that do not need to be there. Furthermore it seems that everyone has a different idea of what a “boundary” test is. If that is the case then why even have a name for it?

Dear Steve,

I’ve been studying testing for 19 years, and let me tell you, a lot of things people write about it are fairy tales. This is the first reason why you are confused about what’s in a name: most people (not everyone) are also confused, and thus just copy what they see other people write, without thinking much about it.

To use an example from my own history, I used to talk about the difference between conformance testing and deviance testing. I learned this distinction at Apple Computer. For about five years I talked about them, until one day I realized that it is an empty and unhelpful distinction. It was not a random realization, but was part of a process of systematically doubting all my terminology and assumptions about testing, in traditional Cartesian fashion. I just couldn’t find a good enough reason to retain the distinction of testing into conformance and deviance.

It looks like you are on a similar path. If you continue on it, the kind of thinking you are doing will A) lead you to better resources to draw from about testing, B) allow you to become one of the leaders helping us out of this morass, and C) ensure that you will be attacked by certain bloggers from Microsoft and elsewhere who just hate people who apply philosophical methods to such an apparently straightforward and automatable task as testing. (Speaking of philosophy of terminology, you may be interested in the Nominalism vs. Realism conversation, or how the pragmatism movement swept aside that whole debate, or how the structuralists and post-modernists study the semiotics of discourse. All these things relate to the issues you raise about terminology.)

I will tell you now what book you need to read that will help you more than any other on this planet: Introduction to General Systems Thinking, by Gerald M. Weinberg. It’s what I consider to be the fundamental textbook of software testing, yet not 1 in 100 testers knows about it.

A quick answer to your issue with names…

Terminology is useful for at least these reasons:

  1. A term can be a generative tool. It can evoke an idea or a thought process that you are interested in. (This is different from using a term to classify or label something, which as you point out limits us without necessarily helping us.) An example of the generative use of terms in this way is the HAZOP process which uses “guidewords” to focus analysis. Even a generative usage is susceptible to bias, which is why I use multiple, diverse, overlapping terms.
  2. A term can serve as a compact way to direct a co-worker. When I manage a test team, I establish terminology so that when I say “do boundary testing” I can expect the tester to know what I’m asking him to do without necessarily explaining every little thing. The term is thus attached to training, dialog, and shared experiences. (This needn’t be very limiting, although we must guard against settling into a rut and having only a narrow interpretation of terms.)
  3. A term can serve as a label indicating the presence of a bigger story under the surface, much like a manila folder marked “boundary testing” would be expected to hold some papers or other information about boundary testing. The danger, I think you’ve noticed, is that ignorant testers may quite happily pass folders back and forth that are duly labelled but are quite perfectly empty. You have to open the folders on a regular basis by asking “can you describe, specifically, what you did when you did ‘exploratory testing’? Can you show me?”
  4. A term can hypnotize people. (I’m not recommending this; I’m warning you against it). Terminology, especially obscure terminology, is often used in testing to give the appearance of knowledge where there is no knowledge, in hopes that the client will fall asleep in the false assumption that everything is oooookkkkkkaaaayyyyyy. You appear not to be susceptible to such hypnosis. (Adam White has a similar resistance.)

I expect to see more examples of skeptical inquiry on your blog, as you wrestle with testing, Steve. I hope you find, as I do, that it’s a rewarding occupation.

Can a Non-Sapient Test Exist?

My vote for the World’s Most Inquisitive Tester is Shrini Kulkarni. He asks: Do you believe that “non sapient” tests exist, or for that matter, can any part (seemingly a very negligibly small portion of the entire universe of skilled human testing) of testing be “non sapient”?

A sapient process is any process that relies for its success on an appropriately skilled human. A sapient test would therefore be a test that relies for its success on an appropriately skilled human.

Is there such a thing as a non-sapient test? My answer is yes, depending on what you consider to be part of the test.

An appropriately skilled programmer could decide to write a clever bit of test code that automatically watches for a potentially dangerous condition and throws an exception if that condition occurs, at which point an appropriately skilled human would have to make some appropriate response. In that case, there are three interesting parts to this testing:

  • the test code itself (not sapient)
  • the test design process (sapient)
  • the process of reacting to the test result (sapient)

All three parts together would be called testing, and that would be sapient testing.
The test artifact (what a lot of people would call the test) is not sapient. However, the process of understanding it, knowing how to maintain it, and knowing when to remove it is sapient.
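To make the distinction concrete, here is a minimal Python sketch of the kind of non-sapient artifact described above. The function name, the "dangerous condition," and the values are all hypothetical illustrations of mine, not anything from the original example:

```python
# A hypothetical non-sapient test artifact: it mechanically watches for a
# potentially dangerous condition and raises an exception when the condition
# occurs. Deciding what the failure *means* -- and what to do about it --
# remains a sapient, human activity.

def check_balance_never_negative(balance: int) -> None:
    """Raise if the (hypothetical) dangerous condition occurs: a negative balance."""
    if balance < 0:
        raise AssertionError(
            f"Dangerous condition detected: balance went negative ({balance})"
        )

# The artifact runs without judgment; judgment enters only when it fails.
for observed in (100, 42, 0):
    check_balance_never_negative(observed)
print("no dangerous condition observed")
```

Designing this check, and responding sensibly when it raises, are the sapient parts of the testing; the code itself is not.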

The process of running the test is sapient, because running it necessarily means doing the right thing if it failed (or if it passed and wasn’t supposed to pass).

A lot of automation enthusiasts apparently think that the test code is the important part, and that test design and test execution management are but tiny little problems. I think the truth is that most automation enthusiasts just like making things go. They are not comfortable with testing sapiently, since probably no one has trained them to do that. They focus instead on what is, for them, the fun part of the problem, regardless of whether it’s the important part. What aids them in this delinquency is that their clients often have no clue what good testing is, anyway, so they are attracted to whatever is very visible, like pretty green bars and pie charts!

I experienced this first hand when I wrote a log file analyzer and a client simulator driver for a patent management system. During the year I managed testing on that project, this tool was the only thing that caused the CEO to come into the testing group and say “cool!” Yes, he liked the pretty graphs and Star Trekkie moving displays. While I was showing the system to my CEO, one of the testers who worked for me– a strictly sapient tester– watched quietly and then interjected a simple question: “How is this tool going to help us find any important problem we wouldn’t otherwise have found?” Being the clever and experienced test manager that I was at the time, I was able to concoct a plausible-sounding response, off-the-cuff. But in truth his question rocked me, because I had become so interested in my test tool that I had actually stopped asking myself what test strategy goal I was supposed to be serving with it.

Bless that tester for recalling me to my duty.

What is Test Automation?

There seems to be a lot of confusion about this.

Test automation is any use of tools to aid testing. Test automation has been around since DAY ONE of the computing industry. And never in that history has test automation been an “automatic way of doing what testers were doing before”, unless you ignore a lot of what testers actually do.

For the same reason, a space probe is not “an automatic way of doing what planetary scientists were doing before”, but rather a tool that extends the reach of what scientists can do. Test automation means extending the reach of testers.

Test automation is not at all new. What’s comparatively new is the idea of a tester. Long ago, in the late 40’s, dedicated testers were almost unknown. Programmers tested software. Throughout the sixties, papers about testing, such as the proceedings of the IFIPS conferences, almost always assumed that programmers tested the software they built. Testing was often not distinguished from debugging. As larger and more complex systems arose, the idea of dedicated software testers came into vogue. In 1972, at Chapel Hill, the first conference on testing software was held, and the proceedings of that conference show that testing was beginning to be thought of as a discipline worthy of study apart from programming.

At that conference, I think they took a wrong turn. There was much hope and enthusiasm for the future of test automation. That hope has not panned out. Not for lack of trying. More for lack of understanding.

What they didn’t understand, and what many contemporary programmers and testers also don’t get, is that good testing is inherently a human process– not incidentally, not accidentally, but INHERENTLY. It’s highly social and psychological. The more complex software is, the more important it is that humans engage intellectually to identify and solve testing problems. But the Chapel Hill conference was dominated by men trained as programmers and electrical engineers, not people who professionally thought about thinking or who trained people to think.

(Who did, you ask? Jerry Weinberg. His 1965 Ph.D. thesis on problem solving is fabulous. He had written The Psychology of Computer Programming in 1971, a number of papers about testing during the sixties, including a section on testing in his 1961 book, Fundamentals of Computer Programming. He taught problem solving classes during the 60’s too, culminating in his 1974 book Introduction to General Systems Thinking. I regret that Jerry didn’t keynote at the Chapel Hill conference, but he will at the next CAST conference, in Toronto.)

The idea of a dedicated trained tester is newer than the idea of test automation, but unlike test automation, it’s an idea that hasn’t been tried on a large scale, because tester training is so terrible.

Pretending that testing is a simple technical process of making API calls doesn’t make the boogie beasts go away. They are still there, Microsoft. My wife still needs me to troubleshoot Microsoft Office, a product produced increasingly, I’m told, by programmers who work intensively on test tools because they never learned how to test. (My colleague Michael Bolton recently ran a testing class at Microsoft, though, so perhaps there is hope.)

Test automation cannot reproduce the thinking that testers do when they conceive of tests, control tests, modify tests, and observe and evaluate the product. Test automation cannot perform sapient testing. Therefore, automation of testing does NOT mean automation of the service provided by the software tester.

In summary, test automation means applying tools to testing. Test automation is ancient, non-programming testers are a newer idea, but what the industry has not yet really tried, except on a small and local scale, is systematically training the minds of testers, instead of just calling them “test engineers” or “SDETs”, giving them tools they don’t know how to use, and hoping for the best.

(BTW, I’m a programmer. I was hand-coding machine language on my Apple II before I ever heard of things called “assemblers” that automated the process. I also ran the Borland Turbo Debugger test team on the Borland C++ project, in the early 90’s. Before that I ran a test tool development team at Apple Computer. When it comes to programmer testing, GUI test automation, and non-GUI test automation, I’ve been there and done that.

It is my experience of the profound limitations of test automation that makes me a bit impatient with new generations of testers and programmers who seem to think no one ever thought of test tools before they came along.)

Sapient Testing Rules

Hey, somebody at AST must have read my blog when I coined the term “sapient testing”, because they named their magazine after it.

I’m still waiting for people to pick up on my other coinage: mythomimetic, which is an adjective meaning “not informed by experience or wisdom, but rather hearsay and wishful thinking.” I’ll use it in a sentence: “The speaker peppered his talk with mythomimetic cliches such as ‘you can’t control what you can’t measure’.”

Sapient testing is the antithesis of mythomimetic test automation.

Self-Discipline Redux

I want to follow up on what I said about discipline in yesterday’s post. I said that discipline is “motivation without reason.” I stand by that, as far as it goes, but there’s more to discipline than that, and I want to tell the whole story.

The sense in which I am using the word discipline is the fourth definition in the O.E.D.: “A system or method for the maintenance of order; a system of rules for conduct.” Self-discipline means a way to keep yourself in order. To have discipline you need some idea of order, and some way of maintaining order.

By motivation without reason, I mean a way of compelling someone to act without appealing to their judgment or desires. “Don’t argue with me, just do as I say” is an attempt to invoke discipline. When my executive mind tries to force my sleepy body to get out of bed and go to work, it’s trying to maintain order in my life. From the point of view of my body, some outsider is telling me to wake up when waking up does not seem reasonable. Hence motivation without reason.

As I was saying yesterday, I don’t have a lot of self-discipline. In other words, I don’t go around with a strong and specific idea of what I should do at any particular time, and even when I do have a strong idea, I don’t generally force myself to do those things. What I do instead is I manage myself by a sort of “consensus” of my various parts. Sort of like a parliamentary system in my head.

If sometimes I look like I am exhibiting a lot of discipline, that’s probably not because I am controlling my wild sinning self with the iron bridle of wisdom. More likely, I’m just not experiencing any conflict about what I want to do. That’s what I meant about not needing discipline if you know what you’re doing. When all your parts are in harmony, order unfolds spontaneously and naturally. You don’t need much of a system to maintain order.

But getting to harmony can be challenging. It means, in my projects and in my life, I have to do a lot of wandering and playing and procrastinating until I decide what I really want to do.

I wouldn’t necessarily recommend that anyone else live this way– I often envy people who can point themselves in one direction with conviction– but wandering does have its advantages, especially for a tester.

Methodology Debates: Traps and Transformations

(This article is adapted from work I did with Johanna Rothman, at the 1st Amplifying Your Effectiveness conference. It’s never been widely published, so here you go.)

As a context-driven testing methodologist, I am required to think through the methods I use. Sometimes that means debating methodology with people who have a different view about what should be done. Over time, I’ve gained a lot of experience in debate. One thing I’ve learned is that most people have good ideas, but few people know how to debate them. This is too bad, because a successful debate can make a community stronger, while avoiding debates creates a nurturing environment for weak ideas. Let’s look at how to avoid the traps that make debates fail, and how to transform disagreement into powerful consensus.

Sometimes a debate is really part of a war. The advice below won’t help much if that is the case. This advice is more for situations where you are highly motivated to create or maintain a working relationship with someone you disagree with– such as when you work in the next cubicle from the guy.


  • Conflicting Terminology: Be alert to how you are using technical terms. A common term like “bug” has different meanings to different people. If someone says “Unit testing is absolutely essential to good software quality,” among your first concerns should be “What does he mean by ‘unit testing’, ‘essential’, and ‘quality’?” Beware, sometimes a debate about definitions bears important fruit, but it can also be another trap. You can spend all your energy on them without necessarily touching the marrow of the subject. On the other hand, you can allow yourself to understand and even use someone else’s terminology in your debate without committing yourself to changing your preferred terminology in general.
  • Paradigm Conflict: A paradigm is an all-inclusive way of explaining the world, generally tied into terminology and assumptions about practices and contexts. Two different paradigms may explain the same phenomena in totally different ways. When two people from different paradigms come together, each may seem insane to the other. Whenever you feel that your opponent is insane, maybe that’s time to stop and consider that you are trying to cross a paradigmatic boundary. In which case, you should talk about that, first.
  • Ambiguous Metrics: Don’t be seduced by numbers. They can mean anything. The problem is knowing what they do, in fact, mean. When someone quotes numbers at me, I wonder how the metric was collected, and what influenced the people who collected them. I wonder if the numbers were sanitized in any way. For instance, when someone tells me that he performed 1000 test cases, I wonder if he’s talking about trivial test cases, or vital ones. There’s no way to know unless I personally review the tests, or conduct a detailed interview of the tester.
  • Confusing Feeling and Rationality: Beware of confusing feeling communication with rational communication. Be alert to the intensity of the feelings associated with the ideas being presented. Many debates that seem to be about ideas may indeed be about loyalty, trust, respect, and other fundamental issues. A statement like “C++ is the best language in the world. All other languages are garbage” may actually mean “C++ is the only language I know. I am comfortable with what I know. I don’t want to concern myself with languages I don’t already know, because then I feel like a beginner, again.” There’s an old saying that you can’t use logic to refute a conclusion that wasn’t arrived at through logic. That may not be strictly true, but it’s a helpful guideline. So, if you sense a strange intensity around the debate, your best bet may be to stop talking about ideas and start exploring the feelings.
  • Confusing Outcome and Understanding: Sometimes one person can be debating for the purpose of getting a particular outcome, while the other person is debating to understand the subject better, or help you understand them. Confusing these approaches can lead to a lot of unnecessary pain. So, consider saying what your goal is, and ask the other person what they want to get out of the debate.
  • Hidden Context: You may not know enough about the world the other person lives in. Maybe work life for them is completely different than it is for you. Maybe they live under a different set of requirements and challenges. Try saying “I want to understand better why you feel the way you do. Can you tell me more about your [life, situation, work, company, etc.]?”
  • Hidden History: You may not know enough about other debates and other struggles that shaped the other person’s position. If you notice that the other person seems to be making many incorrect assumptions about what you mean, or putting words in your mouth, consider asking something like “Have you ever had this argument with someone else?”
  • Hidden Goals: Not knowing what the other person wants from you. You might try learning about that by asking “Are we talking about the right things?” or “What would you like me to do?” Keep any hint of sarcasm out of your voice when you say that. Your intent should be to learn about what they want, because maybe you can give it to them without compromising anything that’s important to you.
  • False Urgency: Feeling like you are trapped and have to debate right now. It’s always fair to get prepared to discuss a difficult subject. You don’t have to debate someone at a particular time just because that person feels like doing it right then. One way to get out of this trap is just to say “This subject is important to me, but I’m not prepared to debate it right now.”
  • Flipping the Bozo Bit: If you question the sanity, good faith, experience, or intelligence of the person who disagrees with you, then the debate will probably end right there. You’ll have a war, instead. So, if you do that, in the heat of the moment, your best bet for recovery may be to take a break. When you come back, ask questions and listen carefully to be sure you understand what the other guy is trying to say.
  • Short-Term Focus: Hey, think of the future. Successful spouses know that the ability to lose an argument gracefully can help strengthen the marriage. I lose arguments to my wife so often that she gives me anything I want. The same goes for teams. Consider a longer term view of the debate. For instance, if you sense an impasse, you could say “I’m worried that we’re arguing too much. Let’s do it your way.” or “Tell you what: let’s try it your way as an experiment, and see what happens.” or “Maybe we need to get some more information before we can come to agreement on this.”

Transforming Disagreement

An important part of transforming disagreement is to synchronize your terminology and frames of reference, so that you’re talking about the same thing (avoiding the “pro-life vs. pro-choice” type of impasse). Another big part is changing a view of the situation that allows only one choice into one that allows many reasonable choices (the “reasonable people can bet on different horses” position). Here are some ideas for how to do that:

  • Transform absolute statements into context-specific statements. Consider changing “X is true” to “In situation Y, X is true.” In other words, make your assumptions explicit. That allows the other person to say “I’m talking about a different situation.”
  • Transform certainties into probabilities and alternatives. Consider changing “X is true” to “X is usually true” or “X, Y, or Z can be true, but X is the most likely.” That allows the other person to question the basis of your probability estimate, but it also opens the door to the possibility of resolving the disagreement as a simpler matter of differing opinions on probability rather than the more fundamental problem of what is possible.
  • Transform rules into heuristics. Consider changing “You should do X” to something like “If you have problem Y and want to solve it, doing something like X might help.” The first statement is probably a suggestion in the clothing of a moral imperative. But in technical work, we are usually not dealing with morals, but rather with problems. If someone tells me that I should write a test plan according to the IEEE-829 template, then I wonder what problem that will solve, whether I indeed have that problem, how important that problem is, whether 829 would solve it, and what other ways that same problem might be solved.
  • Transform implicit stakeholders and concerns into explicit stakeholders and concerns. Consider changing “X is bad” to “I don’t like X” or “I’m worried about X” or “Stakeholder Y doesn’t like X.” There are no judgments without a judger. Bring the judger out into the open, instead of using language that makes an opinion sound like a law of physics. This opens the door to talk about who matters and who gets to decide, which can be a more important issue than the decision itself. Another response you can make to “X is bad” is to ask “compared to what?” which will bring out the unspecified standard.
  • Translate the other person’s story into your terms and check for accuracy. Consider saying something like “I want to make sure I understand what you’re telling me. You’re saying that…” then follow with “Does that sound right?” and listen for agreement. If you sense a developing impasse, try suspending your part of the argument and become an interviewer, asking questions to make sure the other person’s story is fully told. Sometimes that’s a good last resort option. If they challenge you to prove them wrong or demand a reply, you can say “It’s a difficult issue. I need to think about it some more.”
  • Translate the ideas into a diagram. Try drawing a picture that shows both views of the problem. Sometimes that helps put a disagreement into perspective (literally). This can help especially in a “blind men and the elephant” situation, where people are arguing because they are looking at different parts of the same thing, without realizing it. For instance, if I argue that testing should start late, and someone else argues that testing should start early, we can draw a timeline and put things on the timeline that represent the various issues we’re debating. We may discover that we are making different assumptions about the cost of bugs curve, and which point we can draw several curves and discuss the forces that affect them.
  • Translate total disagreement into shades of agreement. Do you completely disagree with the other person, or disagree just a little? Consider looking at it as shades of agreement. Is it total opposition or is it just discomfort. This is important because I know, sometimes, I begin an argument with a vague unease about someone’s point of view. If they then react defensively to that, as if I’ve attacked them, then I might feel driven firmly to the other side of the debate. Sometimes when looking for shades of agreement, you discover that you’ve been in violent agreement all along.
  • Transform your goal from being right to being a team. Is there a way to look at the issue being debated as related to the goal of being a strong team? This is something you can do in your own mind to reframe the debate. Is it possible that the other person is arguing less from the force of logic and more from the fear of being ignored? If so, then being a good listener may do more to resolve the debate than being a good thinker. Every debate is a chance to strengthen a relationship. If you’re on the “right” side, you can strengthen it by being a gracious winner and avoiding I-told-you-so behavior. If you’re on the “wrong” side, you can strengthen the team by publicly acknowledging that you have changed your mind, that you have been persuaded. When you don’t know who is right, you can still respect feelings and consider how the outcome and style of the debate might harm your ability to work together.
  • Transform conclusions to processes. If the other person is holding onto a conclusion you disagree with, consider addressing the process by which they came to adopt that conclusion. Talk about whether that process was appropriate and whether it could be revisited.
  • Express faith in the other person. If the debate gets tense, pause and remind the other person that you respect his good faith and intentions. But only say that if it’s true. If it’s not true, then you should stop debating about the idea immediately, and deal instead with your feelings of mistrust. Any debate that’s not based on trust is doomed from the start, unless of course it’s not really a debate, but a war, a game, or a performance put on for an audience.
  • Wait and listen. Sometimes, a conversation looks like a debate, and feels like a debate, but is actually something else. Sometimes we just need to vent for a bit, and be heard. That’s one reason why being a good listener is not only polite, but eminently practical.
  • Express appreciation when someone tries to transform your position. When you notice someone making an effort to use these transformations in a conversation with you, thank them. This is a good thing. It’s a sign that they are trying to connect with you and help you express your ideas.


Recently I posted a sort-of attack on Jim Pensyl, who had posted a sort-of attack on my community. Then an interesting thing happened. He withdrew his blog post and called me on the phone. That was unexpected. Almost nobody de-escalates that way. The two normal responses are A) abandon the debate, or B) continue the debate until everyone is exhausted or a stable point of agreement or disagreement is achieved. Jim chose a third way C) transcend the debate by bidding for a better relationship with his opponent. I want to try C more often. C takes special skill and guts.

Jim and I talked on the phone for two hours about testing and context, and in that conversation I think we forged a collegial bond. I think I know what he’s trying to do, now, and I suspect I can be more of a service to him than a hindrance; and he to me. Considering how I felt before the phone call, it’s a spectacular result.

Collegiality is much needed in our industry. And I’m not talking about mere politeness or live-and-let-live passive rivalry. I’m talking about people who do the hard work of creating connections with each other that allow for differences while also constructively questioning differences. This is a matter of chemistry, sometimes. There are people whom I have been completely unable to connect with, despite my best efforts. It’s also a matter of motivation, because best efforts take a lot of energy.

My favorite colleagues are the ones that know how to criticize me and aren’t afraid to do it. Steve Smith provides a great example. I’ve known Steve for 10 years.

This has been a pretty good year for me, in colleague recruiting terms. I’ve gotten to know two new systems thinkers, Ben Simo and Matthew Heusser. And two people I knew before, Karen Johnson and Antony Marcano, have revealed depths of wisdom and talent I had not expected.

One colleague whose ideas are beginning to command my attention is John McConda, who just wrote this wonderful post about collegiality. Check it out.