Manual Tests Cannot Be Automated (DEPRECATED)

[Note: This post is here only to serve as a historical example of how I used to speak about “automated testing.” My language has evolved. The sentiment of this post is still valid, but I have become more careful– and I think more professional– in my use of terms.]

I enjoy using tools to support my testing. As a former production coder, I find that automated tests can be a refreshing respite from the relatively imponderable world of product analysis and heuristic test design (I solve sudoku puzzles for the same reason). You know, the first tests I ever wrote were automated. I didn’t even distinguish between automated and manual tests for the first couple of years of my career.

Also, for the first six years or so, I had no way to articulate the role of skill in testing. Looking back, I remember making a lot of notes, reading a lot of books, and having a feeling of struggling to wake up. Not until 1993 did my eyes start to open.

My understanding of cognitive skills of testing and my understanding of test automation are linked, so it was some years before I came to understand what I now propose as the first rule of test automation:

Test Automation Rule #1: A good manual test cannot be automated.

No good manual test has ever been automated, nor ever will be, unless and until the technology to duplicate human brains becomes available. Well, wait, let me check the Wired magazine newsfeed… Nope, still no human brain scanner/emulators.

(Please, before you all write comments about the importance and power of automated testing, read a little bit further.)

It is certainly possible to create a powerful and useful automated test. That test, however, will never have been a good manual test. If you then read and hand-execute the code– if you do exactly what it tells you– then congratulations, you will have performed a poor manual test.

Automation rule #1 is based on the fact that humans have the ability to do things, notice things, and analyze things that computers cannot. This is true even of “unskilled” testers. We all know this, but just in case, I sprinkle exercises to demonstrate this fact throughout my testing classes. I give students products to test that have no specifications. They are able to report many interesting bugs in these products without any instructions from me, or any other “programmer.”

A classic approach to process improvement is to dumb down humans to make them behave like machines. This is done because process improvement people generally don’t have the training or inclination to observe, describe, or evaluate what people actually do. Human behavior is frightening to such process specialists, whereas machines are predictable and lawful. Someone more comfortable with machines sees manual tests as just badly written algorithms performed ineptly by sugar-carbon blobs wearing contractor badges who drift about like slightly-more-motivated-than-average jellyfish.

Rather than banishing human qualities, another approach to process improvement is to harness them. I train testers to take control of their mental models and devise powerful questions to probe the technology in front of them. This is a process of self-programming. In this way of working, test automation is seen as an extension of the human mind, not a substitute.

A quick image of this paradigm might be the Mars Rover program. Note that the Mars Rovers are completely automated, in the sense that no human is on Mars. Yet they are completely directed by humans. Another example would be a deep sea research submarine. Without the submarine, we couldn’t explore the deep ocean. But without humans, the submarines wouldn’t be exploring at all.

I love test automation, but I rarely approach it by looking at manual tests and asking myself “how can I make the computer do that?” Instead, I ask myself how I can use tools to augment and improve the human testing activity. I also consider what things the computers can do without humans around, but again, that is not automating good manual tests, it is creating something new.
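
To make that concrete, here is a minimal sketch of what I mean by a tool that extends the tester rather than replacing him. It is written in Python, and the normalize_name() function is an invented stand-in for real product code. The tool generates and runs a pile of messy inputs, then logs everything for a human to scan for surprises; it does not decide pass or fail on its own.

    # A sketch of tool-assisted testing: the tool does the tedious generation
    # and execution; a human reviews the log and does the evaluating.
    import random
    import string

    def normalize_name(raw):
        # Hypothetical stand-in for the real code under test.
        return raw.strip().title()

    def random_name(rng):
        # Deliberately messy inputs: odd lengths, stray whitespace, punctuation.
        chars = string.ascii_letters + "  '-."
        return "".join(rng.choice(chars) for _ in range(rng.randint(0, 30)))

    def survey(trials=50, seed=0):
        rng = random.Random(seed)
        for _ in range(trials):
            raw = random_name(rng)
            try:
                print(f"{raw!r:35} -> {normalize_name(raw)!r}")
            except Exception as exc:
                # Crashes are interesting; show them, don't hide them.
                print(f"{raw!r:35} -> CRASHED: {exc}")

    if __name__ == "__main__":
        survey()

The division of labor is the point: the computer supplies tireless repetition, and the human supplies the noticing.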

I have seen bad manual tests be automated. This is depressingly common, in my experience. Just let me suggest some corollaries to Rule #1:

Rule #1B: If you can truly automate a manual test, it couldn’t have been a good manual test.

Rule #1C: If you have a great automated test, it’s not the same as the manual test that you believe you were automating.

My fellow sugar blobs, reclaim your heritage and rejoice in your nature. You can conceive of questions; ask them. You are wonderfully distractible creatures; let yourselves be distracted by unexpected bugs. Your fingers are fumbly; press the wrong keys once in a while. Your minds have the capacity to notice hundreds of patterns at once; turn the many eyes of your minds toward the computer screen and evaluate what you see.

My Commenting Policy

Here is my policy for accepting comments that you make on this blog:

1. I moderate all comments. I accept comments for one or more of the following reasons:

– I value a dialectic approach to learning. So I appreciate critical comments, and I will respond to them. If I don’t write a specific response, that means my response is “Hmm. Interesting point.”

– I value an incisive mind. I get excited when someone is not merely critical, but engages an argument I make and skillfully picks it apart. I definitely respond to those comments.

– I consider comments to be a great way to help me communicate better. In other words, by commenting on my blog, you help me see my own message more clearly. Sometimes I accept a comment just because it gives me an invitation to riff or rant on something that I care about.

– A non-critical, supportive comment is cool, too. I prefer comments that add some new evidence or analysis to the discussion.

– It is okay to challenge my competence or ethics, as long as you offer some evidence or argument to back up your challenge.

2. I will probably not approve a comment that merely insults me or dismisses my arguments without engaging them. I may make an exception if I feel like ridiculing you, but on my better days I feel that publicly humiliating a crazy person isn’t the best strategy for making the world a better place.

3. If I don’t publish your comment, feel free to ask me why. I promise to explain.

4. I will not edit or redact a comment that you submit unless I have your permission, with the possible exception of fixing an obvious typo. I may interpolate my replies, however. If you don’t like that, you can email me privately to complain, or you can post your comments about my stuff on your own blog, and then you’ll have total control.

5. If you want to comment on a reply I made to one of your comments, consider replying to me privately, so we can have the whole conversation. Then when you are ready to make your follow-up comment, I’m more likely to approve it.

6. By publishing your comment, I am implicitly endorsing it as potentially useful to the audience of this blog.

7. If you want me to remove or modify an earlier comment of yours, I will do so.

8. You retain copyright over your comments.

“Intuition” and “Common Sense” Considered Harmful

Sometimes, you can improve your thinking just by avoiding certain terms. I stopped using “best practice” years ago. When I am tempted to use the term in a serious discussion of methodology, I am forced to use an alternative, and that alternative is always superior.

It’s just like giving up the use of the goto statement. I feel free to use it when I program, but I find that I almost always have a better option available.

Two other terms I have stopped using are “intuition” and “common sense.” Here’s why:

I understand “intuition” to mean the mysterious source of ideas that have no other apparent source. When I say that I used my intuition to solve a problem, I feel like it’s code for saying that I don’t have any idea how I solved the problem. To ascribe something to intuition, however, gives the impression of explaining it, even though nothing has been explained. What we call intuition is exactly the same thing we used to call divine inspiration. The Gods gave me that idea!

I understand “common sense” to be a skill or skill set (and to some extent knowledge) that we assume everyone has, and therefore everyone can simply employ it to solve the problem at hand. If everyone can solve the problem, then there’s no reason to worry about the problem or the solution. To invoke common sense is to banish both problem and solution to obscurity.

In both these cases, the terms are often used as opiates, to dull or stall discussion. Strangely enough, I hear the terms invoked most often in the midst of arguments where the supposedly commonsense or intuitive issue is, in fact, the subject of the dispute: the very existence of the discussion proves that at least one person doesn’t share that sense in common or share that intuition. In situations like that, invoking common sense or intuition is just another way of saying “if you disagree with me, you must be crazy.”

That leads to this subtitle heuristic: “Intuition” and “Common Sense” really mean “I have no idea what’s going on, or at any rate, I won’t respond to your questions about it.”

(I’ve come to a point where I prefer to admit that I don’t have a clue, in those situations where I really don’t have a clue. I feel better being open about that. I feel dirty when I dish out explanatives rather than come to grips with a subject. Besides, I risk being exposed as a fool. Few things feel worse to me than being exposed. My favorite defense against exposure is to avoid enclosing anything important to begin with.)

Fortunately, I’ve found that I do have some idea what’s going on. I usually can cite a heuristic or a pattern that offers some insight. And I’m getting better with practice.

Should Developers Test the Product First?

When a programmer builds a product, should he release it to the testers right away? Or should he test it himself to make sure that it is free of obvious bugs?

Many testers would advise the programmer to test the product himself, first. I have a different answer. My answer is: send me the product the moment it exists. I want to avoid creating barriers between testing and programming. I worry that anything that may cause the programmers to avoid working with me is toxic to rapid, excellent testing.

Of course, it’s possible for the programmer to test the product without delaying its delivery to the testers. For instance, a good set of automated unit tests as part of the build process would make the whole issue moot. Also, I wouldn’t mind if the programmer tested the product in parallel with me, if he wants to. But I don’t demand either of those things. They are a lot of work.
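
For illustration only, here is a minimal sketch of what I mean by unit tests that run with the build. It uses Python’s unittest module, and the Cart class is an invented stand-in for real product code; the real thing would live in the product and be imported by the test. Run as part of the build (for example, with python -m unittest), checks like these catch the most obvious breakage before a tester ever sees the build.

    # A sketch of programmer-owned unit tests that run with the build.
    import unittest

    class Cart:
        # Hypothetical stand-in for the real production code.
        def __init__(self):
            self.items = []

        def add_item(self, name, quantity=1):
            if quantity < 1:
                raise ValueError("quantity must be at least 1")
            self.items.append((name, quantity))

    class CartSmokeTest(unittest.TestCase):
        def test_added_item_appears_in_cart(self):
            cart = Cart()
            cart.add_item("widget", 2)
            self.assertEqual(cart.items, [("widget", 2)])

        def test_rejects_nonpositive_quantity(self):
            with self.assertRaises(ValueError):
                Cart().add_item("widget", 0)

    if __name__ == "__main__":
        unittest.main()

These are not the tests I would perform; they just keep obviously broken builds from wasting anyone’s time.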

As a tester I understand that I am providing a service to a customer. One of my customers is the programmer. I try to present a customer service interface that makes the programmers happy I’m on the project.

I didn’t always feel this way. I came to this attitude after experiencing a few projects where I drew sharp lines in sand, made lots of demands, then discovered how difficult it is to do great testing without the enthusiastic cooperation of the people who create the product.

It wasn’t just malicious behavior, though. Some programmers, with the best of intentions, were delaying my test process by trying to test the product themselves, and fix every bug, before I even got my first look at it (like those people who hire house cleaners, and then clean their own houses before the professionals arrive).

Sometimes a product is so buggy that I can’t make much progress testing it. Even then, I want to have it. Every look I get at it helps me get better ideas for testing it, later on.

Sometimes the programmer already knows about the bugs that I find. Even then, I want to have it. I just make a deal with the programmers that I will report bugs informally until we reach an agreed-upon milestone. Any bugs not fixed by that time get formally reported and tracked.

Sometimes the product is completely inoperable. Even then, I want to have it. Just by looking at its files and structures I might begin to get better ideas for testing it.

My basic heuristic is: if it exists, I want to test it. (The only exception is if I have something more important to do.)

My colleague Doug Hoffman has raised a concern about what management expects from testing. The earlier you get a product, the less likely you are to make visible progress testing it– then testing may be blamed for the apparently slow progress. Yes, that is a concern, but that’s a question of managing expectations. Hence, I manage them.

So, send me your huddled masses of code, yearning to be tested. I’ll take it from there.