How Challenging Each Other Helps the Craft

Regular readers know that I’m dissatisfied with the state of the testing industry. It’s a shambles, and will continue to be as long as middle managers in big companies continue to be fat juicy targets for scam-artists (large tool vendors, consulting firms, and certain “professional” organizations) and well-meaning cargo cultists (such as those who think learning testing is the same as memorizing definitions of words and filling in templates).

What I can do about it is to develop my personal excellence, and associate myself with others who wish to do likewise. Someday, perhaps we will attain a critical mass. Perhaps the studious will inherit the Earth.

In that spirit, I’m constantly looking for colleagues, and bouncing ideas off of them to make us all better. I challenge people, and to me this is a virtue. It’s how I separate those who will help the craft from those who probably won’t. Sometimes people don’t react well to my challenges. Sometimes that’s because they are bad people (in my estimation); sometimes it’s because they are good people having a bad day; sometimes it’s because I’m having a bad day; or maybe it’s because I’m a bad person (in their estimation).

Nevertheless, this is a big part of what I do, and I will continue to do it. You have been warned. Also, you have been cheerfully invited to participate.

An Example Challenge and What Came of It

Lanette Creamer, unlike me, is not a werewolf (though she describes me by the politically correct term “hairy”). If you read her tweets and her blog then you also know she’s, uh, what’s the opposite of brutal? Anyway, I bet she owns at least one calendar featuring pictures of kittens in hilarious costumes.

I met Lanette a few years ago but as I do with most people, I forgot about her (fun fact: I suffer from a mild case of associative prosopagnosia which, for instance, is why I didn’t recognize my own wife consistently until a few months after we were married). Then I met Lanette again at PNSQC last year, where she made an impression on me as someone easy to talk to. I checked out her blog and liked what I saw.

2009-11-12 11:51:49 jamesmarcusbach: @lanettecream I’ll go look at your blog.

2009-11-12 12:16:23 jamesmarcusbach: Another must read blog for testers. This one by Lanette Creamer (@lanettecream). It’s the attack of the tester ladies.

2009-11-12 12:19:22 jamesmarcusbach: I think I encouraged @lanettecream to blog a couple of years ago, and then forgot to follow-up. Guess that worked out.

One thing I liked is that she identified herself, on her blog, as being a member of the Context-Driven School of testing. It means that I can reasonably expect such a person to be self-critical and to accept a challenge from me– a leader in that school.

A couple of days later I happened to see a paper Lanette wrote about “reducing test case bloat.” It was sitting on the desk of Matt Osborn while I was visiting him at Microsoft. I flipped through it and found a definition of “test case” that bugged me.

“Clinically defined a test case is an input and an expected result. For my purposes it doesn’t matter if a test case is automated or manual so long as it is a planned test. For the purpose of reducing test case bloat, I’d go further and say that it is a test you plan to execute a minimum of once in the product lifecycle.”

Lanette was referencing the IEEE with her definition. I hate the IEEE definition of test case. If I ever meet the guy who wrote it, I will bite him on the nose. It’s a narrow-minded supercilious idea of test cases, straight from Colonel Blimp. I prefer a broad definition that encompasses the actual field use of the term. For instance: “An instance or variation of a test or test idea.” By my definition, you can point to a list of test ideas, in bullet form, and call them test cases, just like real people at real companies already do, and not be committing a crime against terminology. Also, my definition does not attempt to enforce a specific organization’s notion of what a test must look like. It has to have inputs or it’s not a test? It has to have a specific planned expected result? Not when I test, buddy.

I also didn’t like that Lanette presented it as if it were a universally accepted definition. That’s an appeal to authority, which we in the Context-Driven community do our best to avoid.

From Twitter:

2009-11-14 19:40:16 jamesmarcusbach: @lanettecream Yes, listening to the IEEE is fine if you’re not a true student of testing. But people like us ARE the IEEE (or better).

2009-11-14 19:41:10 jamesmarcusbach: @lanettecream I followed the IEEE, too, for a few years, and then realized that whoever came up with those defs wasn’t very thoughtful.

2009-11-14 19:41:44 jamesmarcusbach: @lanettecream Welcome to software testing leadership, where there is no appeal to authority allowed.

2009-11-14 19:43:22 jamesmarcusbach: @lanettecream The reason I bring it up is that I’ve generalized it myself, and I’m curious if your analysis will reveal something new.

2009-11-14 19:45:18 jamesmarcusbach: @lanettecream One way to frame the question: What exactly do you mean by “input?” What exactly do you mean by “expectation?”

2009-11-14 19:46:50 jamesmarcusbach: @lanettecream I think it’s shallow. I think you can do a lot better. Anyway, I’d be interested to see your analysis of that definition.

2009-11-14 19:48:34 jamesmarcusbach: @lanettecream Another way to say it: maybe that definition is okay– but what does it MEAN? Do you know? Have you really thought it through?

2009-11-14 19:50:49 jamesmarcusbach: @lanettecream IEEE is not a person we can cross-examine. It doesn’t think anything. But for the record, it’s totally wrong about planning!

2009-11-14 19:51:22 jamesmarcusbach: @lanettecream That planning stuff is just propaganda. Ask yourself “what does planning MEAN?”

2009-11-14 19:51:57 jamesmarcusbach: @lanettecream They throw around a lot of words without really thinking about them, it seems to me.

2009-11-14 19:52:41 jamesmarcusbach: @lanettecream I can tell you my opinions of all this. But I’d really love to see you blog about it, first. I’m following your blog now.

[sadly, I cannot obtain Lanette’s side of the conversation because Twitter sucks in that particular way…]

I did worry a little bit that Lanette would freak out and think I was attacking her. I’m especially nervous when engaging women this way, since I have more concern about being seen as a big bully. (A man might see me that way, too, but as a fellow man I would have little sympathy. He just has to learn to cope.)

Dialog with Michael Bolton

While waiting to see what Lanette would come up with, I decided to transpect with Michael Bolton on the same topic in the hope that our good natured arguing would help Lanette feel better about the challenge.

James Bach: hey, to help Lanette, could we transpect through IM?
Michael Bolton: Heh.
Michael Bolton: Sure.
James Bach: then I can show her the transcript
Michael Bolton: If you like.  Pray, proceed.

Michael subsequently edited and published the transcript of that conversation.

During that interaction I came up with a thought experiment with which to question the Lanette/IEEE view of test cases. Can you test a clock when you can’t give it input?

The Clock Problem

I have since used this scenario to help explain to my students what I mean by a test.

Lanette’s Response

To my surprise, she wrote two entries. The first one worried me: What Did I Say a Test Case Was?

I went into damage control mode on Twitter…

2009-11-15 03:27:16 jamesmarcusbach: @lanettecream Your post seems a bit defensive. I wasn’t attacking you, I was trying to find out what you meant by what you said.

2009-11-15 03:30:44 jamesmarcusbach: @lanettecream I want to help real testers, too, and when I seek clarity in myself and other testers, it’s because that helps us avoid waste.

2009-11-15 05:09:55 jamesmarcusbach: @lanettecream I feel better hearing that. Questioning you is, from me, a sign of respect. But I don’t mean to push too hard.

2009-11-15 05:22:46 jamesmarcusbach: @lanettecream From your blog, I can tell you are talented. I’m eager to help your talent blossom. One requirement for that is confidence.

2009-11-15 05:25:55 jamesmarcusbach: @lanettecream One source of confidence is to practice working through ideas with your colleagues.

Lanette tried again. Her second post embraced the spirit of my challenge: What is a Test?

Notice how her second post is in the classic form of an exploratory essay. That’s perfect! I wasn’t asking for an ultimate argument and perfect analysis. I was looking for inquiry, insight, and self-examination.

Why should anyone put up with my challenges?

Well, how about career advancement? This can happen in a couple of ways. First, by publicly accepting and responding to my challenge, she improved her reputation for all to see. She shows that she is someone to be taken seriously, because she takes her own learning seriously. Second, she gained the first level of my gratitude and respect, and these things can be redeemed for professional favors of various kinds. When you are part of a community in good standing, you may holler for support and your fellow citizens will turn out in force to help you. When Lanette puts out a question on Twitter, lots of people will try to answer. It’s a great feeling to know you aren’t alone in the industry.

Plus Lanette was later interviewed by uTest. That was partly from how she impressed some of us on Twitter. I also profiled her in my talk on “buccaneer-testers” at Star East.

I hear that Lanette and my brother are collaborating on something. I’m eager to see what comes out of that.

Another reason people should put up with challenges is that it makes the industry better. We practice our rhetoric and rapid learning. We grow. I’ve said it many times: the major reason all the terrible misconceptions about testing persist after all these years is that there is a worldwide conspiracy among testing writers and consultants not to debate with each other. Live and let live. Don’t rock the boat that feeds you, etc. Yech.

Finally, there’s personal pride. You feel good about yourself when you can take the heat.

When People Run Away

I don’t mind when people say no to a challenge, unless they are claiming to be expert testers. When a consultant or writer in the field won’t engage me, then I have to dismiss him. I can’t take him seriously. Just as I would not expect to be taken seriously if I held myself above the duty of defending my ideas in public. There’s a pretty substantial list of well known people who are professionally non-existent to me, but I don’t know how else to deal with them. We have to have intellectual standards or we can’t get anywhere.

(I know of a couple of exceptions to that rule, both women, whom I won’t name here. They are people who have strong aversions to debate (at least to debating me) and yet have great ideas and have contributed lots of good to the field. I can never be a close colleague of people like that, but I’m glad that they’re out there.)

Remembering Anna Allison

All this reminds me of Anna Allison. She was a rising star in 2001. I had dinner with her after she approached me at a conference and begged for a conversation (anyone can talk to me, at any time, if they give me food). At dinner, she mentioned that she was a bug metrics expert. I rolled my eyes and drew a bug metrics graph, daring her to tell me what it meant. What followed was a tour de force of questioning and analysis. She uncovered every trap that I had put into the graph. I told her she should write an article about our conversation and she did!

Tragically, she was on one of the planes that went into the Twin Towers on 9/11, on her way to a consulting gig in LA. This affected me more than I expected it to, because while I didn’t know her well, personally, professionally she was one of the few people I’ve known for whom debate was great fun. The Context-Driven community lost a happy tigress in her. We need more leaders like that. We really couldn’t spare her when she left us, and no one like her has yet stepped up: a non-threatening personality who is a role model for debate. I think that may be why I have high hopes for Lanette. (Also for Meeta Prakash, BTW.)

What happened yesterday?

Yesterday I issued a challenge to new blogger Michael Alexander. He responded promptly and in admirable fashion.

Lanette subsequently did a video blog about why she reacted to my challenge so constructively.

These events inspired me to explain all this. And so, I call upon all testers to challenge me, challenge yourselves, and challenge each other. Let’s blow out the cobwebs. Let’s be testers, not followers.

A Six-fold Example from Pradeep Soundararajan

Pradeep blogged this, today.

I need to amplify it because it provides a nice example of at least six useful and important patterns all in one post. This is why I believe Pradeep is one of the leading Indian testers.

Practical advice: “Ask for testability”

His story is all about asking for testability and all the good things that can come from that. It’s rare to see a good example present so vividly. I wanted more details, but the details he gave were enough to carry the point and fire the imagination.

Practical advice: “Try video test scripting”

I have never heard of using videos for scripted testing. Why didn’t I think of that?

Testing as a social process

Notice how many people Pradeep mentions in his post. Notice the conversations, the web of relationships. This aspect of testing is profoundly important, and it’s one at which I find Pradeep excels. It’s kind of like x-ray vision– the ability to see past the objects of the project to the true bones of it, which is how people think of, communicate with, and influence each other. Pradeep’s story is a little bit technical, but it’s mostly social, as I read it.

Experience report

Pradeep’s post is an example of an experience report. Not many of them around. It’s like sighting a rare orchid. He published it with the support of his client, otherwise we’d never have seen it. That’s why there can never be an accurate or profound history written about the craft of testing: almost everything is kept secret. The same dynamic helps preserve bad practice in testing, because that bad practice thrives in the darkness just as roaches do.

Sapient tester blogging

I have referred in the past to a phenomenon I call “sapient tester blogs.” These are introspective, self-critical, exploratory essays written by testers who see testing as a complex cognitive activity and seek to expand and develop their thinking. It’s particularly exciting to see that happening in India, which brings me to the final point…

Leadership in Indian testing

There’s not a lot of good leadership in Indian testing. Someday there will be. It’s beginning to happen. Pradeep’s post is an example of what that looks like.

There must be more than a hundred thousand testers in India. (I wonder if some agency keeps statistics on that?) I would expect to see at least a hundred great tester blogs from India, not six!