CC to Everyone

I sent this to someone who’s angry with me due to some professional matter we debated. A colleague thought it would be worth showing you, too. So, for whatever it’s worth:

I will say this. I don’t want anyone to feel bad about me, or about my behavior, or about themselves. I can live with that, but I don’t want it.

So, if there is something simple I can do to help people feel better, and it does not require me to tell a lie, then I am willing to do so.

I want people to excel at their craft and be happy. That’s actually what is motivating me, underneath all my arguing.

My Stockholm Syndrome

Stockholm, the city where René Descartes spent his last days, and which now hands out Nobel prizes, is also becoming a capital of context-driven testing. The cool kid of the North. (Oh, why can’t your brothers Germany and the Netherlands be more like you?)

This past weekend I shared a room with some of the leading context-driven and anti-ISTQB testers in Sweden. This was the Swedish Workshop on Exploratory Testing, the second peer conference I’ve attended in Sweden.

The Swedish testing community had no definite form or presence that I ever heard of, before Tobbe Ryber and Anders Claesson came to my first class in Stockholm back in– when was it? 2005? 2006?– and represented their version of testing leadership.

Tobbe went on to write a respectable book on test design. Anders went on a journey of self-discovery that led him away from and back to testing, to return like some Gandalf the Grey-turned-White as an exploratory testing coach.

Michael Albrecht and Henrik Andersson contacted me a few years ago and became regular correspondents in the middle and south of Sweden, respectively. Each of them is bold and confident in his craft, and innovates in Session-Based Test Management.

Simon Morley and Christin Wiedemann took my class only last year, but they earned their way to the conference all the same. Simon does his “Tester’s Headache” rollup blog and seems to have read EVERY book, and Christin is a physicist who discovered testing last year and is applying all that brainpower toward achieving her version of true sapient testing.

I actually flipped the bozo bit on Rikard Edgren, at one time. I unflipped it when I met him in person and discovered that what I thought was obstinacy was more a determination to think things through at his own pace and in his own way. He’s one of those guys who thinks he has to reinvent everything for himself. Yeah, I’m also like that.

Henrik Emilsson and Martin Jansson share a blog with Rikard. They are energetic testing minds. Somehow they seem like bounding sheepdogs to me, asking questions, raising issues, and generally herding testing ideas into neat pens.

Petter Mattson gave an experience report about introducing session-based test management into two different companies. I was pleased, although a little jealous, that Petter hired Michael Bolton instead of me to teach the Rapid Testing class. But Michael is very good at what he does. Damn him. He’s good.

I wanted to hear more from Johan Hoberg, Oscar Cosmo, and Johan Jonasson. But they did ask some questions. Next time we’ll make them give full experience reports.

Christin gave an excellent report of how she thawed out the testing practices at her company using the exploratory approach. Not bad for a newbie. But the award for learning the hard way has to go to young Ann Flismark. She stood up to give an experience report about SBTM that somehow turned into a request for “KPIs” (which apparently means nonsense metrics demanded by her management). Several of us made a fuss about how that’s not really an experience report. I made the biggest fuss. Well, perhaps “brutal attack on the whole idea” would be a more accurate way to say it. Ann was pretty rattled, and disappeared for a while. She was upset partly because she had a nice experience report planned (I’d seen her give it on stage at SAST) and had decided to change it at the last minute.

But that’s a peer conference for you. It’s the fastest way to gain or lose a reputation. You have to stand and face your peers. Ann will bounce back with new and better material. She’ll be all the better for having had to pass through the baptismal fire.

[Update: Oh I forgot… I also gave an experience report. I told the story of how I noticed and named the practice of Thread-Based Test Management. My goal was partly to help everyone in the room feel like a co-inventor of it.]

I’m in Estonia, now. My mission is to rally the testing masses here and get them excited about becoming true professionals (not fake ones, but thanks anyway Geoff Thompson!). Oliver Vilson is Estonia’s answer to Michael Albrecht. 25 years old, but such ambition and intelligence!

My advice to Oliver is: look to Sweden and see your future.

Ron Jeffries and Engineering for Adults

Ron Jeffries, one of the capital “A” Agile people, provides a great example of context-imperial talk in his critique of the context-driven approach to methodology:

Well, my dear little children, I’ve got bad news for you. It is your precious context that is holding you back. It is your C-level executives and high-level managers who can’t delegate real responsibility and authority to their people. It is your product people who are too busy to explain what really needs to be done. It is your facilities people who can’t make a workspace fit to work in. It is your programmers who won’t learn the techniques necessary to succeed. It is your managers and product owners who keep increasing pressure until any focus on quality is driven out of the project.

There is an absolute minimum of practice that must be in place in order for Scrum or XP or any form of Agile to succeed. There are many other elements of practice that will need to be put in place. And yes, the context will matter … and most commonly, if you really want to be successful, it is the context that has to change.

Wow, he even addresses us as children! That completes the picture nicely. (The context-imperial approach to process generally involves appeals to authority, or a presumption of authority.) This is why I’m proud to be a part of the small “a” agile community, which is not about bowing to priests, but rather each of us developing our own judgment about agility.

Context-imperial methodology means changing problems to fit your favorite solutions. We are all a bit context-imperial (for instance, I prefer not to work in an environment where I’m expected to dodge bullets). We are all a bit context-driven, too.

The interesting question is: when should we change the problem, and when should we try different solutions? For me, the starting point for an answer is skill. To develop skill is to develop both judgment about context variables and the ability to craft solutions for them.

It would help if Ron could explain the dynamics of a project, as he sees them. It would help if he offered experience reports. It does NOT help for him to ridicule the notion that competent practitioners ought to evaluate and respond to what’s there and what’s happening on a project.

When Ron says there is an “absolute minimum of practice” that must be in place for an agile project to succeed, I want to reply that I believe there is an absolute minimum of practice needed to have a competent opinion about things that are needed– and that in his post he does not achieve that minimum. I think part of that minimum is to understand what words like “practice” and “agile” and “success” can mean (recognizing they are malleable ideas). Part of it is to recognize that people can behave, and have behaved, in agile ways without any concept of agile or any ability to explain what they do.

My style of development and testing is highly agile. I am agile in that I am prepared to question and rethink anything. I change and develop my methods. I may learn from packaged ideas like Extreme Programming, but I never follow them. Following is for novices who are under active supervision. Instead, I craft methods on a project by project basis, and I encourage other people to do that, as well. I take responsibility for my choices. That’s engineering for adults like us.

The Gerrard School of Testing

Paul Gerrard believes there are irrefutable testing axioms. This is not surprising, since all axioms are by definition irrefutable. To call something an axiom is to say you will cover your ears and hum whenever someone calls that principle into question. An axiom is a fundamental assumption on which the rest of your reasoning will be based.

They are not universal axioms for our field. Instead they are articles of Paul’s philosophy. As such, I’m glad to see them. I wish more testing authors would put their cards on the table that way.

I think what Paul means is not that his axioms are irrefutable, but that they are necessary and sufficient as a basis for understanding what he considers to be good testing. In other words, they define his school of software testing. They are the result of many choices Paul has made that he could have made differently. For instance, he could have treated testing as an activity rather than speaking of tests as artifacts. He went with the artifact option, which is why one of his axioms speaks of test sequencing. I don’t think primarily in terms of test artifacts, so I don’t usually speak of sequencing tests. Instead, I speak of chartering test sessions and focusing test attention.

Sometimes people complain that declaring a school of testing fragments the craft. But I think the craft is already fragmented, and we should explore and understand the various philosophies that are out there. Paul’s proposed axioms seem a pretty fair representation of what I sometimes call the Chapel Hill School, since the Chapel Hill Symposium in 1972 was the organizing moment for many of those ideas, perhaps all of them. The book Program Test Methods, by Bill Hetzel, was the first book dedicated to testing. It came out of that symposium.

The Chapel Hill School is usually called “traditional testing”, but it’s important to understand that this tradition was not well established before 1972. Jerry Weinberg’s writings on testing, in his authoritative 1961 textbook on programming, presented a more flexible view. I think the Chapel Hill school has not achieved its vision; it was largely out of dissatisfaction with that vision that the Context-Driven school was created.

One of his axioms is “5. The Coverage Axiom: You must have a mechanism to define a target for the quantity of testing, measure progress towards that goal and assess the thoroughness in a quantifiable way.” This is not an axiom for me. I rarely quantify coverage. I think quantification that is not grounded in measurement theory is no better than using numerology or star signs to run your projects. I generally use narrative and qualitative assessment, instead.

For you context-driven hounds out there, practice your art by picking one of his axioms and showing how it is possible to have good testing, in some context, while rejecting that principle. Post your analysis as a comment to this blog, if you want.

In any social activity (as opposed to a mathematical or physical system), any attempt to say “this is what it must be” boils down to a question of values or definitions. The Context-Driven community declared our values with our seven principles. But we don’t call our principles irrefutable. We simply say here is one school of thought, and we like it better than any other, for the moment.

Can a Non-Sapient Test Exist?

My vote for the World’s Most Inquisitive Tester is Shrini Kulkarni. He asks: Do you believe that “non-sapient” tests exist? Or, for that matter, can any part of testing (seemingly a negligibly small portion of the entire universe of skilled human testing) be “non-sapient”?

A sapient process is any process that relies for its success on an appropriately skilled human. A sapient test would therefore be a test that relies for its success on an appropriately skilled human.

Is there such a thing as a non-sapient test? My answer is yes, depending on what you consider to be part of the test.

An appropriately skilled programmer could decide to write a clever bit of test code that automatically watches for a potentially dangerous condition and throws an exception if that condition occurs, at which point an appropriately skilled human would have to make some appropriate response. In that case, there are three interesting parts to this testing:

  • the test code itself (not sapient)
  • the test design process (sapient)
  • the process of reacting to the test result (sapient)

All three parts together would be called testing, and that would be sapient testing.

The test artifact (what a lot of people would call the test) is not sapient. However, the process of understanding it, knowing how to maintain it, and knowing when to remove it is sapient.

The process of running the test is sapient, because running it necessarily means doing the right thing if it fails (or if it passes and wasn’t supposed to pass).
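
To make that split concrete, here is a minimal sketch in Python. It is my illustration only: the exception name, the function, and the disk-space condition are hypothetical, not drawn from any real project.

    import shutil

    class DangerousConditionDetected(Exception):
        """Raised when the watched-for condition occurs; a skilled human must respond."""

    def check_free_disk_space(path="/", minimum_bytes=100 * 1024 * 1024):
        """Non-sapient test code: mechanically compares free disk space to a threshold.

        Choosing to watch disk space, picking the threshold, and deciding what
        to do when this raises are the sapient parts; none of them live here.
        """
        free = shutil.disk_usage(path).free
        if free < minimum_bytes:
            raise DangerousConditionDetected(
                f"Only {free} bytes free on {path!r}; investigate before continuing."
            )

Only the mechanical comparison is automated here; the design that produced it and the response to its exception remain human work.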

A lot of automation enthusiasts apparently think that the test code is the important part, and that test design and test execution management are but tiny little problems. I think the truth is that most automation enthusiasts just like making things go. They are not comfortable with testing sapiently, since probably no one has trained them to do that. They focus instead on what is, for them, the fun part of the problem, regardless of whether it’s the important part. What aids them in this delinquency is that their clients often have no clue what good testing is, anyway, so they are attracted to whatever is very visible, like pretty green bars and pie charts!

I experienced this first hand when I wrote a log file analyzer and a client simulator driver for a patent management system. During the year I managed testing on that project, this tool was the only thing that caused the CEO to come into the testing group and say “cool!” Yes, he liked the pretty graphs and Star Trekkie moving displays. While I was showing the system to my CEO, one of the testers who worked for me– a strictly sapient tester– watched quietly and then interjected a simple question: “How is this tool going to help us find any important problem we wouldn’t otherwise have found?” Being the clever and experienced test manager that I was at the time, I was able to concoct a plausible-sounding response, off the cuff. But in truth his question rocked me, because I had become so interested in my test tool that I had actually stopped asking myself what test strategy goal I was supposed to be serving with it.

Bless that tester for recalling me to my duty.

What is Test Automation?

There seems to be a lot of confusion about this.

Test automation is any use of tools to aid testing. Test automation has been around since DAY ONE of the computing industry. And never in that history has test automation been an “automatic way of doing what testers were doing before”, unless you ignore a lot of what testers actually do.

For the same reason, a space probe is not “an automatic way of doing what planetary scientists were doing before”, but rather a tool that extends the reach of what scientists can do. Test automation means extending the reach of testers.

Test automation is not at all new. What’s comparatively new is the idea of a tester. Long ago, in the late 1940s, dedicated testers were almost unknown. Programmers tested software. Throughout the sixties, papers about testing, such as the proceedings of the IFIP conferences, almost always assumed that programmers tested the software they built. Testing was often not distinguished from debugging. As larger and more complex systems arose, the idea of dedicated software testers came into vogue. In 1972, at Chapel Hill, the first conference on software testing was held, and the proceedings of that conference show that testing was beginning to be thought of as a discipline worthy of study apart from programming.

At that conference, I think they took a wrong turn. There was much hope and enthusiasm for the future of test automation. That hope has not panned out. Not for lack of trying. More for lack of understanding.

What they didn’t understand, and what many contemporary programmers and testers also don’t get, is that good testing is inherently a human process– not incidentally, not accidentally, but INHERENTLY. It’s highly social and psychological. The more complex software is, the more important it is that humans engage intellectually to identify and solve testing problems. But the Chapel Hill conference was dominated by men trained as programmers and electrical engineers, not people who professionally thought about thinking or who trained people to think.

(Who did, you ask? Jerry Weinberg. His 1965 Ph.D. thesis on problem solving is fabulous. He wrote The Psychology of Computer Programming in 1971, a number of papers about testing during the sixties, and a section on testing in his 1961 book, Computer Programming Fundamentals. He taught problem-solving classes during the 60’s too, culminating in his 1975 book An Introduction to General Systems Thinking. I regret that Jerry didn’t keynote at the Chapel Hill conference, but he will at the next CAST conference, in Toronto.)

The idea of a dedicated trained tester is newer than the idea of test automation, but unlike test automation, it’s an idea that hasn’t been tried on a large scale, because tester training is so terrible.

Pretending that testing is a simple technical process of making API calls doesn’t make the boogie beasts go away. They are still there, Microsoft. My wife still needs me to troubleshoot Microsoft Office, a product produced increasingly, I’m told, by programmers who work intensively on test tools because they never learned how to test. (My colleague Michael Bolton recently ran a testing class at Microsoft, though, so perhaps there is hope.)

Test automation cannot reproduce the thinking that testers do when they conceive of tests, control tests, modify tests, and observe and evaluate the product. Test automation cannot perform sapient testing. Therefore, automation of testing does NOT mean automation of the service provided by the software tester.

In summary, test automation means applying tools to testing. Test automation is ancient, and non-programming testers are a newer idea; but what the industry has not yet really tried, except on a small and local scale, is systematically training the minds of testers, instead of just calling them “test engineers” or “SDETs”, giving them tools they don’t know how to use, and hoping for the best.

(BTW, I’m a programmer. I was hand-coding machine language on my Apple II before I ever heard of things called “assemblers” that automated the process. I also ran the Borland Turbo Debugger test team on the Borland C++ project, in the early 90’s. Before that I ran a test tool development team at Apple Computer. When it comes to programmer testing, GUI test automation, and non-GUI test automation, I’ve been there and done that.

It is my experience of the profound limitations of test automation that makes me a bit impatient with new generations of testers and programmers who seem to think no one ever thought of test tools before they came along.)

Sapient Testing Rules

Hey, somebody at AST must have read my blog when I coined the term “sapient testing”, because they named their magazine after it.

I’m still waiting for people to pick up on my other coinage: mythomimetic, which is an adjective meaning “not informed by experience or wisdom, but rather by hearsay and wishful thinking.” I’ll use it in a sentence: “The speaker peppered his talk with mythomimetic clichés such as ‘you can’t control what you can’t measure’.”

Sapient testing is the antithesis of mythomimetic test automation.

Sapient Processes

Have you ever done something manually? Have you ever tried to automate it? Did you successfully automate what you were doing?

If you answered yes to any of these questions, then I’m afraid I’m being too vague– because at least three very different kinds of things are tangled together in a simple answer. In any activity done by a human, there is the human aspect (not practically mechanizable), a physical aspect involving translation or transformation of matter and energy (mechanizable in principle), and a problem-solving aspect (sometimes transformed by mechanization, sometimes not affected). Which of those get automated when we automate? Which must remain “manual”? What problems are solved and what new problems are created?

My business is software testing. I have heard many people say they are in my business, too. Sometimes, when these people talk about automating tests, I think they probably aren’t in my business, after all. They couldn’t be, because what I think I’m doing is very hard to automate in any meaningful way. So I wonder… what the heck are they automating?

Anyway, this confusion is why I am applying a new term to processes: sapience. Sapience is defined in the dictionary as “wisdom; sagacity.” I want to suggest a particular connotation. A sapient process is any process that relies on skilled humans.

A sapient process might gain from automating some or all of its purely material aspects, but replacing or displacing any human aspect with machinery would result in an impairment of that process. This is a matter of definition, not fact: I’m defining sapient processes as those we do not know how to automate without dumbing down the process. The automated version will be either slightly less intelligent or amazingly less intelligent.

The question a test automator should ask right away is whether testing is a sapient process. I think good testing is very sapient. That’s how I experience it and teach it. My brain is constantly turned on while testing. Show me someone whose brain is not engaged, and I’ll show you a poor tester or a badly designed test.

Is digging a hole a sapient process? It might be. Consider an archeological dig. There’s no simple algorithm or tool that will automatically excavate a valuable historical site while we go and watch TV.

The purpose of this terminology is to help make it clearer what kinds of processes we are referring to. From now on, I will try not to use the term “manual testing”, except in quotes. I will practice saying sapient testing. I believe that nearly all “manual testing” is better called sapient testing. I’m not carrying rocks out here; I’m penetrating the illusions surrounding a software product. Machines can’t do that.

To fully automate a sapient test activity would require us to find an alternate version of that activity which solved the same problems (or more) and that had no human element in it. If you think you know where that has been done, please tell me about it.