My War: Agency vs. Algorithm

In a recent Twitter conversation, yet again someone who should know better claimed that testing improves quality, and yet again Michael Bolton and I rose to speak against that notion. Our correspondent wants to reduce it to a simple matter of probability:

My answer to this question is to reject it. It rests on certain false premises:

  • That we sufficiently agree about what quality is.
  • That we sufficiently agree about what testing is.
  • That we agree on what is typical.
  • That we agree that discussing human choices in terms of brute causality is helpful in this context.

I’m not saying that questions like this are necessarily bad. I’m sure I have used a similar construction in some other argument. The problem here is that the writer of the Tweet arrived at this simplifying formula after dismissing, as irrelevant, my social and ethical objections to his entire line of reasoning.

I say that testing does not improve quality, first, because it is obviously true that learning facts about code does not change code. Second, and much more importantly, I say it because it is both practical and ethical for testers to respect the agency of the people who control the code. Those people may not fix the bugs I report. It is critical to my discipline as a tester to understand that. Otherwise, I risk losing my credibility and influence. I risk adopting an attitude that desensitizes me to the kinds of problems my clients need me to find.

Agency. It’s dawning on me that all of my projects are connected by this thread.

  • I am opposed to “best practices” because that phrase is just a ploy to avoid responsibility for decisions about how to work.
  • I am opposed to professional certification programs, such as the ISTQB, because they are just a ploy to profit from the fear and ignorance of testers and managers; insidious manipulation via snake oil.
  • I am opposed to “standards” that are crafted by consulting companies to justify expensive, useless services.
  • I don’t use the phrase “test automation” because that encourages thinking about testing as a set of mechanical actions instead of a set of choices, interpretations, and explorations.
  • I insist on distinguishing testing from mere fact checking. Checking can be done by machines that make no choices, but testing requires socially competent judgments and a whole raft of choices which must be made and to which testers must be accountable.
  • I am opposed to self-driving cars not because they are unsafe, but because they aim to change without deliberation or consent the social contract that humans have with each other about the uses of public roads and accountability for what happens on them.
  • Although I am loud and opinionated, I tell my students that my goal is to turn them into colleagues, not followers. People who parrot what I say are no use to me.
  • My son was born at home, because my wife wanted control over the entire birth process and was not satisfied that the hospital would respect that. So, I read a few books on midwifery, hired illegal midwives to help, and made plans in case of emergency. (To my surprise, I found research which showed that home births were safer for low-risk pregnancies.)
  • I could not tolerate school. I was assigned tasks without my consent.
  • I am opposed to forced schooling, because good education is a personal journey of self development. (My son dropped out after 6th grade.)
  • The biggest conflict of my marriage centered around money. I solved it (after a few years to get my pride under control) by giving my wife control of our finances and giving her 70% of the company I started. I got complete creative freedom and she has cheerfully been my assistant for 19 years and counting. Lesson: giving away one kind of power can give you back another kind of power.

Agency is the capacity of an actor to act; the ability to make choices. I have a lot of intellectual attitudes and arguments about testing. But my emotion and motivation about testing comes from my feelings about agency.

I am a tester because I want to set people free. I am a teacher because I want to set people free. I am a husband and father because I want to keep my family free.

Fight the algorithm.

Hiring Me: An Un-Marketing Message

Like a lot of independent consultants, I feel a bit icky about doing direct sales and marketing. I prefer the indirect approach– to speak and write and just wait until people email me with offers of work. But today I had an idea for a different kind of marketing message that I believe would appeal to exactly the sort of people who should be hiring me: a warning message.

Let me begin with a simple positive statement, and then I’ll show you what I mean.

Why You Might Want to Hire Me and For What Purposes

The main thing I do for money is to teach and coach software testers. By testers, I mean anyone whose responsibilities include software testing, during the moments when they are engaged in that task. I generally teach my three-day Rapid Software Testing class, which focuses on test design and analysis and has a number of short hands-on exercises. I also have Rapid Software Testing Applied, which has less material but much longer practical exercises, and Rapid Software Testing for Managers, which is oriented to leadership.

What I sometimes do for money is high impact test strategy consulting, which sometimes involves actual testing. I’ve consulted on testing financial systems and medical devices, for instance, in projects that lasted months. Usually I come in and help for a week or two, then leave and come back again later, providing ongoing support for full-time staff.

My favorite work– when I can get it– is high stress, high stakes, expert witness gigs. That means consulting and testifying on court cases. These projects may involve testing, but mostly a ton of reading (>50,000 pages of tech manuals on one project), analysis, narrative synthesis, data visualization, and persuasive writing. They are rare and wonderful projects. I’m suited for them because I have a passion for complexity, argument and evidence, and I love the feeling I get from defending truth.

Why you might want to hire me is that you are worried that you are wasting time and effort in testing, that your testers are bored or shallow in their work, or that too many important bugs are escaping into the field and causing you and your customers distress. I approach testing analytically, socially, and technologically. Or maybe you are in a lawsuit involving testing, quality, or patent infringement (which often requires testing to determine whether a product infringes) and you don’t want to lose the case.

That’s the positive statement. Now for the warning.

Beware of Hiring Me: Tigers Make Difficult Pets

In a big consulting company, management wants workers to be versatile, docile, and inexpensive. A worker like that can be plugged into any situation. In a group of versatile workers, anyone is pretty easily replaced, a fact which serves to encourage everyone to continue to be docile and inexpensive (“you need us more than we need you”). Versatility comes at the cost of depth of skill and experience, however. With increasing expertise, you increase problem-solving power while reducing versatility.

(What I mean by “reducing versatility” is that I, for instance, can do a zillion things. I can organize coffee for you. I can plan parties. I can clean your kitchens. I can be a project manager. I can design your website. I have many skills. But practically speaking, my special expertise in testing and teaching means that my clients will not pay me to do anything other than my specialty, because I need to work for the highest reasonable pay that I can get, and I don’t get that sort of pay when I’m doing a zillion things that anyone else could do just as easily as me. So, I’m not dissing versatility. I’m just speaking of cold economic reality.)

Expertise sounds like a really good thing. But the problem is tigers.

The “tiger cub” problem is that when tigers are cute and small, they might seem to be an appealing choice as a pet– but young tigers don’t stay small forever. They grow up and become powerful, inconvenient, dangerous creatures. The same is true of experts. This is why big consulting companies generally don’t hire experts and try not to encourage experts to grow as such. The last thing a company like Mindtree, or Cognizant, or Infosys wants is employees who might refuse to work on a project because the project is demanding that they do bad work.

I wrote about this struggle in my own career, here. And here is an article about Infosys experimenting with experts. I don’t know what the outcome was for that experiment. Maybe Infosys still has this team, but if they do, they must maintain an expensive habitat for those tigers. And the tigers may be thinking “why don’t we go independent and keep all the profits? The company needs us more than we need them!”

I am a difficult pet. I want to please my clients, of course. But I have a reputation to maintain. Ask around. As far as I know there is no one who claims to have seen me do work that I knew at the time was bad work. I don’t believe there is anyone who can point to any bad work that I have done. (Except possibly the revised IEEE 829 test documentation standard, which has my name on it but which I protested and repudiate.)  In any case, I go to pains to get it right, but my concern goes way beyond customer satisfaction.

Remember the difference between a drug dealer and a doctor. Customer satisfaction is of paramount importance to a drug dealer. A doctor has other priorities.

Beware of Hiring Me: I Maintain My Own Intellectual Property

I don’t have any trouble signing NDAs. I don’t want or need to share your unreleased product details or schedules with anyone else. The stories I tell that originate with specific clients contain no details that would distinguish them or harm them, unless I am specifically authorized to share such details.

Intellectual property is another matter. There comes a day for many experts when we realize that we are selling our intellectual capital at a huge discount. We are giving our employers innovations that may make them huge profits. Is enough of that profit coming back to us?

As an independent, I maintain my own intellectual property. Therefore, I cannot sign a contract that lets my clients take exclusive ownership of any of my ideas except in rare and special circumstances. I generally offer a non-exclusive license, instead. For this reason alone it may be hard for companies to hire me to do ordinary technical work, as opposed to teaching.

Beware of Hiring Me: My Bias is Toward Deep Quality, Which is Not Always Needed

It is a plain rational truth that many things in life don’t need to be very good in order to be good enough. I agree with that truth, and I even teach it as part of the risk-based testing curriculum that anchors the Rapid Software Testing methodology.

But… I love the processes of testing. I especially love deep testing, by which I mean testing that maximizes the chance of uncovering every important bug. This means that I am at a constant low-grade risk of putting more time and effort into testing something than is justified by the business context. I can get carried away by the pleasures of craftsmanship. This is why, when I’m actually doing testing, I prefer to work with someone like my brother Jon (a virtuoso of technical administration, currently at eBay) who keeps his eye on the big picture so that people like me can happily wrangle the details without constantly wondering “is this even worth doing?”

I come to every project thinking “probably there should be better, deeper testing, here.” I think this is a reasonable first position. But I teach my clients to have skepticism to counterbalance this bias. I believe that this creates a nice creative tension, but it must be managed. We need to keep talking about it.

I manage my overkill tendencies as an independent consultant partly by making a simple declaration to my clients: if I’m working by the hour, and I submit an invoice, and it includes work that I did that you don’t like, then just don’t pay the invoice. This gives me more freedom to work speculatively without forcing my clients to be concerned that I will spend 20 hours gold-plating a two hour task.

Beware of Hiring Me: I Learn By Arguing

My favorite method of learning is testing. And when I’m learning about what’s in your mind, testing takes the form of debate. If you want me to understand and trust you, then I need to argue with you. This is negotiable of course– but that negotiation ALSO takes the form of an argument that we will need to have.

I have a hard time trusting people who seem to trust me, unless I know I have earned their trust. Otherwise, I fear that their pleasant manner is only a temporary illusion, soon to be shattered in some dramatic way. I have a conviction that good working relationships must be earned through shared trials and tribulations, not through passive hope and casual politeness.

When I go through a difficult conversation and come out the other side with a resolution, and with the sense that the other people in the debate have gained emotional stability and power (even if we may have temporarily lost it during the messy part of the process), my sense of loyalty to my clients increases and I can better manage future stresses.

Getting older has changed this, too. I’ve been through so many relationship-building and losing events that the process is not quite as dramatic for me as it used to be, and I more easily move into a mode of protecting and supporting the needs of others. Still, I won’t deceive you, there’s going to be drama. You have to expect that from a tiger.


CC to Everyone

I sent this to someone who’s angry with me due to some professional matter we debated. A colleague thought it would be worth showing you, too. So, for whatever it’s worth:

I will say this. I don’t want anyone to feel bad about me, or about my behavior, or about themselves. I can live with that, but I don’t want it.

So, if there is something simple I can do to help people feel better, and it does not require me to tell a lie, then I am willing to do so.

I want people to excel at their craft and be happy. That’s actually what is motivating me, underneath all my arguing.

My Stockholm Syndrome

Stockholm, the city where Rene Descartes spent his last days, and which now hands out Nobel prizes, is also now becoming a capital of Context-Driven testing thinking. The cool kid of the North. (Oh, why can’t your brothers Germany and Netherlands be more like you?)

This past weekend I shared a room with some of the leading context-driven and anti-ISTQB testers in Sweden. This was the Swedish Workshop on Exploratory Testing, the second peer conference I’ve attended in Sweden.

The Swedish testing community had no definite form or presence that I ever heard of, before Tobbe Ryber and Anders Claesson came to my first class in Stockholm back in– when was it? 2005? 2006?– and represented their version of testing leadership.

Tobbe went on to write a respectable book on test design. Anders went on a journey of self-discovery that led him away from and back to testing, to return like some Gandalf the Grey-turned-White as an exploratory testing coach.

Michael Albrecht and Henrik Andersson contacted me a few years ago and became regular correspondents in the middle and south of Sweden, respectively. Each of them is bold and confident in his craft, and innovates in Session-Based Test Management.

Simon Morley and Christin Wiedemann took my class only last year, but they earned their way to the conference all the same. Simon does his “Tester’s Headache” rollup blog and seems to have read EVERY book, and Christin is a physicist who discovered testing last year and is applying all that brainpower toward achieving her version of true sapient testing.

I actually flipped the bozo bit on Rikard Edgren, at one time. I unflipped it when I met him in person and discovered that what I thought was obstinacy was more a determination to think things through at his own pace and in his own way. He’s one of those guys who thinks he has to reinvent everything for himself. Yeah, I’m also like that.

Henrik Emilsson and Martin Jansson share a blog with Rikard. They are energetic testing minds. Somehow they seem like bounding sheepdogs to me, asking questions, raising issues, and generally herding testing ideas into neat pens.

Petter Mattson gave an experience report about introducing session-based test management into two different companies. I was pleased, although a little jealous, that Petter hired Michael Bolton instead of me to teach the Rapid Testing class. But Michael is very good at what he does. Damn him. He’s good.

I wanted to hear more from Johan Hoberg, Oscar Cosmo, and Johan Jonasson. But they did ask some questions. Next time we’ll make them give full experience reports.

Christin gave an excellent report of how she thawed out the testing practices at her company using the exploratory approach. Not bad for a newbie. But the award for learning the hard way has to go to young Ann Flismark. She stood up to give an experience report about SBTM that somehow turned into a request for “KPIs” (which apparently means nonsense metrics demanded by her management). Several of us made a fuss about how that’s not really an experience report. I made the biggest fuss. Well, perhaps “brutal attack on the whole idea” would be a more accurate way to say it. Ann was pretty rattled, and disappeared for a while.  She was upset partly because she had a nice experience report planned (I’d seen her give it on stage at SAST) and decided to change it at the last minute.

But that’s a peer conference for you. It’s the fastest way to gain or lose a reputation. You have to stand and face your peers. Ann will bounce back with new and better material. She’ll be all the better for having had to pass through the baptismal fire.

[Update: Oh I forgot… I also gave an experience report. I told the story of how I noticed and named the practice of Thread-Based Test Management. My goal was partly to help everyone in the room feel like a co-inventor of it.]

I’m in Estonia, now. My mission is to rally the testing masses here and get them excited about becoming true professionals (not fake ones, but thanks anyway Geoff Thompson!). Oliver Vilson is Estonia’s answer to Michael Albrecht. 25 years old, but such ambition and intelligence!

My advice to Oliver is: look to Sweden and see your future.

Ron Jeffries and Engineering for Adults

Ron Jeffries, one of the capital “A” Agile people, provides a great example of context-imperial talk in his critique of the context-driven approach to methodology:

Well, my dear little children, I’ve got bad news for you. It is your precious context that is holding you back. It is your C-level Executives and high-level managers who can’t delegate real responsibility and authority to their people. It is your product people who are too busy to explain what really needs to be done. It is your facilities people who can’t make a workspace fit to work in. It is your programmers who won’t learn the techniques necessary to succeed. It is your managers and product owners who keep increasing pressure until any focus on quality is driven out of the project.

There is an absolute minimum of practice that must be in place in order for Scrum or XP or any form of Agile to succeed. There are many other elements of practice that will need to be put in place. And yes, the context will matter … and most commonly, if you really want to be successful, it is the context that has to change.

Wow, he even addresses us as children! That completes the picture nicely. (The context-imperial approach to process generally involves appeals to authority, or a presumption of authority.) This is why I’m proud to be a part of the small “a” agile community, which is not about bowing to priests, but rather each of us developing our own judgment about agility.

Context-imperial methodology means changing problems to fit your favorite solutions. We are all a bit context-imperial (for instance, I prefer not to work in an environment where I’m expected to dodge bullets). We are all a bit context-driven, too.

The interesting question is: when should we change the problem and when should we try different solutions? For me, the starting point for an answer is skill. To develop skill is to develop both judgment about context variables and the ability to craft solutions for them.

It would help if Ron could explain the dynamics of a project, as he sees them. It would help if he offered experience reports. It does NOT help for him to ridicule the notion that competent practitioners ought to evaluate and respond to what’s there and what’s happening on a project.

When Ron says there is an “absolute minimum of practice” that must be in place for an agile project to succeed, I want to reply that I believe there is an absolute minimum of practice needed to have a competent opinion about things that are needed– and that in his post he does not achieve that minimum. I think part of that minimum is to understand what words like “practice” and “agile” and “success” can mean (recognizing they are malleable ideas). Part of it is to recognize that people can and have behaved in agile ways without any concept of agile or ability to explain what they do.

My style of development and testing is highly agile. I am agile in that I am prepared to question and rethink anything. I change and develop my methods. I may learn from packaged ideas like Extreme Programming, but I never follow them. Following is for novices who are under active supervision. Instead, I craft methods on a project by project basis, and I encourage other people to do that, as well. I take responsibility for my choices. That’s engineering for adults like us.

The Gerrard School of Testing

Paul Gerrard believes there are irrefutable testing axioms. This is not surprising, since all axioms are by definition irrefutable. To call something an axiom is to say you will cover your ears and hum whenever someone calls that principle into question. An axiom is a fundamental assumption on which the rest of your reasoning will be based.

They are not universal axioms for our field. Instead they are articles of Paul’s philosophy. As such, I’m glad to see them. I wish more testing authors would put their cards on the table that way.

I think what Paul means is not that his axioms are irrefutable, but that they are necessary and sufficient as a basis for understanding what he considers to be good testing. In other words, they define his school of software testing. They are the result of many choices Paul has made that he could have made differently. For instance, he could have treated testing as an activity rather than speaking of tests as artifacts. He went with the artifact option, which is why one of his axioms speaks of test sequencing. I don’t think primarily in terms of test artifacts, so I don’t usually speak of sequencing tests; instead, I speak of chartering test sessions and focusing test attention.

Sometimes people complain that declaring a school of testing fragments the craft. But I think the craft is already fragmented, and we should explore and understand the various philosophies that are out there. Paul’s proposed axioms seem a pretty fair representation of what I sometimes call the Chapel Hill School, since the Chapel Hill Symposium in 1972 was the organizing moment for many of those ideas, perhaps all of them. The book Program Test Methods, by Bill Hetzel, was the first book dedicated to testing. It came out of that symposium.

The Chapel Hill School is usually called “traditional testing”, but it’s important to understand that this tradition was not well established before 1972. Jerry Weinberg’s writings on testing, in his authoritative 1961 textbook on programming, presented a more flexible view. I think the Chapel Hill school has not achieved its vision; it was largely out of dissatisfaction with that vision that the Context-Driven school was created.

One of his axioms is “5. The Coverage Axiom: You must have a mechanism to define a target for the quantity of testing, measure progress towards that goal and assess the thoroughness in a quantifiable way.” This is not an axiom for me. I rarely quantify coverage. I think quantification that is not grounded in measurement theory is no better than using numerology or star signs to run your projects. I generally use narrative and qualitative assessment, instead.

For you context-driven hounds out there, practice your art by picking one of his axioms and showing how it is possible to have good testing, in some context, while rejecting that principle. Post your analysis as a comment to this blog, if you want.

In any social activity (as opposed to a mathematical or physical system), any attempt to say “this is what it must be” boils down to a question of values or definitions. The Context-Driven community declared our values with our seven principles. But we don’t call our principles irrefutable. We simply say here is one school of thought, and we like it better than any other, for the moment.

Can a Non-Sapient Test Exist?

My vote for the World’s Most Inquisitive Tester is Shrini Kulkarni. He asks: Do you believe that “non sapient” tests exist or for that matter – any part (seems like a very negligibly small portion of the entire universe of skilled human testing) of testing be “non sapient”?

A sapient process is any process that relies for its success on an appropriately skilled human. A sapient test would therefore be a test that relies for its success on an appropriately skilled human.

Is there such a thing as a non-sapient test? My answer is yes, depending on what you consider to be part of the test.

An appropriately skilled programmer could decide to write a clever bit of test code that automatically watches for a potentially dangerous condition and throws an exception if that condition occurs, at which point an appropriately skilled human would have to make some appropriate response. In that case, there are three interesting parts to this testing:

  • the test code itself (not sapient)
  • the test design process (sapient)
  • the process of reacting to the test result (sapient)
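To make the distinction concrete, here is a minimal sketch of what the non-sapient part might look like. Every name and number here (check_memory_usage, MEMORY_LIMIT_BYTES, the threshold itself) is invented for illustration, not taken from any real project:

```python
# A non-sapient check: it watches for one anticipated dangerous
# condition and raises an exception, but it makes no judgments of
# its own. All names and the threshold here are hypothetical.

MEMORY_LIMIT_BYTES = 512 * 1024 * 1024  # threshold chosen, in advance, by a human


class DangerousConditionError(Exception):
    """Raised so that an appropriately skilled human can investigate."""


def check_memory_usage(observed_bytes: int) -> None:
    # The machine only compares numbers. Deciding whether this
    # threshold is the right one, and deciding what a failure
    # actually means, remains sapient work.
    if observed_bytes > MEMORY_LIMIT_BYTES:
        raise DangerousConditionError(
            f"memory use {observed_bytes} exceeds limit {MEMORY_LIMIT_BYTES}"
        )
```

Note that the check encodes a choice a human made earlier; once running, it can only mechanically re-apply that choice.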

All three parts together would be called testing, and that would be sapient testing.

The test artifact (what a lot of people would call the test) is not sapient. However, the process of understanding it and knowing how to maintain it and when to remove it is sapient.

The process of running the test is sapient, because running it necessarily means doing the right thing if it fails (or if it passes when it wasn’t supposed to pass).

A lot of automation enthusiasts apparently think that the test code is the important part, and that test design and test execution management are but tiny little problems. I think the truth is that most automation enthusiasts just like making things go. They are not comfortable with testing sapiently, since probably no one has trained them to do that. They focus instead on what is, for them, the fun part of the problem, regardless of whether it’s the important part. What aids them in this delinquency is that their clients often have no clue what good testing is, anyway, so they are attracted to whatever is very visible, like pretty green bars and pie charts!

I experienced this first hand when I wrote a log file analyzer and a client simulator driver for a patent management system. During the year I managed testing on that project, this tool was the only thing that caused the CEO to come into the testing group and say “cool!” Yes, he liked the pretty graphs and Star Trekkie moving displays. While I was showing the system to my CEO, one of the testers who worked for me– a strictly sapient tester– watched quietly and then interjected a simple question: “How is this tool going to help us find any important problem we wouldn’t otherwise have found?” Being the clever and experienced test manager that I was at the time, I was able to concoct a plausible-sounding response, off-the-cuff. But in truth his question rocked me, because I had become so interested in my test tool that I actually had stopped asking myself what test strategy goal I was supposed to be serving with it.

Bless that tester for recalling me to my duty.

What is Test Automation?

There seems to be a lot of confusion about this.

Test automation is any use of tools to aid testing. Test automation has been around since DAY ONE of the computing industry. And never in that history has test automation been an “automatic way of doing what testers were doing before”, unless you ignore a lot of what testers actually do.

For the same reason, a space probe is not “an automatic way of doing what planetary scientists were doing before”, but rather a tool that extends the reach of what scientists can do. Test automation means extending the reach of testers.

Test automation is not at all new. What’s comparatively new is the idea of a tester. Long ago, in the late 40’s, dedicated testers were almost unknown. Programmers tested software. Throughout the sixties, papers about testing, such as the proceedings of the IFIPS conferences, almost always assumed that programmers tested the software they built. Testing was often not distinguished from debugging. As larger and more complex systems arose, the idea of dedicated software testers came into vogue. In 1972, at Chapel Hill, the first conference on testing software was held, and the proceedings of that conference show that testing was beginning to be thought of as a discipline worthy of study apart from programming.

At that conference, I think they took a wrong turn. There was much hope and enthusiasm for the future of test automation. That hope has not panned out. Not for lack of trying. More for lack of understanding.

What they didn’t understand, and what many contemporary programmers and testers also don’t get, is that good testing is inherently a human process– not incidentally, not accidentally, but INHERENTLY. It’s highly social and psychological. The more complex software is, the more important it is that humans engage intellectually to identify and solve testing problems. But the Chapel Hill conference was dominated by men trained as programmers and electrical engineers, not people who professionally thought about thinking or who trained people to think.

(Who did, you ask? Jerry Weinberg. His 1965 Ph.D. thesis on problem solving is fabulous. He had written The Psychology of Computer Programming in 1970, a number of papers about testing during the sixties, including a section on testing in his 1961 book, Fundamentals of Computer Programming. He taught problem solving classes during the 60’s too, culminating in his 1974 book Introduction to General Systems Thinking. I regret that Jerry didn’t keynote at the Chapel Hill conference, but he will at the next CAST conference, in Toronto.)

The idea of a dedicated trained tester is newer than the idea of test automation, but unlike test automation, it’s an idea that hasn’t been tried on a large scale, because tester training is so terrible.

Pretending that testing is a simple technical process of making API calls doesn’t make the boogie beasts go away. They are still there, Microsoft. My wife still needs me to troubleshoot Microsoft Office, a product produced increasingly, I’m told, by programmers who work intensively on test tools because they never learned how to test. (My colleague Michael Bolton recently ran a testing class at Microsoft, though, so perhaps there is hope.)

Test automation cannot reproduce the thinking that testers do when they conceive of tests, control tests, modify tests, and observe and evaluate the product. Test automation cannot perform sapient testing. Therefore, automation of testing does NOT mean automation of the service provided by the software tester.

In summary, test automation means applying tools to testing. Test automation is ancient; non-programming testers are a newer idea; but what the industry has not yet really tried, except on a small and local scale, is systematically training the minds of testers, instead of just calling them “test engineers” or “SDETs”, handing them tools they don’t know how to use, and hoping for the best.

(BTW, I’m a programmer. I was hand-coding machine language on my Apple II before I ever heard of things called “assemblers” that automated the process. I also ran the Borland Turbo Debugger test team on the Borland C++ project, in the early 90’s. Before that I ran a test tool development team at Apple Computer. When it comes to programmer testing, GUI test automation, and non-GUI test automation, I’ve been there and done that.

It is my experience of the profound limitations of test automation that makes me a bit impatient with new generations of testers and programmers who seem to think no one ever thought of test tools before they came along.)

Sapient Testing Rules

Hey, somebody at AST must have read my blog when I coined the term “sapient testing”, because they named their magazine after it.

I’m still waiting for people to pick up on my other coinage: mythomimetic, which is an adjective meaning “not informed by experience or wisdom, but rather hearsay and wishful thinking.” I’ll use it in a sentence: “The speaker peppered his talk with mythomimetic cliches such as ‘you can’t control what you can’t measure’.”

Sapient testing is the antithesis of mythomimetic test automation.

Sapient Processes

Have you ever done something manually? Have you ever tried to automate it? Did you successfully automate what you were doing?

If you answered yes to any of these questions, then I’m afraid I’m being too vague, because at least three very different kinds of things are tangled together in a simple answer. In any activity done by a human, there is a human aspect (not practically mechanizable), a physical aspect involving translation or transformation of matter and energy (mechanizable in principle), and a problem-solving aspect (sometimes transformed by mechanization, sometimes not affected). Which of those get automated when we automate? Which must remain “manual”? What problems are solved and what new problems are created?

My business is software testing. I have heard many people say they are in my business, too. Sometimes, when these people talk about automating tests, I think they probably aren’t in my business, after all. They couldn’t be, because what I think I’m doing is very hard to automate in any meaningful way. So I wonder… what the heck are they automating?

Anyway, this confusion is why I am applying a new term to processes: sapience. Sapience is defined in the dictionary as “wisdom; sagacity.” I want to suggest a particular connotation. A sapient process is any process that relies on skilled humans.

A sapient process might gain from automating some or all of its purely material aspects, but any human aspect that is replaced or displaced by machinery would result in an impairment of that process. This is a matter of definition, not fact. I’m defining sapient processes as those we do not know how to automate without dumbing down the process. The result will be either slightly less intelligent or amazingly less intelligent.

The question a test automator should ask right away is whether testing is a sapient process. I think good testing is very sapient. That’s how I experience it and teach it. My brain is constantly turned on while testing. Show me someone whose brain is not engaged, and I’ll show you a poor tester or a badly designed test.

Is digging a hole a sapient process? It might be. Consider an archeological dig. There’s no simple algorithm or tool that will automatically excavate a valuable historical site while we go and watch TV.

The purpose of this terminology is to help make it clearer what kinds of processes we are referring to. From now on, I will try not to use the term “manual testing”, except in quotes. I will practice saying sapient testing. I believe that nearly all “manual testing” is better called sapient. I’m not carrying rocks out here; I’m penetrating the illusions surrounding a software product. Machines can’t do that.

To fully automate a sapient test activity would require us to find an alternate version of that activity which solved the same problems (or more) and that had no human element in it. If you think you know where that has been done, please tell me about it.