How Not to Standardize Testing (ISO 29119)

Many years ago I took a management class. One of the exercises we did was on achieving consensus. My group did not reach an agreement because I wouldn’t lower my standards. I wanted to discuss the matter further, but the other guys grew tired of arguing with me and declared “consensus” over my objections. This befuddled me, at first. The whole point of the exercise was to reach a common decision, and we had failed, by definition, to do that– so why declare consensus at all? It’s like getting checkmated in chess and then declaring that, well, you still won the part of the game that you cared about… the part before the checkmate.

Later I realized this is not so bizarre. What they had effectively done is ostracize me from the team. They had changed the players in the game. The remaining team did come to consensus. In the years since, I have found that changing the boundaries or membership of a community is indeed an important pillar of consensus building. I have used this tactic many times to avoid unhelpful debate. It is one reason why I say that I’m a member of the Context-Driven School of Testing. My school does not represent all schools, and the other schools do not represent mine. Therefore, we don’t need consensus with them.

Then what about ISO 29119?

The ISO organization claims to have a new standard for software testing. But ISO 29119 is not a standard for testing. It cannot be a standard for testing.

A standard for testing would have to reflect the values and practices of the world community of testers. Yet the concerns of the Context-Driven School of thought, which has been in development for at least 15 years, have been ignored and our values shredded by this so-called standard and the process used to create it. They have done this by excluding us. There are two organizations explicitly devoted to Context-Driven values (AST and ISST), and our community holds several major conferences a year. Members of our community speak at all the major practitioner conferences, and our ideas are widely cited. Some of the most famous testers in the world, including me, are Context-Driven testers. We exist, and together with the Agilists, we are the source of nearly every new idea in testing in the last decade.

The reason they have excluded us is that they know we won’t agree to any simplistic standard based on templates or simple formulae. We know those things look pretty but they don’t help. They worry that if they don’t exclude us, they will never finish. They know we will challenge their evidence, and even their ethics and basic competence. This is why I say the craft is not ready for standards. It will be years before all the recognized experts in testing can come together and agree on anything substantial.

The people running the ISO effort know exactly who we are. I personally have had multiple public debates with Stuart Reid, on stage. He cannot pretend we don’t exist. He cannot pretend we are some sort of lunatic fringe. Tens of thousands of testers have watched my video lectures or bought my books. This is not a case where ISO can simply declare us to be outsiders.

The Burden of Proof

The Context-Driven community stands for excellence in testing. This is why we must reject this depraved attempt by ISO to grab power and assert control over our craft. Our craft is still an open marketplace of ideas, and it is full of strong debates. We must protect that marketplace and allow it to evolve. I want the fair chance to put my competitors out of business (or get them to change their business) with the high quality of my work. Context-Driven testing has been growing in strength and numbers over the years. This ISO effort, by contrast, appears to be a job protection program for people who can’t stomach debate. They can’t win the debate, so they want to remake the rules.

The burden of proof is not on me or any of us to show that the standard is wrong, nor is it our job to make it right. The burden is on those who claim that the craft can be standardized to study the craft and recognize and resolve the deep differences among us. Failing that, there can be no ethical or rational basis for standardization.

This blog post puts me on record as opposing the ISO 29119 standard. My colleagues and I constitute a determined, sustained, and principled opposition.

Stuart Reid’s Bizarre Plea

Stuart Reid is planning to do a talk on how we should use “evidence” in our debates about what works and doesn’t work in testing.

A funny thing about that is Stuart once spent 30 minutes trying to convince me that the number “35,000” was evidence of how great the ISEB certification is, as in “35,000 happy customers can’t be wrong.” Such a concept of “evidence” would not pass muster in a freshman course in logic and rhetoric. How does he know that the 35,000 people are happy? How does he know that they are qualified to judge the quality of the certification? How does he explain the easily checked fact that you can pick out any three ISEB or ISTQB certified testers, ask them if they think the certification has made them better testers or indicates that they are better testers, and at least two of them will do the equivalent of rolling their eyes and smirking? (Don’t believe me? I understand. So TRY IT, as I do on a regular basis in my classes.)

You might think Stuart is attempting a bold and classic rhetorical move: attempting to control the terms of the debate. The problem he has is that he will lose the debate even faster if he actually engages on the question of evidence. This is because there is plenty of evidence from other fields and the history of thought itself to justify the positions of the Context-Driven School of testing. We are winning the debates because we are better informed and better educated than the Factory Schoolers represented by Reid. For instance, Rikard Edgren (who says he’s not in the Context-Driven School, but looks like a duck to me) wrote about applying Grounded Theory to testing. I wonder if Stuart Reid has ever heard of Grounded Theory. He probably has, because I probably mentioned it at least once in the hours of debate that Stuart and I have had. He didn’t respond or react. My impression was that he wasn’t listening.

There’s something far more important than evidence that we need in our industry: engagement. People need to listen to and respond to the arguments and evidence that are already out there.

Here’s one sort of evidence I put in front of Stuart, in a debate. I claimed that my school of testing represents a different paradigm of thinking about testing than his does. After I gave him examples of specific words that we define differently and concepts that we arrange differently, it became clear that the deeper problem is that he thought I was pretending to believe things that I don’t believe, just to be difficult. He actually said that to me!

This is the last resort of the determined ideologue: poke your own eyes out so that you don’t risk seeing contrary evidence. Stuart’s case rests on pretending that no one else is making a case! His demand for evidence is meant to give the impression that the evidence is not already sitting in front of him, being ignored.

Cem Kaner, Michael Bolton, and I have been marshaling evidence, pointing out the lack of evidence against our ideas, and demonstrating our methods for many years. Next week it will be exactly 23 years since I first became a full-time software tester, and nearly 17 years since the first time I stood up at a conference and pointed out the absurdity of “traditional” testing methods.

BTW, here are some of the kinds of evidence I offer when challenged about my work:

  • The Sciences of the Artificial, by Herbert Simon (this establishes, based on a body of research for which he won the Nobel Prize in 1978, the heuristic nature of engineering)
  • Collaborative Discovery in a Scientific Domain, Takeshi Okada, Herbert Simon, 1997 (this is an experiment that observed the behaviors of scientists attempting to create and perform experiments together in an exploratory way)
  • The Processes of Scientific Discovery: The Strategy of Experimentation, Deepak Kulkarni, Herbert Simon, 1988 (this study analyzes the basic exploratory processes of science)

The first item here is a book, the next two are papers published in the journal Cognitive Science. See, if Stuart wants evidence, he has to look beyond the desert that is Computer Science. He needs to get serious about his scholarship. That will require him to find, in his heart, a passion to learn about testing.

The Drunken Gold Rush

This comes from an ISTQB advertisement they spammed me with today:

“To ensure the quality of any software system, testers and QA professionals must thoroughly test the product. But how do you know that these tests are effective? If your team is conducting ad hoc, informal tests with little guidance or planning, the quality of the end product can be severely jeopardized—negatively affecting your bottom line.”

I don’t like to say things like this, nor am I comfortable supporting people who do. It’s not that it’s untrue– it is not necessarily untrue. But it is the kind of statement that fans the flames of a certain sort of Factory School bigotry in our industry. “Oh, you can’t trust testing unless it is pre-planned, pre-packaged, pre-approved, formalized, etc.”

Notice they say nothing about skill. It’s all about methodology, here, not skill. This kind of setup suggests that the next statement will be about the importance of factory-like test methodology. But that’s not what happens.

“The best way to be certain that you are providing customers with quality software is to make sure your team of testers is certified.”

Friends, I’m aware of no one in the industry– not even my worst enemies, not even Rex Black or Stuart Reid– who would publicly assert this or defend it. In fact, in debates against those of us who think certification is consumer fraud, the most typical move is for certificationists to say that certification isn’t even about skill, but rather about basic knowledge. “It’s a start” they say. “It’s a foundation.” (I reply that it’s a bad start and a bad foundation. Much worse than what it tries to replace.)

But then they allow this sort of advertising to go out! Completely undercutting their innocent-sounding plea! And they wonder why I complain that they don’t have the best interests of the testing craft at heart.

Notice that the “ad hoc and unplanned” stuff doesn’t even logically connect to certification. In fact, wouldn’t a highly skilled tester be far more likely to succeed with an ad hoc testing regime? When Roger Federer plays ad hoc tennis, I bet he still wins.

I think the reasons they start talking about methodology and end up talking about certification are A) their potential customers don’t understand the difference between skill and method, B) method is more concrete than skill, thus easier to evoke, and C) they know that what they say doesn’t have to be true or even logical, as long as it evokes horror and promises hope.

Oh, but there’s more…

“By taking the Software Tester Certification course and earning an internationally recognized certification in software testing, your team will gain the expertise needed to handle your greatest testing challenges; earn credibility and recognition as competent quality assurance professionals; and provide greater value to your organization.”

It’s internationally recognized? By whom? Some people who don’t study testing and some people who study testing and financially benefit from certification. Okay, but it is also internationally ridiculed by serious testers of many nations who wish to raise themselves to a level of skill that can’t be obtained in just a couple of days of training.

I recently encountered Dot Graham, now semi-retired, who told me that it hurts her feelings when people like me suggest that certificationists are only in it for the money. Dot is a sweet person. I don’t want to hurt her feelings. But I point her to advertising like this and I challenge her to explain it in any other terms. If not greed, then what, Dot? Stupidity? Pride?

Dot doesn’t want to argue with me about this. Of course she doesn’t. Rex Black doesn’t want to argue, either. Naturally. What answer could there be? Lois Koslowski once told me that “big dogs” don’t need to debate (in fairness, Lois Koslowski claims not to be a tester. I agree that she showed no testing competence or knowledge in the conversation we had. I just mention her because she did claim to be in charge of the ASTQB organization. Yikes!) This is capitalism in its ugly form– harvesting the ignorance and fear of others. Debate has no place here.

Is there no one in that self-declared professional community who reviews the advertising and stands up for professional temperance and humility?

G2 Test Labs: Cry “Certification!”

A salesman from G2 Test Labs just called me. He said he was from India. He wanted to know if my testing company needed to partner with an offshore lab like his. I’m writing this now, while the memory of the conversation is fresh.

After he made his brief opening monologue, I asked him “I’m a testing company. Why are you calling me?”

“Maybe you want to have an offshore arm,” he said.

“Well that depends on the skills of your testers. How do you train your testers?” I asked.

“Oh… we don’t do any training. But our testers are certified by other organizations.”

“Which organizations certify your testers?”

“Uh… I will have to check on that and get back to you.”

“Yes, that’s important information. Are ALL your testers certified?”

“Probably… most of them are.”

“Sounds like you don’t know.”

“…”

“Hey, this will make a funny post. Check my blog in about an hour. Goodbye!”

Now, in fairness, the salesman sounded like he was about 22 years old. Perhaps they sent him to call me as part of some hazing ritual.

[Oh, I just remembered, in his opening statement, he mentioned that his company was ISO-9001 certified, too. Wow. That takes me back to 1992, when I was fighting ISO-9001 certification. That certification program turned out not to amount to anything, either.]

Pradeep Pulls The Tail of the ISTQB

Pradeep Soundararajan got threatened with lawyers when he criticized Testing Experience magazine for being under the thumb of the ISTQB (for those who don’t yet know, the ISTQB are the guys who want to prevent you from getting work as a tester unless you first pass their silly test. They also plagiarized my definition of exploratory testing, while subtly changing that definition to alter part of its fundamental meaning).

The editor of that magazine could have said “Look, we believe in the ISTQB. That’s just how it is.” Instead he hinted that he would sue Pradeep if he blogged his criticism. Pradeep blogged anyway.

A couple of authors of testing textbooks have threatened to sue me, in the past. I don’t know what they thought they were accomplishing by that. I just turned around and blogged about them. It’s not illegal to criticize bad work and the people who do it.

The ISTQB is not part of any community of software testers. They are a business that ignores the rest of the testing world while pursuing their own agenda to line their pockets and promote themselves with misleading advertisements. That’s my opinion, which I have reached through a variety of experiences and investigations as part of being in this craft. One of those experiences is the time that the ISTQB approached me to run their American operation. They spent 30 minutes telling me how great it would be to take advantage of the American market for certification, before they realized I thought it was a terrible idea. After that, I guess they decided that I’m not qualified to have an opinion, because they’ve never paid attention to me since.

I once read Rex Black’s own advertising (promoting ISTQB certification) as part of a keynote speech at the CAST conference– his exact words, mind you– and after reading it I explained why I thought it was misleading. Rex then demanded the return of the money he paid to sponsor the conference.

You might think, yeah, well, of course he should get his money back, until you remember that this was a conference dedicated to free investigation of testing ideas, and not a get-out-of-criticism-if-you-pay-a-fee show. CAST is a free speech zone.

I hope that testers will recognize these opportunists for what they are and begin to fight back. I’m glad that Pradeep is doing just that.

A View From Inside ISTQB/ISEB

Alan Richardson writes this commentary from inside one of the stupidest of the certification programs: the ISTQB (well, he says “ISEB”, but by all accounts, it’s being taken over by ISTQB stormtroopers).

Long ago I also tried to change a certification program from the inside. I also failed. Now I do my best to cultivate the community of people who rise above it. As Alan points out, rising above can be difficult, because of all the poor fools who’ve been duped into believing that an ISTQB tester certification actually means something important.

What such certification really means is that, in England, and several other countries, certain unscrupulous or plain ignorant consultants are able to hold the testing craft for ransom, and almost no one will call them to account. Some of the perpetrators know full well what they are doing, but many of them, I think, know so little about testing that they honestly don’t realize what harm they do to the industry.

— James

Confused Methodology Talk #1

This posting by Corey Goldberg illustrates an interesting and all too common kind of confusion people get into when discussing methods and practices. It’s worth pondering.

On SQAForums, someone stated:

“ISEB defines automated tested as useful only in mature testing environments and where functionality is not changing i.e. at regression testing.”

to which Corey replied:

“…and ISEB would be completely wrong on that point. web services testing should be fully automated, as there is no UI, just an API.”

Let’s analyze these statements. The first writer seems to be under the sway of ISEB, which immediately induces a heavy sigh in the pit of my soul.

(There are now thousands of people who might be called “certification zombies” lurching around in an ISEB or ISTQB-induced fog, trying to apply what they learned in a few days of memorizing to the complex reality of testing.)

When the first writer says that ISEB “defines” automation as useful only in a certain context, that’s a perfect example of the inability to separate context and method. To think clearly about methodology, you must be able to sift these things apart. Best practice thinking can’t help you do this, and in fact discourages you from trying.

I don’t know if ISEB actually defines or discusses test automation in that way, but if it does, I can tell you what ISEB is probably thinking.

(BTW, one of the big problems with certification programs is the depersonalization of convictions. I say “ISEB” when what I want to say is Dorothy Graham or one of those people who support and edit the ISEB syllabus. You can’t argue with a document. Only people can have a point of view. To argue with ISEB itself is to argue with an anonymous sock puppet. But that’s the way they want it. Certificationists quite purposefully create a bureaucratic buffer of paper between themselves and any dissenters. To pick someone whom I believe advocates the ISEB way, I will choose Dorothy Graham.)

If Dot advocates that belief, then she is probably thinking about GUI-level automation of some aspects of test execution; a set of detailed scripted actions programmed into a software agent to exercise a system under test. If so then it is indeed likely that modifying the system under test in certain ways will break the test automation. This often leads to a situation where you are constantly trying to fix the automation instead of enjoying the benefits of it. This is especially a problem when the testing is happening via a GUI, because little changes that don’t bother a human will instantly disable a script.
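To make that brittleness concrete, here is a tiny sketch (my invented example, not Dot’s, and not any real GUI library) of a scripted check that keys off the exact wording of a dialog. A human tester would shrug at a cosmetic rewording; the script cannot.

    # Sketch of a brittle GUI-style check. The dialog text and the fake
    # "GUI driver" below are invented purely to illustrate the point.

    def read_save_dialog(build):
        """Stand-in for a GUI driver that scrapes the text of a dialog box."""
        if build == "1.0":
            return "Do you want to save changes?"
        # Build 1.1 rewords the dialog; the underlying behavior is unchanged.
        return "Save your changes?"

    def scripted_check(build):
        """Pass only if the dialog text matches character for character."""
        expected = "Do you want to save changes?"
        actual = read_save_dialog(build)
        return "PASS" if actual == expected else "FAIL: " + repr(actual)

    print(scripted_check("1.0"))  # PASS
    print(scripted_check("1.1"))  # FAIL, though nothing a user cares about changed

Multiply that by a few hundred scripted steps and you get the maintenance treadmill described above.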

So, even though the first writer appears to be reading off the ISEB script, there is some validity to his claim, in some context.

Now look at Corey’s reply. Corey is not under the sway of ISEB, but I worry that he may be under the sway of a typical affliction common among programmers who talk about testing: the reification fallacy. This is the tendency to think of an abstraction or an emergent process as if it were a fixed concrete thing. Hence if a programmer sees me punch a few keys in the course of my testing, and writes a program that punches those same keys in the same order, he might announce that he has “automated the test”, as if the test were nothing more than a pattern of input and output. Certainly, it is possible to automate some aspects of testing, but the aspect of it that requires human reflection cannot be automated. In fact, it can’t even be precisely duplicated by another human. It is an emergent phenomenon.

(Some would say that I am splitting hairs too finely, and that imprecise duplication may be close enough. I agree that it may be close enough in certain contexts. What I caution against is taking the attitude that most of what is valuable about testing, most of the time, is easy to automate. When I have seen that attitude in practice, the resulting automation has generally been too expensive and too shallow. Rich, interesting, cost-effective test automation, in my experience, is a constructive partnership between human thinkers and their tools. I believe, based on my knowledge of Corey, that he actually is interacting constructively with his tools. But in this case, he’s not talking that way.)

What Corey can do is use tools to interact with a system under test. He uses his un-automatable human mind to program those tools to provide certain input and look for certain output. His tools will be able to reveal certain bugs. His tools, in conjunction with un-automatable human assistance during and after execution, and un-automatable human re-programming of the tests as needed, will reveal many more bugs.

The reification fallacy leads to certain absurdities when you consider different frames of reference. Corey points out that a web service has no “user interface”, and therefore is accessible only via a tool, and anything that is accessible only by a tool must therefore require “fully automated” testing. By that reasoning, we can say that all testing is always fully automated, because in all cases there is some kind of hardware or software that mediates our access to the object of our test. Therefore, the fact that I am using a keyboard to type this blog posting and a screen to view it must, by Corey’s logic, be fully automated writing! I wonder what will be written next by my magic keyboard?

From one frame of reference, a web service has no user interface. From another frame of reference we can say that it does have a user interface, just not a human interface– its user is another program. How we test such a thing is to write or employ a program that does have a human interface to manipulate the web service. We can operate this interface in batch mode: write a program to submit data, run it, review the results, and re-write the program as needed. Or we can operate the interface interactively: write a program to submit data, present results, then wait for us to type in a new query.
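To make the batch-mode idea concrete, here is a minimal sketch in Python using only the standard library. Everything specific in it is invented for illustration: the endpoint URL, the request fields, and the expected values are placeholders for whatever your web service actually accepts and whatever oracle you actually trust.

    # A minimal batch-mode driver for a hypothetical web service (sketch only).
    import json
    import urllib.request

    SERVICE_URL = "http://localhost:8080/api/lookup"  # hypothetical endpoint

    # Each case pairs an input payload with a deliberately shallow oracle.
    BATCH = [
        ({"query": "alpha"}, {"status": "ok"}),
        ({"query": ""},      {"status": "error"}),  # empty input should be rejected
    ]

    def call_service(payload):
        """Submit one JSON request and return the parsed JSON response."""
        data = json.dumps(payload).encode("utf-8")
        req = urllib.request.Request(
            SERVICE_URL, data=data, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode("utf-8"))

    def run_batch():
        """Run every case and flag mismatches for a human to review."""
        for payload, expected in BATCH:
            actual = call_service(payload)
            mismatches = {key: (want, actual.get(key))
                          for key, want in expected.items()
                          if actual.get(key) != want}
            verdict = "PASS" if not mismatches else "CHECK ME: %s" % mismatches
            print(payload, "->", verdict)

    if __name__ == "__main__":
        run_batch()

The same call_service function also supports the interactive mode described above: import it at a Python prompt, submit a query, look at what comes back, and let that shape the next query. In batch mode the human judgment happens after the run; in interactive mode it happens between requests.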

Corey and the first writer are not in a helpful dialog, because they are talking about different things. I would tell the first writer to treat ISEB as having no authority or wisdom, and to instead learn to reason for himself. The relevant reasoning here, I think, is to wonder what kind of tool we could find or write that would allow us to interact with the web service. At the same time, we need to consider how the web service interface might change. We might stick to highly interactive testing for a while, instead of investing in a batching system with a lot of automatic oracles, if we feel that the interface and functionality are changing too fast. On the other hand, one of the nice things about testing through an API is that it is often rather inexpensive to script sequences and batches and simple oracles, and consequently inexpensive to fix them when the system under test changes. I suspect that belief informed Corey’s response, although I wish he would make that belief more apparent to people who are used to thinking of testing as a human-driven process.

As a programmer, I am aware of the urge, sometimes, to say “I didn’t do it, my program did.” In testing this naturally turns into “I didn’t test that, the program I wrote to test that did.” The crucial difficulty with this way of speaking, when it comes to testing, is the way it obscures the many, many choices the programmer made while designing the program, as if the program itself made those choices, or as if there were no choices to be made. The thing is, I don’t care, for a regular program, how many other ways it could have been written, or how many other things it could have done. But these are vital concerns when the program is meant to test another program.

Francis Bacon’s New Organon

It would be an unsound fancy and self-contradictory to expect that things which have never yet been done can be done except by means which have never yet been tried.

This reminds me of that “definition of insanity” which is so often attributed to Einstein. But this comes from Francis Bacon, circa 1620, in his seminal work “The New Organon”.

Bacon has some other interesting quotes for testers…

The logic now in use serves rather to fix and give stability to the errors which have their foundation in commonly received notions than to help the search after truth. So it does more harm than good.

This is how I feel about the ISTQB syllabus.

There is no soundness in our notions, whether logical or physical. Substance, Quality, Action, Passion, Essence itself, are not sound notions; much less are Heavy, Light, Dense, Rare, Moist, Dry, Generation, Corruption, Attraction, Repulsion, Element, Matter, Form, and the like; but all are fantastical and ill defined.

This is how I feel about a lot of testing terminology.

It is idle to expect any great advancement in science from the superinducing and engrafting of new things upon old. We must begin anew from the very foundations, unless we would revolve forever in a circle with mean and contemptible progress.

I feel that way, too, about patterns such as the v-model, and most of what passed for test techniques in the 80’s.

Francis Bacon was proposing a great break with the stifling Aristotelianism of his day (the certification craze of the middle ages), and sought a new foundation for science. Bacon thereafter became a godfather of the enlightenment, helping to create the modern world.

We’re way beyond Bacon, now. Still, I’m attracted to his sentiment that what passed for good scientific work in his time was actually nothing but uncritical folklore. In our time, in our little field, we need a similar re-invention of the craft, a New Organon of testing.

How to Ask (and Not Ask) for Free Consulting

Strangers contact me on a regular basis, asking questions about testing. I don’t mind answering questions. In fact, until the day I get paid for answering questions, it will remain one of my favorite excuses for not working. But once in a while, someone will ask the wrong question in the wrong way. Watch how this guy (I’ll call him “Mr. W.”) asks for free consulting. These aren’t instant messages, mind you, but full emails:

Mr. W.:
It would be a great help if u can explain me how to perform testing of
a datawarehouse along with a test plan for same.

James:
I could tell you, but would you understand the answer? Do you have any training or experience as a tester?

Mr. W.:
James, I am a mercury certified tester so I think I can understand.

James:
You are a Mercury Certified tester? I don’t know what that is. Does that cover general testing skills, or just how to run Mercury tools?

Mr. W.:
Mercury is a brand name in testing providing various tools like winrunner, test director etc.Talking of general testing skills I am proficient in Module, integration ,system and UAT.

James:
I know what Mercury is. I didn’t know Mercury certified testers.

If you are a skilled tester, then you know that a question like “Can you explain to me how to perform testing of a datawarehouse and also provide me with a test plan?” cannot be answered. It’s as if you asked me “What is the mathematical equation that solves the problem I am thinking of that has something to do with data warehousing?” Nobody can answer that.

I could tell you about issues related to testing data warehouses, but I have no confidence that you would understand what I’m talking about or be able to act reasonably on that information. I’m not going to hand you a “test plan” and anyone who tries to give you a test plan is irresponsible.

Man, I think you need to learn how to test. Then you won’t feel the need to ask silly questions. I don’t know what kind of test Mercury gave you to certify you but it could not have been very hard to pass.

Mr. W.:
I think ur waste of time, just show-off of greatness , and empty vessel who makes lots of noise, I regreat I contacted u, ur a waste.

What is Going on Here?

One thing going on is that someone with an inflated sense of entitlement is offended that I won’t be his personal homework slave. But, let me itemize the problems:

  • Mr. W did not respond in a lucid and straightforward way to my questions. (There is, for instance, no such thing as a Mercury Certified Software Tester.)
  • Mr. W seems to be more concerned about me wasting his time than him wasting mine.
  • Mr. W seems to expect multi-page hand-crafted answers to single-line emails.
  • Mr. W seems to expect specific answers to vague questions.
  • Mr. W writes so poorly that I don’t think it can be chalked up to merely English-as-a-second-language syndrome. No English class on Earth is going to teach writing “u” instead of “you”. The only people who can be excused for writing “u” are teenagers texting each other, or teenagers writing in a chat window while simultaneously fighting level 55 scorpids in World of Warcraft.
  • Mr. W approached me with an enormously inflated notion of his own skills. I don’t believe anyone who is actually proficient in any test technique or approach would ask such a broad question as he asked, except as a joke. For one thing, it’s a question that would require a very long answer. For another thing, if he’s as proficient a tester as he claims, he would already know that answer.
  • Mr. W’s insults lack wit and specificity. I bet he tells all the writers he pisses off that they “just show-off of greatness”. Why, I haven’t even tried to show him my greatness, yet. I’m intentionally not showing greatness. Or maybe he considers it an act of show-off greatness to spell reasonably well, and make use of conventional grammar.

How to Ask Questions

Don’t be like Mr. W. If you want free consulting from a writer in the field, here’s how to get it:

  • Do not present yourself as if you are so lazy you can’t even summon the ambition to spell words and complete sentences.
  • Do reasonable homework before you approach a stranger to ask for answers. Learn to use Google. Poke around the many websites available.
  • Find out the basics of what that stranger does and believes. For instance, I am well known to be skeptical of certification programs. Citing that you are certified would just put me on edge.
  • For most strangers you approach, you probably should first ask permission to ask a technical question. In my case, consider permission granted. Here’s what you need to do: introduce yourself, state your situation, state your problem in detail, state what you have already done to solve your problem, and ask a specific question. Also, it doesn’t hurt to say something about how you know that I’m probably busy and how you would understand if I don’t have a lot of time for out-of-the-blue questions.
  • Be ready for the stranger to ask you to do some work to solve your own problem first, or to suggest that you are not yet ready to receive an answer. A good response to this is to do some more work and come back. You may also ask for more details about the kind of work they think you need to do.
  • Never blast your question to many consultants at once. It will get you instantly blackballed.
  • It never hurts to offer a favor of some kind in return for help.
  • In my case, if I don’t reply to you, try me again. Keep trying until I reply. The reason I don’t reply is usually because I’m really busy that moment. But squeaky wheels eventually get a response. Pradeep Soundararajan once sent me 18 reminders over a three month period until I finally answered his question. He has since received a great deal of my time, at no charge, just because I feel that he respects and values my help.

I will give time to four kinds of people: hard-working self-possessed people, warm and charming people, people who teach me something important, and people who pay me money so I can support my family. If you aren’t the fourth kind, and you want something from me, try to be one of the first three.

Against Certification

What follows is a somewhat grumpy argument against tester certification programs. I have mixed feelings about writing this, because I know a lot of otherwise friendly people who are involved in certification. I know there are a good many organizations committed to certification. I will probably lose some business because I’m going on record opposing it. What I hope, of course, is that I will gain as much business as I lose. My ideal client is someone who wants straight talk, rather than only happy talk.

Please keep this in mind, as you read: my opinion is based on experience, yes, and it is based on my feelings and priorities and reasons. But it is also formed through the dialectic process of considering what other people have to say. I invite you to respond.

Please do not support bad tester certification programs. If you are already certified, please don’t take it seriously or expect other people to take you seriously because of it. Thank you.

I know something about this. I was involved in producing and approving the original body of knowledge for the Certified Software Quality Engineer program for the American Society for Quality, in 1995. By the end of that process, I could not endorse the program.

I have reviewed the syllabi for ISEB and ISTQB as well as the SWEBOK. I have debated these syllabi with some of the people who helped create them. Attempts have been made to recruit me into two other certification attempts, both of which I turned down.

I’m a big fan of merit. I do not have a high school diploma. (For those of you into etymology, “diploma” refers to a folded piece of paper.) I left school because I could not put up with the certification mentality of academic life. It’s an empty-headed way of trying to fill heads. It’s soured by petty politics. I am in this industry because it rewards primarily merit. I think the current crop of tester certification programs threatens that, in much the same way that the Capability Maturity Model has stalled and withered discussion about real software process improvement in the government and military sectors of the software industry.

Because I keep getting asked about this, I’ve laid out my arguments, below.

“No Single Community” Argument

  1. Certification is a community phenomenon. Certification is simply a clarification of community membership. Nothing wrong with that, except when the certifying agency does not actually represent the community.
  2. I imagine, for some occupations, there are well-established and internationally recognized organizations that speak for those occupations. Not so with software testing.
  3. There are many communities within the testing industry. These communities have different ideas about testing; differing values and vocabularies. Some people say there is a consensus about “good practice” in testing. There isn’t. There is no process to determine consensus. There has never even been a serious attempt to form such a consensus. (A few friends getting together to agree on practices hardly counts as an industry consensus.)
  4. Most people who do testing for a living don’t take classes or read books. They don’t go to conferences. They are not community activists. (This impression is based on the informal polls I take in my corporate onsite testing classes and by my conversations with people who create certification programs.) Yet they may well be able to test software effectively. There is little outreach to such testers by the testing activists. The experience and creativity of most testers is therefore not being harnessed in any systematic way by people making up certification programs.
  5. Sometimes people tell me that there is no real controversy among testing thinkers about the true basics of testing. Then I argue with them for an hour and see what happens. I am a living existence proof of controversy, since my rapid testing methodology rejects much of traditional testing folklore. How people react to me reveals how they define their community: those who dismiss my ideas are telling me I’m not a Citizen of Testing, and thus they preserve their consensus by banishing those who do not give consent. Through the liberal banishment of anyone who disagrees, consensus can be achieved on any topic whatsoever.
  6. Although certifying agencies can speak only for their own organizers, their ideas are too often taken seriously by people who don’t know any better. This distorts the great conversation and debate about what testing is and should be. People who are not testing aficionados don’t know that the testing industry is fragmented. They don’t know that certification programs don’t represent consensus. Because they don’t know, they tend to assume that all the tester certification programs are pretty much the same, and that the certifying agencies are authoritative, and that people who are not certified must not know much about testing.
  7. An excellent certification program would have to be based on a comprehensive study (not just a survey or opinion poll) of testers in the field and in a variety of technology sectors.
  8. I cannot support a tester certification program unless it identifies its community, studies its community, and acknowledges the existence of other communities. That’s why I call myself a member of the Context-Driven School of testing and that’s why I give names like Factory School or Quality School to other factions in the testing world who have refused to name themselves.

“Bad Test” Argument

  1. A certification process is a testing process. I believe in good testing. Therefore, I look for a certification program that effectively tests a candidate tester for relevant qualities.
  2. I am interested in the ability of a tester to test: testing competence. The ability to remember word definitions and pat answers about oversimplified testing situations is not the same as testing competence.
  3. An exam that focuses on the way words are used is therefore a poor test of testing competence. Yet, there are no skill-based tester certification programs.
  4. Some people who agree with me that certification exams are a poor test of competence believe that it is at least a test for interest and commitment level. That’s fine, but there are also many other ways to demonstrate interest and commitment, and better ways too, since testing certification requires so little skill to acquire.
  5. I am aware of no tester certification program that actually guarantees or even indicates the quality of the tester. It has not been my experience that certified testers, of any stripe, perform any better in my testing classes (which include hands-on testing exercises) than non-certified testers.
  6. There are better tests available: a university degree in computer science, philosophy, psychology, law, math, electrical engineering, or even music. Pretty much any university degree deserves far more credibility as a certification for testers than the pathetic quickie classes that prepare people for tester certification.
  7. Certification exams do not measure the quality of a tester. Until they do, they merely facilitate discriminatory hiring practices.

“Chilling Effect” Argument

  1. Tester certification programs of ANY kind (even really good ones) necessitate a narrowing of views about what constitutes testing. The certification program designer must make choices about what’s in and what’s out.
  2. The mere existence of a testing “Body of Knowledge” or syllabus has a chilling effect on the development of new testing ideas that enhance or transform that notion of knowledge. This is because people will just use the syllabus instead of rethinking it, on the assumption that it is bad to rethink an answer when the answer is already known. But in an immature craft like testing, the mere existence of a syllabus does not mean that the problem of defining the craft or the skills of the craft has been solved– even for the community it purports to serve. We need experience with a variety of models of the craft and we need to test those models.
  3. A second aspect of the chilling effect is that committee politics interferes with innovation.
  4. A third aspect of the chilling effect is that vested interests, such as consultants who develop courses, want to keep the status quo. Rejecting change lowers course development costs.
  5. I believe that any attempt at certification, especially if it is done by a community steeped in traditional testing folklore, risks retarding our progress toward a better future as a respectable discipline.

“Folklore is a Bad Foundation” Argument

  1. Every syllabus I have seen is just a collection of folklore; paved cow-paths of popular testing mythology. Where is the critical thinking about this folklore? Why should we be satisfied with hearsay as our primary research method? I won’t take the space here to go into detail on specific examples, such as the boneheaded way boundary testing is taught, or the moronic principle that all tests must have a prespecified expected result, or that finding a bug late is more expensive than finding it early. There is just too much faith and not enough critical thinking in our craft.
  2. I believe that true excellence in testing is not about memorization or promotion of testing folklore, but rather about general systems thinking, epistemology, and the philosophy and methodologies of empirical research.
  3. The testing communities I have encountered, other than my own, almost without exception, express either indifference or contempt for the cognitive processes of excellent testing. The exceptions I have encountered are only among testing communities that don’t realize that they are testing communities (such as decision theorists or artificial intelligence researchers).
  4. I believe that even my own community has only just begun the kind of scholarship required to develop a robust idea of how to train and assess excellent testers. We are lurching awkwardly forth, but we are making progress.
  5. Testing folklore has some value, but it is no basis upon which to declare ourselves a mature and functional craft.

“It’s NOT a Starting Point” Argument

  1. Certification is not a starting point. The starting point is wanting to study to be a tester. A useful question is what helps a tester along the road to excellence.
  2. Misinformation about testing does not help testers learn their craft. See Folklore is a Bad Foundation.
  3. Because we are No Single Community, we can’t agree as a craft on what constitutes good or bad information about it.
  4. Because it is a Bad Test, attaining certification doesn’t even mean that the “starting point” has been attained.
  5. The idea that certification can be a productive step toward testing excellence is therefore an empty claim. It is not a productive step at all.
  6. Instead of certification, testers need study and practice. They should build their reputations in their local testing communities.

“Exploitation is Bad” Argument

  1. I want my actions to contribute maximally toward the better future of testing. I want to help other people contribute creatively to the craft.
  2. I observe that the motivations driving certification programs are mainly economic, rather than being rooted in a desire to improve the testing field. Consultants find that they can easily sell classes that are tied to certification requirements. I’ve been in conversations with such people, before they knew I was against certification, as I listened to them tell me what a gold mine certification is for them.
  3. Besides economics, some people who push certification are motivated by a desire for greater influence and respect than they would otherwise receive from their peers if they had to stand intellectually naked and alone and justify their thinking. I know a good many people– even people who speak at testing conferences and write books– who in my opinion have little knowledge or competence as testers.
  4. There are a lot of people out there ripe to be exploited, including novices who want an easy way into the craft, or managers who want an easy formula for hiring testers.
  5. I feel bad about these things because I have gone through a difficult and long process of working out my own ideas about testing and grounding them in the history and traditions of organized thinking. When the get-rich-quick personalities set up shop as testing experts without any grounding other than the kind regard of foolish friends, I get depressed.
  6. I am against certification when certification requires exploiting the ignorant.

“No accountability” argument

  1. You can’t sue a tester for malpractice, because no testing certification, under the law, establishes a true profession.
  2. What happens to the certified tester if he does a bad job? Nothing. Does he lose his certification? No. What skill must a tester demonstrate to maintain certification? None.
  3. Tester certification has no teeth. It has the same legal footing as World’s Greatest Boss or World’s Greatest Dad.

“Yes There is an Alternative to Bad Certification” argument

  1. I do support certification programs that are designed to promote personal responsibility and protect an activity from restrictive regulation. For instance, if people were dying too often from scuba diving accidents, scuba diving would eventually be banned; a certification program that reduces those accidents protects the freedom to dive.
  2. I generally support certification programs that provide reasonable protection for consumers in an inefficient market, without posing an unreasonable burden to trade and innovation. But in the case of testing, employers don’t need protection from bad testers (because they already have protection, see below), and even if they did need it, they aren’t getting it from any of the certification programs that currently exist.
  3. I support any organization’s right to decide who can be a member, within the law. There is a certification process already in the testing field. It’s called a job interview. This protects employers. This can be followed up with an assessment process called watching people do their jobs. This protects employers. If you don’t know how to tell if someone is doing their job, then you aren’t qualified to be supervising that person. If you are an employer who knows nothing about testing or software development, and yet you want to have testers and software developers working for you, consider hiring a project manager who does understand these things. Alternatively, you can hire an outside company to manage things. At least they are accountable. But who is accountable if a certified tester does bad testing?
  4. One alternative to certification is taking the free online Black Box Software Testing course. It’s a full bore testing course, with hours of video lectures and lots of other materials.
  5. Another alternative is to get friends to say good things about you. Become “colleague certified.” My business as an independent consultant, trainer, and expert witness is based almost entirely on my reputation. Reputation = opportunity = money.
  6. I could support a tester certification program only if I thought it was honest and useful both to the tester and to people dealing with the tester. Once our craft matures a lot more, I suspect I will support some form of industry-wide certification. Meanwhile, I can support certification programs that A) are identified with a particular community rather than with the industry at large, B) are skill-based, C) have undergone some kind of field testing process, and D) celebrate self-critical practice.
  7. There are no certification programs for testers that meet all of these criteria– not even BCRIT (Bach Certified Rapid Tester), which is my own program. BCRIT is still in development. I am not promoting it, yet. I may never promote it. I’ll promote it only when I’m prepared to support it with cogent reasoning and evidence. I call upon other would-be certificationists to do the same.