Designing Experiments

I experience intellectual work, such as testing, as a web of interconnected activities. If I were to suggest what is at the center of the testing web, on my short list would be: designing experiments. A good test is, ultimately, an experiment.

I’ve been looking around online for some good references about how to design experiments (since most testers I talk to have a lot of trouble with it). Here is a good one.

If you know of any other straightforward description of the logic of experiments, please let me know. I have some good books. I just need more online material.

How to Ask (and Not Ask) for Free Consulting

Strangers contact me on a regular basis, asking questions about testing. I don’t mind answering questions. In fact, until the day I get paid for answering questions, it will remain one of my favorite excuses for not working. But once in a while, someone will ask the wrong question in the wrong way. Watch how this guy (I’ll call him “Mr. W.”) asks for free consulting. These aren’t instant messages, mind you, but full emails:

Mr. W.:
It would be a great help if u can explain me how to perform testing of
a datawarehouse along with a test plan for same.

James:
I could tell you, but would you understand the answer? Do you have any training or experience as a tester?

Mr. W.:
James, I am a mercury certified tester so I think I can understand.

James:
You are a Mercury Certified tester? I don’t know what that is. Does that cover general testing skills, or just how to run Mercury tools?

Mr. W.:
Mercury is a brand name in testing providing various tools like winrunner, test director etc.Talking of general testing skills I am proficient in Module, integration ,system and UAT.

James:
I know what Mercury is. I didn’t know Mercury certified testers.

If you are a skilled tester, then you know that a question like “Can you explain to me how to perform testing of a datawarehouse and also provide me with a test plan?” cannot be answered. It’s as if you asked me “What is the mathematical equation that solves the problem I am thinking of that has something to do with data warehousing?” Nobody can answer that.

I could tell you about issues related to testing data warehouses, but I have no confidence that you would understand what I’m talking about or be able to act reasonably on that information. I’m not going to hand you a “test plan” and anyone who tries to give you a test plan is irresponsible.

Man, I think you need to learn how to test. Then you won’t feel the need to ask silly questions. I don’t know what kind of test Mercury gave you to certify you but it could not have been very hard to pass.

Mr. W.:
I think ur waste of time, just show-off of greatness , and empty vessel who makes lots of noise, I regreat I contacted u, ur a waste.

What is Going on Here?

One thing going on is that someone with an inflated sense of entitlement is offended that I won’t be his personal homework slave. But, let me itemize the problems:

  • Mr. W did not respond in a lucid and straightforward way to my questions. (There is, for instance, no such thing as a Mercury Certified Software Tester.)
  • Mr. W seems to be more concerned about me wasting his time than him wasting mine.
  • Mr. W seems to expect multi-page hand-crafted answers to single-line emails.
  • Mr. W seems to expect specific answers to vague questions.
  • Mr. W writes so poorly that I don’t think it can be chalked up to merely English-as-a-second-language syndrome. No English class on Earth is going to teach writing “u” instead of “you”. The only people who can be excused for writing “u” are teenagers texting each other, or teenagers writing in a chat window while simultaneously fighting level 55 scorpids in World of Warcraft.
  • Mr. W approached me with an enormously inflated notion of his own skills. I don’t believe anyone who is actually proficient in any test technique or approach would ask such a broad question as he asked, except as a joke. For one thing, it’s a question that would require a very long answer. For another thing, if he’s as proficient a tester as he claims, he would already know that answer.
  • Mr. W’s insults lack wit and specificity. I bet he tells all the writers he pisses off that they “just show-off of greatness”. Why, I haven’t even tried to show him my greatness, yet. I’m intentionally not showing greatness. Or maybe he considers it an act of show-off greatness to spell reasonably well, and make use of conventional grammar.

How to Ask Questions

Don’t be like Mr. W. If you want free consulting from a writer in the field, here’s how to get it:

  • Do not present yourself as if you are so lazy you can’t even summon the ambition to spell words and complete sentences.
  • Do reasonable homework before you approach a stranger to ask for answers. Learn to use Google. Poke around the many websites available.
  • Find out the basics of what that stranger does and believes. For instance, I am well known to be skeptical of certification programs. Citing that you are certified would just put me on edge.
  • For most strangers you approach, you probably should first ask permission to ask a technical question. In my case, consider permission granted. Here’s what you need to do: introduce yourself, state your situation, state your problem in detail, state what you have already done to solve your problem, and ask a specific question. Also, it doesn’t hurt to say something about how you know that I’m probably busy and how you would understand if I don’t have a lot of time for out-of-the-blue questions.
  • Be ready for the stranger to ask you to do some work to solve your own problem first, or to suggest that you are not yet ready to receive an answer. A good response to this is to do some more work and come back. You may also ask for more details about the kind of work they think you need to do.
  • Never blast your question to many consultants at once. It will get you instantly blackballed.
  • It never hurts to offer a favor of some kind in return for help.
  • In my case, if I don’t reply to you, try me again. Keep trying until I reply. The reason I don’t reply is usually that I’m really busy at that moment. But squeaky wheels eventually get a response. Pradeep Soundararajan once sent me 18 reminders over a three-month period until I finally answered his question. He has since received a great deal of my time, at no charge, just because I feel that he respects and values my help.

I will give time to four kinds of people: hard-working self-possessed people, warm and charming people, people who teach me something important, and people who pay me money so I can support my family. If you aren’t the fourth kind, and you want something from me, try to be one of the first three.

Could the Military Be Waking Up?

Ever since I got into the testing field, almost 20 years ago, it’s been a truism that military software development is moribund. It’s not that they love process, it’s that they love bad process. Documentation? Bad documentation. Who can look upon 2167A or Mil-Std-499 without dismay? I’ll tell you who: paper mills and people paid to arrange the ink on all that paper. It’s just a scam and a travesty.

I was asked, in 1998, to analyze two military software test plans, each for a different major weapons system. I told them up front that I was neither interested nor qualified to assess the test plans against DoD standards, such as Mil-Std-499. I was told, no problem, assess them against best commercial practice. Interpreting “best commercial practice” as what I would generally recommend doing for a life-critical or otherwise high-stakes project in the commercial sector, I created a model for analyzing test plans (now published on my website and also in the appendices of Lessons Learned in Software Testing). I then applied the model to the test plans I was given. What was immediately apparent is that the military test documentation had very little information density. It looked like the 75-page documents had been automatically generated by a set of macros operating on a short bullet list.

I made a bunch of suggestions, including showing how the same information could productively be packaged in 5 pages or so. That way, at least we could wage war on something other than trees. They replied, after two months of silence, that my analysis was not useful to them. Why? Because my ideas about test plan documentation were not consistent with Mil-Std-499. That was the only feedback I received about that work. Way to squander taxpayer money, guys! Hoowaa!

A New Hope

The defense department may be waking up to the problem, at long last. See the NDIA Top Issues Report. Notice that two issues I like to harp about are in the top five challenges: skills and testing. Notice that the military is now officially concerned about wasteful documentation and low-skilled workers.

Maybe not coincidentally, I recently taught my class at Eglin Air Force base, with F-15s thundering regularly overhead. I was surprised that they would invite me to teach there. I was a bit more surprised that they were quite receptive to the concept of skilled and agile software testing, wherein our documentation is produced only because, and only to the extent that, it actually serves a useful purpose.

Question: How Many Times Should You Run a Test?

Kevin asks: What is the best or industry standard for how many times a test case should be run?

There are questions that should not be answered. For instance, “What size unicorn do you wear?” or “How many cars should I own?” Sure, I could answer them, but the answers are worthless. My answers are A) I don’t wear unicorns and B) 2. In these cases, the more helpful reply is to question the question. For the first question, perhaps you said “uniform” and I misheard you. For the second question, perhaps you own a railroad and you were talking about train cars of different kinds, whereas I assumed you have a small family and were asking about automobiles.

I can tell you this for sure: No one I respect in the testing field will give you a direct answer to the general question of how many times a test should be run (except maybe as a joke).

Imagine if the answer was 100,000. Would you believe it? What if the answer was 7? Wouldn’t you wonder what was wrong with 6? I can imagine 7 being the right answer, but only for a very specific hypothetical case, not as any sort of general principle.

The first potentially useful answer I have is to tell you that this question would not even occur to you if you knew how to test, therefore, what you really need to do is start learning how to test. I mean if someone was re-wiring your house, and during that process he asked you what “voltage” is, wouldn’t you get someone else to wire your house? Like electrical work, plumbing, computer programming, or welding, good testing is a skilled activity.

I rarely give that answer, though, because I worry I will just leave people feeling discouraged.

The closest thing to a direct answer I can give you is this:

There exist no testing industry standards that are universally binding or even, in my opinion, more than negligibly helpful. Yes, there are documents that purport to be standards. If you are bound by them then you already know that. You aren’t subject to standards unless one has been imposed upon you by a regulating authority or by contract. Therefore, considering that testing costs money and time, I suggest that you don’t run any tests unless there is a reason to do so. In general, don’t do the same work a second time if you have already done it once. Certainly, if your clients would benefit from you running a test again, go for it. Otherwise, you are just indulging in obsessive-compulsive behavior, and you need help of a different kind than I offer.

A problem with this answer is that it begs the question of how you know when to run a test again. Fortunately, I wrote an essay on possible reasons to repeat tests. I can think of ten good reasons that you may want to repeat any given test (along with one big reason not to).

That’s a pretty good answer, but I think I can offer a little more:

Your job is probably to discover if there are terrible as-yet-unknown problems in your very complex product that you have little time to test. To do that job really well requires that you design and perform many tests, more tests than you probably have time to run. Therefore, when you run a test a second time, you are spending precious time and resources (even if it’s automated, though possibly less so) on something other than running a test you have not yet run that may find one of those big bugs you haven’t yet found. Get it?

So, how about having a small set of very basic tests that touch upon a lot of features of the product. You may even want to automate these. It should take ten minutes to run these tests, ideally. Perhaps as long as an hour. Repeat those for every build. Their purpose is to quickly detect huge obvious things that may be wrong. Call that the smoke test suite. For everything else, make a test coverage outline that lists every significant element of the product and every significant element of data. Visit the items on that list and test each one according to its importance and potential for failure. Whenever any part of the product changes, try to figure out what could have been affected, and retest that area, but using different tests; perhaps variations on what you’ve already done.
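To make the shape of that strategy concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the “product” functions (add_item, remove_item, total_value), the smoke suite, and the importance ratings in the coverage outline are assumptions, not a real API or a recommended framework. The point is only the structure: a fast suite that touches each major feature once, alongside an outline that rates each element so deeper testing can be budgeted by risk.

```python
# Hypothetical sketch of a smoke suite plus a coverage outline.
# The "product" below (a toy inventory) is invented for illustration.

def add_item(store, name, qty, price):
    """Add qty units of an item at a given price."""
    prior = store.get(name, {}).get("qty", 0)
    store[name] = {"qty": prior + qty, "price": price}
    return store

def remove_item(store, name):
    """Remove an item entirely; silently ignore unknown names."""
    store.pop(name, None)
    return store

def total_value(store):
    """Sum of qty * price across all items."""
    return sum(v["qty"] * v["price"] for v in store.values())

def smoke_suite():
    """One quick touch per major feature; meant to run on every build.
    Returns a dict of check name -> pass/fail."""
    results = {}
    store = {}
    add_item(store, "widget", 2, 5.0)
    results["add"] = store["widget"]["qty"] == 2
    results["total"] = total_value(store) == 10.0
    remove_item(store, "widget")
    results["remove"] = "widget" not in store
    return results

# Coverage outline: every significant element, rated by importance and
# potential for failure, to guide how deeply (not just whether) to test it.
COVERAGE_OUTLINE = {
    "add_item":    "high",    # core feature, many data variations
    "remove_item": "medium",
    "total_value": "high",    # arithmetic, rounding risk
}

if __name__ == "__main__":
    print(smoke_suite())
```

When a change lands, the outline (not the smoke suite) is where you would go to pick the affected areas and design fresh variations, rather than rerunning the same checks.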

By the way, the more you learn about testing, the less you will find advice like the preceding paragraph useful, because you will carry within you the ability to design your own test strategy that fits your specific purposes and contexts.