Variable Testers

I once heard a vice president of software engineering tell his people that they needed to formalize their work. That day, I was an unpaid consultant in the building to give a free seminar, so I had even less restraint than normal about arguing with the guy. I raised my hand, “I don’t think you can mean that, sir. Formality is about sameness. Are you really concerned that your people are working in different ways? It seems to me that what you ought to be concerned about is effectiveness. In other words, get the job done. If the work is done a different way every time, but each time done well, would you really have a problem with that? For that matter, do you actually know how your folks work?”

This was years ago. I’m wracking my brain, but I can’t remember specifically how the executive responded. All I remember is that he didn’t reply with anything very specific and did not seem pleased to be corrected by some stranger who came to give a talk.

Oh well, it had to be done.

I have occasionally heard managers express the concern that testers are variable in their work; that some testers are better than others; and that this variability is a problem. But variability is not a problem in and of itself. When you drive a car, there are different cars on the road each day, and you have to make different patterns of turning the wheel and pressing the brake. So what?

The weird thing is how utterly obvious this is. Think about managers, designers, programmers, product owners… think about ANYONE in engineering. We are all variable. Complaining about testers being variable– as if that were a special case– seems bizarre to me… unless…

I suppose there are two things that come to mind which might explain it:

1) Maybe they mean “testers vary between satisfying me and not satisfying me, unlike other people, who always satisfy me.” To examine this, we would need to discover what their expectations are. Maybe they are reasonable or maybe they are not. Maybe a better system for training and leading testers is needed.

2) Maybe they mean “testing is a strictly formal process that by its nature should not vary.” This is a typical belief by people who know nothing about testing. What they need is to have testing explained or demonstrated to them by someone who knows what he’s doing.


10 thoughts on “Variable Testers”

  1. Of course the point of invariability is to drive the cost down. It’s the factory concept, interchangeable parts, fungible resources, etc.

    If every software build and every project were the same, you could turn your brain to “off” and mindlessly execute a script, then read off the “Pass/Fail” status.

    If every testing task were the same, you could shift the task to the cheapest tester, or just automate it and be done with the whole concept of “tester”.

    Sadly, that’s what some in management seem to believe actually happens.

    We all face the challenge of educating management to the best of our abilities as to the value that testers and testing provide. We must acknowledge that our role is a thinking role far more than an executing role.

[James’ Reply: That is often how it’s framed. But, of course, “lower cost” is utterly nonsensical. If they wanted to lower cost, and that’s all they wanted, then they could fire all the testers. So “formalize” must mean “formalize without sacrificing effectiveness” and that means “formalize a process that we assume is effective and would remain effective if it were formalized.” This fails because effectiveness of testing is not an accidental matter– it’s an ongoing struggle. You have to work at it. And when management makes noise about formality and no noise about the quality of the work, quality of work collapses.

    Another subtle problem is that the level on which you formalize can destroy a test process or protect that process. If you formalize on a low level “always press the same keys” as opposed to a higher level “systematically practice your skills so that you choose good keys to press” you are literally erasing the testing, because testing is not about keys, but about systematic learning and re-learning.]

  2. It sounds like the problem was framed poorly.

    Is the issue how the product was tested, how the test results were reported, or how the test team interacts with the whole of the company?

    [James’ Reply: Indeed, it wasn’t framed at all, as I remember. It was just “let’s formalize.” But I got the impression he meant “write down your actions and then repeat those actions.”]

    How big is the test team? Can the VP of Software Engineering realistically take the time to learn and appreciate every tester’s approach and discover the value in every result reported?

    It’s not clear what exactly ‘formalization’ means, or what problem it’s trying to solve.

    [James’ Reply: The word formal means “according to form” more or less. As I see the word used around the industry, a formalized process is one that must be done in a specific way rather than any old way you feel like at the moment. When I speak of formal testing, that’s what I mean, too. This fits mathematical formality, sartorial formality, and any other sort of formality I have thought of so far.]

It sounds like you interpreted it to mean “we are implementing best practices devoid of context.”

    [James’ Reply: Close. “Best practice” means nothing and I have agitated against that phrase for years, so that wouldn’t be how I interpreted it. I interpreted it as “stop doing what each individual tester thinks fits their situation and solves the problems they face, and instead organize yourselves to do the same things in the same way, over time, across the testing organization. Everyone use the same techniques, tools, practices, documentation. Define the process model and then change the process to ‘follow the defined process model.’”]

    but it could also mean “the information testing is providing is not of clear demonstrable value, and a formal process will make our expectations clearer.” It could mean the company is formalizing all bug reports to include steps to reproduce. It could mean all testers are required to be available during a formal time frame to improve communication. The scientific method is a formal process.

    [James’ Reply: That’s an interesting point. That’s kind of like how when someone says “call the fire department” they aren’t referring to the call itself but to the need for help. Thus “formalize” might mean “figure out what should be done and then make sure you do it” or “remember that solution we have already decided on? You should organize yourselves to do that.” The effectiveness concern I have would thus be automatically handled.

    Okay, that’s a fair point… except for the fact that formalizing is easy compared to getting the process right. Unlike calling the fire department, where the fact of calling implies that help will come quickly, telling your people to formalize will probably lead them to formalize on whatever process idea is available. It probably won’t lead them into a long process of experimentation that culminates in responsible engineering.

    It’s easy to measure that something is formal in some way, but much harder to show that it’s a better way to work than alternatives.

    A note about the scientific method… calling it formal is a stretch. It might be fair to call it “slightly formal.” See Against Method, or Science in a Free Society. See The Golem, or The Golem at Large, or Changing Order, or The Structure of Scientific Revolutions, or Exploring Science, or Science as a Questioning Process, or The Mangle. These are all books about the scientific method in practice. Sociologists who study the practices of science have a lot to say about the formality of them, and lack thereof.]

    Formalization may cut variability, but not all variables share the same value given the context of the problem.

    [James’ Reply: Amen.]

  3. Gutsy move! 🙂

    I think it is safe to say that testing is “both an art and a science”*.

    I can imagine this idiot VP telling a group of artists that they “needed to formalize their work”. “Only paint this way!” Idiot.

    I’d like to “pop the why stack”, see what’s really bugging him, and address that, instead.

    *That’s kind of a silly phrase, but I’m ok with the spirit it conveys.

  4. Sometimes I’ve seen the root of this problem being, “It takes me too long to discover a tester that I cannot trust.” Basically, if the manager is a developer, then within a few months, when the code doesn’t work, he can read it and know there was a hiring mistake. Without learning anything about testing, this person wishes to avoid trusting any tester, and so tries to make testing something it isn’t to compensate for their own blindness.

    There is also an idea that repeating the same tests is a good idea. To me, this is a “worst practice”. Spending a great deal of time to repeat the same tests is much like reading the same 3 books your entire life. There is value in re-reading a good book at different times of your life, but only the best books at the most relevant times, and it certainly doesn’t make sense for re-reading to be the bulk of all reading. Still, I can’t talk some people, who are otherwise quite rational, out of this practice.

    Last, but not least, I’ve seen the argument that a more formal process for testing is more professional. This infuriates me, as I believe telling lies and misleading others is morally deplorable. If we were providing an artistic performance rather than a job with output (that output should be data of interest to stakeholders), making testing look easier than it is would be justifiable. Testers would be much like figure skaters who smile & wear sequins even though they are performing difficult athletic tasks. Since good testing isn’t simple, over simplifying it to make it appear easier is deception rather than enhancing a performance, so I don’t agree that it is a sign of being more professional. In fact, putting other people down, and insinuating that you are more professional than others is one sign I look out for to detect scams.

  5. It all comes back to a simplistic idea people have about testing. Or should I rather say checking? They understand checking; they don’t understand testing.

    [James’ Reply: They understand neither. It’s just that checking doesn’t frighten them.]

    So now we have testers doing something that does not fit the simplistic schema the manager has of what testing should be. This leads to a loss of trust.

    [James’ Reply: I wouldn’t say that alone leads to loss of trust, though. I don’t understand the ins and outs of cardiac surgery, so I’m not bothered if a cardiac surgeon’s ideas don’t match my ideas. It’s only if I think I do understand something that I might be concerned when someone else has a different idea of it.

    Our problem is more that testers have allowed simplistic ideas about testing to flourish. We then are victimized by those fairy-tales.]

    Testers also don’t have the best record for doing enough advertising/education of non-testers. So the manager reacts as I’d expect him to react. He starts to put on the thumb-screws. The ideal being somewhere out of the pages of diverse ISO standards and factory models.

    [James’ Reply: It’s an ideal for certain managers as well as certain consultants, but for different reasons. It’s an ideal for managers because they think it represents a stable, controllable process (regardless of its effectiveness). It’s an ideal for consulting companies because working that way guarantees lots of billable hours (regardless of the effectiveness).]

    In his eyes he will still be “testing,” and he will actually get what he wants. This has nothing to do with good process or quality but everything to do with him getting what he believes he should be getting. Since quality is often difficult to define and “see,” the effects of this idiocy appear far down the line, and by the time they eventuate it might be too late to fix (see the diverse big IT failures worldwide; there is no shortage of examples). By that time the manager has long since moved on to the next thing. He will not have learned and is bound to rinse and repeat.

    [James’ Reply: Yes, that is one reason fake software testing continues to thrive.]

  6. I think the problem is that within a large test team, not all testers are equal. Some will be better than others, some will be worse. I have worked with some who definitely fit into the worse category.

    [James’ Reply: Me too. But variability is not itself a problem. There is variability in every walk of life. Variability can lead to many good things.]

    A highly formal testing service, although it constrains the better testers, ensures (or at least helps to ensure) that the worse testers are performing to a recognised standard.

    [James’ Reply: But, no, it really doesn’t. Because skill is not captured in that formalization process. I agree that the whole point of formalization is to “lock in” to something worth doing. And I agree that can be valuable. And I do that. But neither you nor I can design forms that prevent bad testers from being bad testers– or even make it difficult for them.

    To solve the problem you have cited, you need to use training, personal supervision, and a discerning process of evaluation. That’s why the U.S. World Cup team is not selected by lottery, but rather by judgment after a long process.

    If employers are choosing incompetent testers and then hoping that formalization of the test process somehow will fix that, they are being proven wrong wrong wrong every day. Perhaps they won’t recognize that they are wrong, however. Perhaps they will simply hold testers in low regard and look for the first opportunity to outsource.]

    It would be better if these people could just be removed (take that in any way you wish) or at least trained, but for some reason, managers do not seem to like either of these options.

    [James’ Reply: Yes, that’s because the managers themselves are not trained and properly supervised. The testing crisis in our field is a management crisis.]

  7. Hi James!

    Sorry if it is not related to the topic of your blog!

    When it comes to the relation between the supplier and the customer I’ve experienced one problem I have no idea how to deal with.
    Say you are on the customer side and you do some testing of the product delivered by the supplier to you. And you find defects, many defects, some of them are obvious for you.
    You have a contact with the test leader on the supplier side. So you decide to bring this issue up to that person.
    And you get an answer: “we have tested but missed; what’s the problem?” delivered with “NotMyProblemWhateverYouSay” emotion on the face.
    And this happens several times.

    How to deal with such an attitude? I understand that it is not possible to find all the defects; and the defect itself can have different meaning for different persons. But not when this argument is misused! On the supplier side!

    Thanks in advance!


    [James’ Reply: I would escalate to higher management.]

  8. The management at my current employer have used a couple different specific formulations (that generally look like ) as a _lingua franca_ integrating problem diagnosis and new feature development in an Agile framework.

    One of the things we try to do here is integrate testing into the model by talking about how an investment in testing reduces risk.

    Managers who understand that model mathematically, but who don’t really “get” software testing — deep down in their souls, they want to hear something convincing from the test people like “as near as we can tell, the risk of this feature is [some value or function], and we think if we spend [some range of time – the tighter the better] doing [marginally intelligible technical jargon] we can reduce the risk to [some other range of values or functions]. You want maybe we should do that?” Or, even better, have the test group come out with all that plus “…and we figured it was worth the investment so we’re adding it to the plan, which may delay this feature by a release but the overall cost is lower as you see here and here, is it OK we’re doing that”.

    A problem comes in when people aren’t clear enough about uncertainties and/or don’t take the time, every now and then, to step back and re-assess old risk models, possible new ways to build and/or test the system, and so on. Managers who are then used to being given artificially precise risk models may not even want to hear “We’re having more problems than we should with the current process, so we gotta go really look at the system and see what we can find, after which we may have a better idea about some specific testing and/or development initiatives to look into that will reduce technical debt and clearly pay off … in the long run.” That’s just way too many steps of unpredictable and revolutionary stuff for some people to be comfortable with.

    [James’ Reply: Welcome to life.]

    So I try to figure out how to sneak the thinking time in. I’ve worked “exploratory testing” into our process for new projects and significant maintenance projects. I’ve had to hand-wave an argument for why we do it a few times, but never really been called out to show a deck of slides to the heavy management to justify why we budget time for exploratory testing and what they get for it. They seem to trust me so far. But I would understand, in a way, if management were to choose not to trust me and to get riled up about the most abstract of our exploratory test session charters. I wouldn’t like it, and would have a hard time working with it, but I’d (sort of) understand it.

    [James’ Reply: You would understand if management didn’t trust you and instead trusted their one-step-above-astrology rationalizations that are based on no valid mathematics or empirical evidence? I would not understand. I would not stand for it.]

    Does that make sense at all?

    [James’ Reply: You have ample scientific support for exploratory testing. Indeed, exploration is the way of science itself.]

  9. Hi James,
    Just a quick note to say I’m just starting out in software testing and have really enjoyed watching your YouTube videos and reading your blog. I’d just like to say that watching you talk about testing in that way is just amazing, and it fills me with excitement about what this will be like to do for a living. I have always liked puzzles, thinking outside the box, etc., so it’s good to hear you talk about how this can be an advantage. I have just come out of the military after quite a while and still feel that I will really love the new challenges this line of work will bring, and I enjoyed my 6-month course as well. I just hope now to get the chance with some companies to get the experience. Well, anyway, I just thought I’d drop a line to thank you for your great and much appreciated encouragement, James.
    God bless and good health always,
    [James’ Reply: Thanks, man!]

  10. James,

    Thanks for the post.

    I remember a similar story. I was invited to a company to deliver a guest lecture some 4-5 years ago. It was on the subject of fuzzing. Without consulting me, the company put up an advertisement for my talk with the words “Learn how to hurt the developer’s ego.” When I arrived, this poster was right there in my face. I was very offended and thought about calling off my lecture. Then I did something better. I quickly modified my slide deck and added a slide describing how silly the advertisement read and why, before learning about fuzzing, their company should learn that testers do not work to hurt anyone’s ego.

    To my surprise, the head of L&D talked with me about it later and admitted that, in the spirit of making the advertisement catchy, they had made a blunder 🙂
