A Test is a Performance

Testing is a performance, not an artifact.

Artifacts may be produced before, during, or after the act of testing. Whatever they are, they are not tests. They may be test instructions, test results, or test tools. They cannot be tests.

Note: I am speaking a) authoritatively about how we use terms in Rapid Testing Methodology, b) non-authoritatively of my best knowledge of how testing is thought of more broadly within the Context-Driven school, and c) of my belief about how anyone, anywhere should think of testing if they want a clean and powerful way to talk about it.

I may informally say “I created a test.” What I mean by that is that I designed an experience, or I made a plan for a testing event. That plan itself is not the test, any more than a picture of a car is a car. Therefore, strictly speaking, the only way to create a test is to perform a test. As Michael Bolton likes to say, there’s a world of difference between sheet music and a musical performance, even though we might commonly refer to either one as “music.” Consider these sentences: “The music at the symphony last night was amazing.” vs. “Oh no, I left the music on my desk at home.”

We don’t always have to speak strictly, but we should know how and know why we might want to.

Why can’t a test be an artifact?

Because artifacts don’t think or learn in the full human sense of that word, that’s why, and thinking is central to the test process. So to claim that an artifact is a test is like wearing a sock puppet on your hand and claiming that it’s a little creature talking to you. That would be no more than you talking to yourself, obviously, and if you removed yourself from that equation the puppet wouldn’t be a little creature, would it? It would be a decorated sock lying on the floor. The testing value of an artifact can be delivered only in concert with an appropriately skilled and motivated tester.

With procedures or code you can create a check. See here for a detailed look at the difference between checking and testing. Checking is part of testing, of course. Anyone who runs checks that fail knows that the next step is figuring out what the failures mean. A tester must also evaluate whether the checks are working properly and whether there are enough of them, or too many, or the wrong kind. All of that is part of the performance of testing.
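The distinction can be made concrete with a minimal sketch (the names and the toy function here are hypothetical, invented for illustration, not from the post): a check is an algorithmic comparison of one observation against one fixed decision rule.

```python
# A check reduces one observation to a binary verdict by applying a
# fixed decision rule. It can report PASS or FAIL, but deciding what a
# failure means, and whether this is even the right check to run,
# remains part of the human performance of testing.

def discount(price: float, is_member: bool) -> float:
    """Toy system under test: members are supposed to get 10% off."""
    return price * 0.9 if is_member else price

def check_member_discount() -> bool:
    """One observation, one decision rule, one binary outcome."""
    observed = discount(100.0, is_member=True)
    return abs(observed - 90.0) < 0.005  # within half a cent of expected

print("PASS" if check_member_discount() else "FAIL")
```

Note that the check encodes only what its author anticipated: it says nothing about negative prices, rounding of odd amounts, or whether a 10% discount is even the right policy. Those questions belong to the tester, not the artifact.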

When a “check engine” light, or any other strange alert, goes on in your car, you can’t know until you go to a mechanic whether it represents a big problem or a little problem. The check is not testing. The testing is more than the check itself.

But I’ve seen people follow test scripts and only do what the test document tells them to do!

Have you really witnessed that? I think the most you could possibly have witnessed is…

EITHER:

a tester who appeared to do “only” what the test document tells him, while constantly and perhaps unconsciously adjusting and reacting to what’s happening with the system under test. (Such a tester may find bugs, but does so by contributing interpretation, judgment, and analysis; by performing.)

OR:

a tester who necessarily missed a lot of bugs that he could have found, either because the test instructions were far too complex, or far too vague, or far too sparse (because that documentation is darn expensive), and the tester failed to perform as a tester to compensate.

In either case, the explicitly written or coded “test” artifact can only be an inanimate sock, or a sock puppet animated by the tester. You can choose to suffer without a tester, or to cover up the presence of the tester. Reality will assert itself either way.

What danger could there be in speaking informally about writing “tests?”

It’s not necessarily dangerous to speak informally. However, a possible danger is that non-testing managers and clients of our work will think of testers as “test case writers” instead of as people who perform the skilled process of testing. This may cause them to treat testers as fungible commodities producing “tests” composed solely of explicit rules. Such a theory of testing, which is what we call the Factory school of testing thought, leads to expensive artifacts that uncover few bugs. Their value is mainly in that they look impressive to ignorant people.

If you are talking to people who fully understand that testing is a performance, it is fine to speak informally. Just be on your guard when you hear people say “Where are your tests?” “Have you written any tests?” or “Should you automate those tests?” (I would rather hear “How do you test this?” “Where are you focusing your testing?” or “Are you using tools to help your testing?”)

Thanks to Michael Bolton and Aleksander Simic for reviewing and improving this post.


10 Responses to “A Test is a Performance”

  1. Joe Strazzere Says:

    Thanks, James (and Michael and Aleksander)!

    While I’m uncomfortable with the connotations of the word “Performance”, I understand the concept here. It’s powerful in many ways.

    For example – metrics.

    When you experience a performance, you can review it and say “that was a superior performance”. That’s more impactful than simply counting the number of artifacts produced (and thus incenting the production of these artifacts).

    [James' Reply: Another element is that performances are never entirely repeatable.

    BTW, thanks for giving me the idea for that post.]

  2. Isaac Howard Says:

    James,

    So, if indeed testing is a performance, and some of the artifacts of testing can be documentation (in whatever form), how does one communicate the performance to another?

    [James' Reply: You know the answer, already. You've seen performances. You've learned how to perform. You learned how to greet people who come to your door, for instance (there's no one way to do it; how you do it depends on factors including time of day, type of visitor, etc.).

    Figure skating is a performance. How do people communicate it? By sight and sound. By narrated demonstration. By conversation. By expert diagnostic coaching. By documentation and diagram (see http://www.usfigureskating.org/content/BS-compmanual.pdf). A figure skating performance cannot be recorded from the point of view of the person giving it. It cannot be "played back" through the actions of someone else. The closest thing to that would be someone interpreting a routine and, having the same skill set, reproducing the performance in its essences.

    I choose figure skating because when I was a kid I was a figure skater, so I know a few things about how one learns to skate.]

    In music I can record the medium on audio files (tape, CD, mp3). Whereas with your sock puppet example, we could record it live with video, and it might be fun to watch.

    [James' Reply: You could record it in that way, and in so doing you could reach more eyes and ears. But you would not be reproducing the dynamic of the performance itself. The proper analog would be to record two musicians improvising without hearing each other or knowing the other's key or time signature. When you put their recordings together they will not sound good. Improvisation in a group depends on listening and reacting in the moment. Same with testing.

    Recording technology can be used to help in lots of ways in testing. It can be used to amplify or multiply various effects or actions. This is all potentially good, but I would not say that doing that, in and of itself, is doing testing. Testing requires human judgement in the moment. If you buy a Hallmark card, it might help your courtship, but it is not "pre-recorded courtship!"]

    In the act of testing on the other hand, how does one demonstrate to one’s peers that indeed one knows how to test? Are the artifacts of testing enough? (Charters, Issue Reports, MindMaps, etc) I ask this because I have met people who are capable of producing what I consider beautiful artifacts, but are incapable of turning those artifacts into a performance. Can you distinguish those people without asking to see a performance?

    [James' Reply: Oh I quite agree that beautiful artifacts do not necessarily imply good performance. I am good at creating documents and I know that only too well. Based on nearly a decade as a test manager before I became a consultant, and based on a great deal of experience teaching and coaching since then, I believe that ONLY through periodic witnessing of actual testing performance (following along as it happens, even if you are following along at a distance) over a period of time, can you reliably evaluate a tester. On a regular basis I am surprised by the poor performance or good performance of a tester I thought I knew pretty well. I have learned to be conservative in my opinions about testing skill.]

    Watching a video of someone testing would be long, possibly boring (unless the tester is vocal or expressive) and file size intensive. Is it possible to tldr a testing performance?

    [James' Reply: There are indicators. Vocabulary is a tricky one, though. There are certain beliefs that may seem to disqualify a tester quickly, too ("I can do complete testing and assure total quality with my automated test system!"). But I can think of examples where people who talk like idiots turn out to be hypocrites: their behavior shows a whole lot of testing wisdom that doesn't match their vapid words. Of course the opposite is easy to find. Perhaps someone tells me he's an experienced tester, and then completely fails to solve a testing problem that a novice next to him seems to have no trouble with.

    I would say, after years of working on this and trying things: I need to see a tester in action on a project in order to feel deeply confident that I know what he's made of. This is why I usually do the next best thing. I use a variety of practical testing exercises, of various shapes and sizes, and patch together an impression from that.

    I would like us to build a testing community that earns such a good reputation that being respected within it would confer respectability in the broader world. We're not there yet, but we're trying.]

    Fundamentally I ask this not for the people I work with on a daily basis; we actively participate in each other’s testing efforts. But if I were to attempt to publish artifacts to the outside world, can anyone be assured from my artifacts that indeed I know how to test (in the informal sense)?

    [James' Reply: Testing artifacts have a somewhat weaker power to corroborate your expertise than they have power to condemn it. On my website, you'll find a number of my own artifacts. You can judge for yourself or ask me about them.]

    Thanks,
    Isaac

  3. Florian Jekat Says:

    Hello James,

    that’s a very good post. A test is a performance. That’s the point. A test becomes a test by doing and thinking, I think. To use your words, an artifact needs somebody to be created or used, someone who makes a performance with it. Maybe this is a difference between test and check, too. There is more thinking and doing in performing a test than in doing a check.
    A nice metaphor comes to my mind. It’s like yours with the music: a dance, maybe a tango, becomes a dance only in performing it, which means you have to move your body, your feet, feel the music and think about your next figure to come.

  4. Keith McFall Says:

    James,
    After my holiday-related, self-inflicted news and blog black out, all I can think of after reading your post is how I am going to market a new line of sock puppets named “TeCC” (for Test Case Creature) and take it with me on all of my projects. In a meeting I would throw it on the table and say “There’s the test case. It doesn’t do much like that. Now what?” And I would say, “But you see, when you put the sock on my hand you actually have something…”
    Maybe I can find a friend that knits to make me one!
    Cheers,
    Keith

    [James' Reply:

    Tester: "What shall I test, Socky? Without documentation I am all at sea!"

    Sock Puppet: "Try boundary tests!"

    Tester: "Thank you, Socky! I'm so glad I had the discipline to sit down and draw a face on my sock and put it on my hand and make it seem talk as if it were telling me something that I don't already know. Because now when I do the same testing I would have done otherwise I can point to you so that my manager thinks that I didn't just make it up on the fly! Sock puppets are great!"]

  5. Rikard Edgren Says:

    Good stuff!
    I believe the biggest danger of the artifact view is in testing education.
    A lot of traditional testing education revolves around testing as a noun (test cases, coverage numbers, document templates etc.) as opposed to testing as a verb, a performance.
    I don’t think that’s the right way to teach testers’ skill.
    But it is seemingly easier, especially when it comes to exams (can you imagine a multiple choice exam evaluating testing as a performance…)
    On teaching testing as a performance, I agree with your replies to Isaac’s comment, with an emphasis on feedback and authentic problems.

  6. Jon Foley Says:

    Great post! Too frequently I encounter other testers who stick strictly to the script and don’t understand the creativity that goes into effectively testing software. They don’t “get it.” They don’t understand that being an effective tester means conjuring up ways to use the software that weren’t thought of, intended, or designed. They treat everything as a workflow and anything outside of it as a corner case. Coming from a previous creative product environment into enterprise-level software revealed a lot more of this behavior than I expected as well.

    Unfortunately much of this behavior falls into the “Can’t teach an old dog new tricks” category. Too many people are ingrained in their ways, unwilling to think on their toes.

  7. Mario Gonzalez Says:

    >> Such a theory of testing– which is what we call the Factory school of testing thought– leads to expensive artifacts that uncover few bugs.
    To be convincing you need to base your arguments on facts, not simply on wishful thinking or circular logic.

    For instance: There are many worldwide studies that show that exploratory testing is AS EFFECTIVE AS scripted testing.

    [James' Reply: There are no studies that demonstrate that. There ARE studies that make that claim, and each one that I've seen would not pass muster even in terms of correctly identifying the phenomenon that they purport to study, and aside from that fail on other grounds, such as sample bias, experimenter effect, and a general lack of attention to threats to validity.

    If you want to debate the specifics and merits of any such study, list specifics and I will tell you how they don't show what you seem to think they show.]

    Further, ET might even introduce technical debt. Translated for your understanding, this means “bad things to software”.

    Sources:
    - http://www.soberit.hut.fi/jitkonen/Publications/Juha_Itkonen_Licentiate_Thesis_2008.pdf
    - http://www.testingmentor.com/publications/articles/Empirical%20Evaluation%20of%20Exploratory%20Testing.pdf
    - http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=6475929

    [James' Reply: Yes, I see the links. Are you able to follow the logic of those reports and compare it to the logic and arguments that I have already published?

    1. Please see this from Michael Bolton: http://www.developsense.com/blog/2010/01/defect-detection-efficiency-evaluation/

    2. Then respond to his critique. Perhaps that would move the ball forward for you.

    I would take you a lot more seriously if you showed that you listened to, understood, and responded to arguments. We are deep in the midst of our work, at the coal face, while you are way up at the mouth of the mine yelling down at us. Walk in, Mario! Come and see what we are doing. Follow our reasoning and then you will have a basis to criticize it.

    But as I've told you before, we have a paradigmatic schism between us (you should REALLY try to read more about paradigms and incommensurability, it would calm you down and you'd be a happier man) and this probably makes productive discussion impossible. I basically disagree with your entire testing ontology. And if someone gets the meaning of the word testing wrong, why bother with the details of evidence? No evidence will matter.]

    >> Their value is mainly in that they look impressive to ignorant people.
    You are talking here about yourself and the CDT “school of testing”… Charlatans always feed on ignorant people.

    [James' Reply: I'm sorry that you feel that way. Obviously, I don't see how that responds in any way to my published arguments that you keep ignoring. But you serve as a reminder that science is a social activity, not a strictly rational one. I can cite entire bodies of work in Epistemology, going back to Socrates, Pyrrho, Hume, or modern fallibilists, such as Popper, Lakatos, or Kuhn, or social scientists such as Harry Collins, whose entire career refutes your fetish for formality.]

  8. Mario Gonzalez Says:

    [COMMENT REDACTED]

    [James' Reply: Mario, perhaps you are too young for this, or maybe I am too old. Anyway, I notice that your comment about facts contained no facts, and so it failed to get past the irony filter and has been redacted according to my policy on comments.

    Please respond in good faith to the arguments that I've already made and published if you want to be useful.]

  9. Mario Gonzalez Says:

    >> Please respond in good faith to the arguments that I’ve already made and published if you want to be useful

    That was a good chuckle. You do have a way of breaking the ice.

    It is interesting you would censor one response but not the other. But, like you yourself said before, you are in the best place to prove that what you claim is true, so please go ahead. If not for your reputation, for the sake of your followers.

    I expect you to censor this response in one way or another.

    Best,
    Mario Gonzalez

    [James' Reply: An interesting set of ethical and social challenges come with running a professional blog. Among them:

    1. I have a responsibility to commenters to minimize any editing that may distort their comments. I must preserve the substance and intent of comments to my best ability.

    2. I have a responsibility to commenters not to unduly exploit or hold up to ridicule their ignorance of testing or lack of social skills. I think for powerful people it is fair to let them look stupid if they are determined to do so, but to the degree that someone is weak and pathetic or merely young and confused, it becomes abuse for me to let them comment such that they harm themselves.

    3. I have a desire, for marketing and branding reasons, to offer my readers value. Therefore, comments should not be a complete waste of time. (For instance, this comment you made, although it is passive-aggressive drivel, at least allows me to explain how comment moderation works.)

    4. I have a desire not to let people use fake arguments with me as a way to raise their own Internet profile. I don't mind if they improve their profile through genuine debates, though. That's all part of the free market doing its work. But some people have gone so far as to create fake identities and fake testing challenges directed at me for publicity purposes. Can you imagine such childishness?

    Every comment of yours that I have accepted was because I thought it was worth reading. Every comment of yours I have redacted has been due to reasons 2, 3, and 4. Usually all at once. The comment you say I "censored" was one that offered nothing but empty sarcasm and whining. It wouldn't make you look good. Trust me, it's better if no one sees it. You can do better, man. And remember, you can publish your comments yourself on your own blog. What we are talking about here is what shows up on my blog. It is my right, responsibility, and pleasure to police that material.]

  10. Michael Bolton Says:

    For a more extensive treatment of this discussion see “Test Cases Are Not Testing: Toward a Culture of Test Performance” by James Bach & Aaron Hodder (in http://www.testingcircus.com/documents/TestingTrapeze-2014-February.pdf).
