I’ve released a new video, which is a whimsical look at a serious subject: explaining exploratory testing.
In the video, my brother and I independently test an “Easy Button” for 10 minutes. Neither of us had seen the other’s test session. Then I edited the 20 minutes of total testing down to a 4-minute highlight reel and added subtitles.
The subtitles are important. One of the core skills of excellent testing is being able to reflect upon, describe, explain, and defend your work. The rhetoric of testing is a big part of Rapid Testing methodology.
So, everything we did, we can explain. If someone stops me when I’m testing, I can give a report on the spot, in oral or written form, and I can put specific technical terminology to it. In my experience, most testers are not able to do that, and there’s one major reason: they don’t practice. It does take practice, friends. While you were enjoying your Sunday, my brother and I were challenging each other to a testing duel.
You might quibble with me about the specific terminology that I used in the video. Indeed, there is a great deal of leeway. One single test activity might simultaneously be a function test, a happy path test, a scenario test, a claims test, and a state-transition test! There’s no clean orthogonality to be found. And as you already know if you read my blog, I reject any “official” lexicon of testing. But I’m not just throwing these terms around: I can explain each one, and say what is and is not an example of it.
What about the Easy Button?
Our principal finding is that the Easy Button is extremely durable. I’m surprised at the high quality of the fit and finish. It also feels solid (I discovered why when I disassembled it and found what appear to be lead weights inside). Plus, the button surface is amazingly resilient to repeated hard blows with a rock hammer.
But I’m also surprised that it claims not to be a “toy.” Of course it’s a toy. Of course little kids will play with it.
If I were seriously consulting about testing it, I would probably suggest that its physical qualities are more important to validate than its functional qualities. There appears to be little risk associated with its functionality. Then again, there appears to be little risk in its physical qualities, either.
I would suggest that it’s far more important to test the web version of the “Easy Button” than the physical version. I would move on to that.
Darren Ryan says
Interesting post and video. A few months ago, when my team was hiring new testers, a few of us thought of placing a drink bottle in front of each candidate and asking them how they would test it, a similar situation to the one you and your brother put yourselves in. Have you ever done this in a job interview?
[James’ Reply: That’s a common tactic, and it can be quite revealing if you know what to look for. It’s the testing equivalent of a Rorschach test. I’ve used black rubber balls, staplers, whiteboard pens, pictures of screens, and of course, actual software. Once I even used the software that my group was actually testing.]
Michael Bolton says
Far more than just in interviews, James and I use this approach in class (with brave volunteers) and in coaching sessions, using various odd objects. We deliberately raise obstacles and traps, not only for diagnosis of potential problems but also to give testers the experience of wrestling their way out of the trap. Both class participants and instructors learn a ton from this approach.
Jon Bach says
I’ve used a lot of physical objects, too — especially good are obscure objects from a kitchen store or a hardware store — little things that aren’t obvious as to their function. I look for the ability to describe, analyze dimensions, conjecture (and corroborate and refute those conjectures), and imagine test ideas once they either discern (or I tell them) what the object does.
Mostly in interviews, I have used software. It’s not hard to have a laptop with absolutely anything (notepad, mspaint, a browser, the File/Open dialog) and stand at the whiteboard the whole time as you take notes in front of them, just as you might do for software you test. I document their test ideas, executed tests, issues, and bugs. The point is to interact with them as they explore, to get them out of their resume and put them and their ideas in a working context.
Joe Harter says
James, I have a question.
You were able to reflect upon, explain, and defend your work, but I sometimes find that when I’m in the moment I’m not really thinking, “Now I am doing configuration testing” or “Now I am stress testing,” unless that was the purpose of my charter. I don’t really have subtitles in real time. In retrospect, were you thinking, “Let me get this rock hammer and stress test this Easy Button”?
[James’ Reply: My credibility as a tester depends on me being able to reflect upon what I do and why I’m doing it. It depends on being able to explain my work, as necessary.
I brought the rock hammer with me to the test, because I knew I wanted to do a shock-to-failure stress test. As you see from the video, I am talking about what I’m doing as I’m doing it (see my remark about the shoe test), and in the 16 minutes not shown, Jon and I are both explaining as we go. For instance, Jon announces that he will do state-based testing, and then begins doing it.]
Following is an IM conversation between me and another tester, Shmuel, who is the legal testing expert in our department:
I liked how a side subject in a video with one aim could develop into a new testing idea (and pardon the over-criticizing of the hammer work :-))
Me(after forwarding this link):
BTW, he also talks about testing legal aspects, with an interesting idea we have not tried in our group (AFAIK).
He is correct in bringing up that legal aspect. I am not sure “Easy Button” can really be understood as a service mark; it is more a trademark. But I am no lawyer!
Which, actually, was the only bright test James did in the video. All the rest was hammer banging. Jonathan was more methodical (even if more boring, without hammers).
We do have marks tests in, and we even have bugs, as YYY and ZZZ never got TM&B clearance :).
Sure – but are you actually testing that the marks are registered as JB suggested?
No more than checking with TM&M and their online database. 🙂
But it would be funny if we caught them lying. 🙂
I just checked the XXX™ trademark; it is registered. The rest of our names are not registered, as far as I know; we just get clearance to use them (and even if we want to claim ownership, you don’t really need to register a mark to own it, but registration is an added protection).
[James’ Reply: The video condenses 20 minutes down to 4, and the clips are out of order. I didn’t show all of what both of us did. My hammering happened at the eight-minute mark, after I had exhausted all of the other things I wanted to do with the button.
The method I used was to begin by reviewing the package and doing what I could do without destroying anything. I got a lot of ideas, some of which I wrote down for future reference (notice that, unlike Jon, I had a notebook). I then proceeded to functional testing and structural analysis just like Jon did (but I showed Jon doing those things in the video).
Jon thought of getting it wet. Later I took one of them apart and immersed it in water. It stopped working until it dried out.
I focused on the hammer part because I thought it was funny, and that humor would help.]
that was easy 🙂
If you do what you love, you will never have to work.
Ingress protection testing is so much fun. Immersion is just a small part of it. We have a huge variety of particles and objects to insert, and why not also use them to pry open the device?
Fire rating is another set of fun tests.
Agreed that 10 minutes is not enough.
Great catch with the notebook! Most people think of *.log when they hear about logging.
As a side note, depending on the regulatory authority, a “not suitable for …” label does not waive the obligation to comply with the standard for normal use and foreseeable abuse. In some places, “not a toy” does not free one of the regulation, because the allocation to a given category is made by the regulatory entity, not the producer. In some places this also applies to the packaging.
Destructive testing might also look at the leftovers. Is the destroyed device still safe? What kind of fumes does it give off when it melts, not only how easy is it to set on fire? Toy regulations are sensitive to these kinds of aspects. It is quite similar to checking whether crashed software leaves an opening exposed.
Lawyers would have a field day, depending on the country, if the shards are sharp, even if it took an adult with a hammer.
I can testify that Legos melt very nicely and without noticeable fumes 🙂