- Create Date April 27, 2014
- Last Updated April 21, 2019
Test Cases Are Not Testing
This is an article that Aaron Hodder and I wrote for the debut issue of Testing Trapeze.
In this article, we explain why creating and performing test cases is not the same thing as doing testing. In fact, test cases are a poor basis for organizing a test process and no basis at all for measuring testing progress. Our industry's obsession with test cases must come to an end, or testing as a field will never be worthy of respect.
Attached file: test-cases-are-not-testing.pdf
That was a very interesting read! I'm brand new to software testing and found your article while browsing r/SoftwareTesting on Reddit. I'm currently reading the textbook A Friendly Guide to Software Testing, and I've just reached the section on writing test cases. It got me thinking about the inherent problem of two or more people reading the same test case in different ways.
Do you feel the way to combat that is to have a clearer shared understanding of what the product is supposed to do and how it should behave? Would that lower the chances of a test case being misread, or is such misreading unavoidable?
I really enjoyed reading your thoughts on Test Cases and look forward to reading more articles on this site. Thank you for sharing.
[James’ Reply: The most important question is do you know how to test? After that, do you understand and accept the test strategy? After that, we can consider whether you understand a test case. But I don’t think misreading, as such, is much of an issue, unless you see a test case as a blind order sent to a blind tester. My advice is: don’t work that way. Testers should not routinely test from a position of ignorance and should not be dictated to by mysterious procedures written by shadowy people.]
Hi, this is an interesting read. You mentioned that testing is an activity of coming to know the product. If we don't write test cases, then how would we validate it? Are you suggesting more exploratory testing rather than a structured approach? I didn't really get what solution you are proposing or how to achieve it.
[James’ Reply: The answer to that question is in the article you are commenting upon. But to reiterate: validation is not a helpful word. What we do is test a product. We are looking for bugs in it. Testing is learning via experimentation. Of course this process is exploratory. If you aren’t exploring then you also aren’t testing. This is a structured process, by any reasonable definition of structured, so I don’t understand your point, there.]
Do all the arguments in this paper apply to test automation? And don't they also apply to even the best developers who don't understand testing? (Rhetorical questions; no answer needed.)
Hi. I completely agree with the content of this article. However, on the software projects I've worked on, test cases were used to provide some metric of requirement coverage and (perhaps more importantly) to document the tester's thoughts about a requirement, or the system, during testing. Our test case documents mostly contained thoughts and ideas; the test cases themselves were a minor part. I think of testing as everything that leads up to the writing of a 'final' set of test cases.
[James’ Reply: You can use anything as a metric. For instance, you can save all the contents of the system logs that Windows keeps and track the rate at which they grow. You can keep track of every change to every file on the test system. You can count the number of HTTP requests you make to the back end. You CAN do these things, but I bet you don't, because they are nearly useless as indicators of testing effort or value (although not completely useless). Similarly, using test cases as a metric is, at best, nearly useless, and at worst utterly misleading. That's why I don't use them as metrics, and why no one else should, either.
Instead of discussing what desperate and clueless people might do, let’s discuss what is helpful and reasonable. Instead of using test cases as a unit of testing, use attention over time (i.e. chartered and timed test sessions). By mapping what you have been testing, how you have been testing, and how long you have been doing so, you can tell a compelling testing story that isn’t just numbers for the sake of it.]
Without writing test cases (or something similar to record what has been tested), I don't see how the project can be confident that the tester has at least made a good stab at testing the software. Do you know of alternative ways to record test progress?
[James’ Reply: Yes. Activity-based test management. One such method is Session-Based Test Management, which I developed for that purpose 20 years ago.]
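The "attention over time" idea above can be illustrated with a small sketch. This is not part of Session-Based Test Management itself; it is a hypothetical tally, with made-up field names and session records, of how chartered, timed sessions could be rolled up into a coverage-and-effort summary instead of a test-case count:

```python
from collections import defaultdict

# Each record is one timed, chartered test session. The fields (charter,
# area, minutes, bugs) are illustrative, not an official SBTM schema.
sessions = [
    {"charter": "Explore login error handling", "area": "auth",    "minutes": 90, "bugs": 3},
    {"charter": "Stress the report export",     "area": "reports", "minutes": 60, "bugs": 1},
    {"charter": "Probe session timeout limits", "area": "auth",    "minutes": 45, "bugs": 0},
]

def attention_by_area(sessions):
    """Tally tester attention (minutes) and bugs found per product area."""
    totals = defaultdict(lambda: {"minutes": 0, "bugs": 0})
    for s in sessions:
        totals[s["area"]]["minutes"] += s["minutes"]
        totals[s["area"]]["bugs"] += s["bugs"]
    return dict(totals)

for area, t in sorted(attention_by_area(sessions).items()):
    print(f"{area}: {t['minutes']} min, {t['bugs']} bugs")
```

A summary like this answers "what did we test, and for how long?" directly, which is the kind of testing story the reply above contrasts with raw test-case counts.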