I was recently asked “What are the best career advancement steps for someone who is a junior to intermediate QA professional?”
If you want your career to move into programming, study programming.
If you want your career to move into management, read everything by Jerry Weinberg.
If you want to be the best tester in the world, read everything by Jerry Weinberg.
If you want to do what I do, read Weinberg.
On second thought, read Weinberg if you want to be a programmer, too.
I can say a lot more, but I’m not sure how much more I can say that is as valuable per word. Most of my work, anyway, is some variation or elaboration of Jerry’s ideas, and he in turn credits his teachers, such as Bateson, Boulding, and Satir. I think the future of the testing craft lies in the humanism and general systems approach these pioneers represent.
A lot of people I teach seem to be under pressure to create more documents to describe their test process. But documenting is not testing. It is one of the chief distractions to testing.
“James Bach hates documentation!” some people will say. No, I don’t. I hate waste. I hate boring clerical work that unnecessarily interrupts productive and interesting intellectual work. You should hate it too.
I’m a writer for cryin’ out loud. I like documentation, when it is the solution to a problem that matters, and not merely a management obsession. But if you’re trying to move to concise test documents (outlines, matrices, one-page reference sheets, and other minimalist formats) you may need help persuading management and co-workers.
Here are some ideas:
- Show management how much less testing we are able to do because we are spending so much time with documents.
- Show management how certain kinds of testing aren’t done at all just because they are hard to document (exploratory testing and complex scenario tests often fall in this category). This is perhaps the most chilling effect of over-documentation, especially in the realm of medical devices. I keep seeing medical device test documentation that is simplistic, in all its obesity, to the point of near worthlessness.
- Examine closely what testers are doing and show that they aren’t even following the documentation (often they aren’t, in my experience as a consultant who audits test processes).
- Demonstrate the power of exploratory testing (a less heavily documented approach). One day of ET is often sufficient to find what would take a week to find when following detailed documented test procedures.
- Demonstrate the value of concise test documentation (matrices, outlines).
- Consider documenting at the level of test activities rather than test cases.
- Consider automatic documentation (via log files produced by the product under test or via an external logging tool such as Spector).
- Ask the question: what exactly are we getting from our documentation? Don’t accept any theoretical answers. For example, one typical answer is that documentation protects the company from the ill effects of staff turnover. But does it? Probably not, in my experience. That’s a theory based on ignorance about how people learn. In real life, new testers learn what they need to know by playing with the product itself, and talking to the people around them. In my experience, testers come up to speed in a few days at most. And in my experience, test documentation is often of such poor quality that it’s better ignored than followed. You have to go by your own experience, of course. I’m just suggesting that you ask the questions and make your own observations.
Here are a few books about documentation that might help you make your case:
Heavy documentation is often a consequence of managers and testers who just aren’t thinking through the reasons why they do things. They hear that documentation is good, but they don’t stop to consider the cost of documentation, or watch how documentation is actually used (or more often, ignored).
A core problem with quality in our industry is lack of will.
Lack of “will work”, that is. This is because it’s much easier to tell that a product can work than that it will work. And too often it turns out that products will not work in some situations even though they can in others.
Yet, many testers, developers, and managers are recklessly confident in the will part when they’ve only observed the can part.
I often hear someone say that their smoke test suite “just checks that the basic functionality works.” But even this modest-sounding goal is impossible to achieve. You can’t derive will from can, unless you give up certainty (“it will work, and I might be wrong”), or you run every possible test (and you can’t do that).
So, the claim of “…it works” is shorthand for something more uncertain, like this:
“During the tests I performed, I looked for cases where the product did not sufficiently fulfill the requirements I was testing for, but I did not see any. Furthermore, I have performed enough of the right kind of tests to justify confidence that the product probably will fulfill those requirements in the future for other people in other cases.”
Or more simply:
“It appeared to meet [some requirement] to [some degree] while I was testing it. It’s possible that the product works.”