My vote for the World’s Most Inquisitive Tester is Shrini Kulkarni. He asks: Do you believe that “non-sapient” tests exist or, for that matter, that any part of testing (seemingly a negligibly small portion of the entire universe of skilled human testing) can be “non-sapient”?
A sapient process is any process that relies for its success on an appropriately skilled human. A sapient test would therefore be a test that relies for its success on an appropriately skilled human.
Is there such a thing as a non-sapient test? My answer is yes, depending on what you consider to be part of the test.
An appropriately skilled programmer could decide to write a clever bit of test code that automatically watches for a potentially dangerous condition and throws an exception if that condition occurs, at which point an appropriately skilled human would have to make some appropriate response. In that case, there are three interesting parts to this testing:
- the test code itself (not sapient)
- the test design process (sapient)
- the process of reacting to the test result (sapient)
All three parts together would be called testing, and that would be sapient testing.
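To make the three parts concrete, here is a minimal Python sketch of the kind of watchdog the programmer above might write. The condition being watched (memory usage) and every name in it are hypothetical, chosen only to illustrate the idea; the code is the non-sapient artifact, while choosing the threshold and deciding what to do when it fires remain sapient work.

```python
class DangerousConditionError(Exception):
    """Raised when the watched condition crosses its threshold."""


def watch_memory_usage(samples_mb, threshold_mb=512):
    """Non-sapient check: raise if any sample exceeds the threshold.

    Picking the threshold (test design) and deciding what a raised
    exception means (reacting to the result) are the sapient parts.
    """
    for reading in samples_mb:
        if reading > threshold_mb:
            raise DangerousConditionError(
                f"memory usage {reading} MB exceeds {threshold_mb} MB"
            )


# The check itself runs with no human judgment involved:
try:
    watch_memory_usage([100, 200, 900])
except DangerousConditionError as err:
    # Here an appropriately skilled human must decide whether this is
    # a real problem, a bad threshold, or expected behavior.
    print(f"ALERT: {err}")
```

The exception is deliberately the only output: the artifact can detect, but only a person can interpret.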
The test artifact (what a lot of people would call the test) is not sapient. However, the process of understanding it, knowing how to maintain it, and knowing when to remove it is sapient.
The process of running the test is sapient, because running it necessarily means doing the right thing if it fails (or if it passes and wasn’t supposed to pass).
A lot of automation enthusiasts apparently think that the test code is the important part, and that test design and test execution management are but tiny little problems. I think the truth is that most automation enthusiasts just like making things go. They are not comfortable with testing sapiently, since probably no one has trained them to do that. They focus instead on what is, for them, the fun part of the problem, regardless of whether it’s the important part. What aids them in this delinquency is that their clients often have no clue what good testing is, anyway, so they are attracted to whatever is very visible, like pretty green bars and pie charts!
I experienced this firsthand when I wrote a log file analyzer and a client simulator driver for a patent management system. During the year I managed testing on that project, this tool was the only thing that caused the CEO to come into the testing group and say “cool!” Yes, he liked the pretty graphs and Star Trekkie moving displays. While I was showing the system to my CEO, one of the testers who worked for me, a strictly sapient tester, watched quietly and then interjected a simple question: “How is this tool going to help us find any important problem we wouldn’t otherwise have found?” Being the clever and experienced test manager that I was at the time, I was able to concoct a plausible-sounding response, off-the-cuff. But in truth his question rocked me, because I had become so interested in my test tool that I had actually stopped asking myself what test strategy goal I was supposed to be serving with it.
Bless that tester for recalling me to my duty.