My vote for the World’s Most Inquisitive Tester is Shrini Kulkarni. He asks: Do you believe that “non-sapient” tests exist or for that matter – any part (seems like a very negligibly small portion of entire universe of skilled human testing) of testing be “non-sapient”?
A sapient process is any process that relies for its success on an appropriately skilled human. A sapient test would therefore be a test that relies for its success on an appropriately skilled human.
Is there such a thing as a non-sapient test? My answer is yes, depending on what you consider to be part of the test.
An appropriately skilled programmer could decide to write a clever bit of test code that automatically watches for a potentially dangerous condition and throws an exception if that condition occurs, at which point an appropriately skilled human would have to make some appropriate response. In that case, there are three interesting parts to this testing:
- the test code itself (not sapient)
- the test design process (sapient)
- the process of reacting to the test result (sapient)
All three parts together would be called testing, and that would be sapient testing.
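The three-part split above can be sketched in code. Here is a minimal sketch (all names hypothetical): the artifact below is the non-sapient part; deciding which condition counts as dangerous, and what to do when the exception fires, remain sapient work done by a human outside the code:

```python
# Non-sapient artifact: mechanical test code that watches for a
# potentially dangerous condition and raises when it occurs.
# (DangerousConditionError and watch() are hypothetical names.)

class DangerousConditionError(Exception):
    """Raised when the watched condition occurs."""

def watch(sample, limit):
    """Non-sapient check: compare a measured value against a limit.

    Choosing the limit was test design (sapient); this comparison
    is the part a machine can do.
    """
    if sample > limit:
        raise DangerousConditionError(
            f"observed {sample}, which exceeds the limit {limit}"
        )

# The artifact runs mechanically...
watch(sample=10, limit=100)   # condition not present: no exception

try:
    watch(sample=250, limit=100)
except DangerousConditionError as err:
    # ...but here an appropriately skilled human must decide what
    # the failure actually means and respond to it (sapient).
    print(f"needs human judgment: {err}")
```

Note that the code can only flag the condition; it cannot know whether the flag matters, which is exactly why the third bullet above stays sapient.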
The test artifact (what a lot of people would call the test) is not sapient. However the process of understanding it and knowing how to maintain it and when to remove it is sapient.
The process of running the test is sapient, because running it necessarily means doing the right thing if it fails (or if it passes when it wasn’t supposed to pass).
A lot of automation enthusiasts apparently think that the test code is the important part, and that test design and test execution management are but tiny little problems. I think the truth is that most automation enthusiasts just like making things go. They are not comfortable with testing sapiently, since probably no one has trained them to do that. They focus instead on what is, for them, the fun part of the problem, regardless of whether it’s the important part. What aids them in this delinquency is that their clients often have no clue what good testing is, anyway, so they are attracted to whatever is very visible, like pretty green bars and pie charts!
I experienced this firsthand when I wrote a log file analyzer and a client simulator driver for a patent management system. During the year I managed testing on that project, this tool was the only thing that caused the CEO to come into the testing group and say “cool!” Yes, he liked the pretty graphs and Star-Trekkie moving displays. While I was showing the system to my CEO, one of the testers who worked for me – a strictly sapient tester – watched quietly and then interjected a simple question: “How is this tool going to help us find any important problem we wouldn’t otherwise have found?” Being the clever and experienced test manager that I was at the time, I was able to concoct a plausible-sounding response, off-the-cuff. But in truth his question rocked me, because I had become so interested in my test tool that I had actually stopped asking myself what test strategy goal I was supposed to be serving with it.
Bless that tester for recalling me to my duty.
Morten Frank says
I normally read this blog but have never posted. I am a developer and like the test-first approach.
The reason I reply this time is that I disagree with the sentence “I think the truth is that most automation enthusiasts just like making things go.” I like automated testing because it frees up a lot of time for me to design the tests. So for me it’s not the design of the automation that’s interesting; it’s what it can do for me.
[James’ Reply: Thank you. I’m glad to hear that. Do you think most programmers are like you? If so, I’m puzzled, because I have not found it a common thing among programmers to wish to learn test design.]
Stefan Thelenius says
I think the truth is that most automation enthusiasts just like making things go.
Quite true in general, I guess… I have to be very disciplined not to work 100% with automation, because it is so fun to work with, and the response from other testers and stakeholders is great for the things that you accomplish.
However, some automation/tools in testing are really good to have, so I normally do 30% automation testing/“tool forging” and 70% “sapient” testing on a normal day. I think it is good sometimes to change your mindset a number of times a day from testing to coding and vice versa (some sort of focus/de-focus) so you keep in touch with both worlds.
Thanks for your great contribution to the software testing community – keep up the good work!
Repetitive tasks bore me. Any time I find myself performing a task more than once, I get twitchy wondering if there’s a way I can make it repeatable, scriptable, automatable … less boring, and hence more likely to be executed correctly the next time.
Seems to me that there are some aspects of testing which are somewhat repetitive. Not just rerunning a set of tests across a new version of the code, but also, given a new API, there are sets of standard edge conditions that we need to explore – does it work with nulls, negative numbers, very big numbers, empty strings, missing files, empty files … Essential tests, I guess, but somewhat dull to enumerate. Can we use tools to at least reduce this drudgery and focus our sapience where it’s needed? Would this be test automation?
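The sort of drudgery-reduction described above can at least partly be pushed into a tool. A minimal sketch (all names hypothetical, with `parse_amount` standing in for the new API): enumerating standard edge conditions is mechanical, while choosing the cases and judging each outcome stay with the tester:

```python
# Sketch of mechanizing standard edge-condition checks against a
# HYPOTHETICAL API. Only the rerunning is non-sapient.

def parse_amount(text):
    """Hypothetical API under test: parse a decimal amount string."""
    if text is None or text.strip() == "":
        raise ValueError("empty input")
    return float(text)

# The sapient part: a tester picked these conditions as worth probing.
EDGE_CASES = [None, "", "   ", "-1", "0", "99999999999999999999"]

# The non-sapient part: the loop reruns every case, tirelessly.
results = {}
for raw in EDGE_CASES:
    try:
        results[repr(raw)] = parse_amount(raw)
    except ValueError as err:
        results[repr(raw)] = f"rejected: {err}"

# A human still has to read this and decide whether each rejection
# is correct behavior or a bug worth reporting.
for case, outcome in results.items():
    print(f"{case:>24} -> {outcome}")
```

The tool removes the boredom of typing the cases again; it cannot remove the judgment about what the results mean.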
[James’ Reply: Repetition bores me, too, but I also know that I can learn from it. Each time I repeat an action there is the possibility of something new happening– something I didn’t anticipate and wouldn’t have programmed for. I have ways to overcome boredom. For instance, I vary my inputs, or I vary what I’m looking at. I add probes to my process, or run it on different platforms.
In many cases, I think repetition is a sign of poor test design. It is often variation, not repetition, which maximizes the chance of finding bugs.
Having said that, I often use tools in my testing. I use them sapiently. I adjust things, I add randomization. On whatever level I am working, I try to keep my mind fully engaged.
It is not a bad thing to use tools. It’s a bad thing to believe you can automate a sapient process, because automation changes the process into something else that cannot involve the same sort of reasoning and learning that the original process had. A sapient process doesn’t “get automated”; it gets replaced by something materially different. Have you ever courted someone, Dave? Can you imagine “automated courtship”?
If you are not reasoning and learning with your testing, I suspect you are doing it wrong.]
Brett Leonard says
This is a great post from my perspective. I manage an automation group for a large company and we use automation to process and audit complex financial transactions. We have found that the biggest challenge in our process is determining what I call “the big what” – or the scope of our testing (which transactions make the most sense to test – we call it “test definition”). The next biggest challenge is mining the data, setting up the transactions, running them manually and capturing the baseline. Once we do that it is fairly easy (within our framework) to continually run the tests in our test environments. Many times I’ve had to explain to people (usually high-level executives) how difficult the “definition” process is and how even though we don’t run thousands of tests, we put real thought into trying to run the “right transactions” – the ones that present the enterprise with the most risk. I could easily create tens of thousands of tests without thought and that might get wows from executives but I can’t betray our discipline in that way. Thanks for all your good work.
[James’ Reply: Great example! Thanks.]
Now that I have discovered that blogging is more interesting than lying here sick watching Oprah, I would question the logic of some of your observations. “A sapient process is any process that relies for its success on an appropriately skilled human. A sapient test would therefore be a test that relies for its success on an appropriately skilled human.” I am not sure where you got the idea of “appropriately skilled” from. Does this mean that my friendly developer, who is at best marginally skilled, is not of the species Homo sapiens but perhaps Australopithecus? (He is actually Australian.)
[James’ Reply: That’s not a question of logic, that’s a question of definition, and therefore of premises that precede logic. I can define the term any way I want, since I coined it. I got the idea of appropriately skilled from the notion of sapience. I’m simply saying that any process that requires a skilled human for success, such that we do not have the means or the know-how to automate it, can usefully be called by a special term, and I chose sapient.]
I would also question that any activity that you ultimately have to observe must have sapient input, i.e. you become part of the test (this also echoes some quantum mechanical stuff by Heisenberg, or Schrödinger’s moggy). Needless to say, short of the non-sapient testing that those aliens with the long fingers do when UFOs kidnap Americans from dark roads in Buttsniff, Idaho, I suspect we will not have non-sapient testing until we achieve Fredric Brown’s computer in Angels and Spaceships (Dutton, 1954)… “Is there a GOD?” “Yes, now there is a GOD.” ZAP! I am more worried that the testers who read your comments do not do an ambiguity review on the contents…
[James’ Reply: There are non-sapient processes that can be part of testing. But testing as an entire activity is necessarily sapient, in my terms. I see no ambiguity here. It’s actually very simple. Can a machine do the same thing? Then it ain’t a sapient process (unless that machine is not available to you and you are forced to use humans). Of course most people think they have automated tests when they haven’t. What they automated is something different than what the humans were actually doing.]
Genuinely Uncertified Real Tester