What if Software Development isn’t Golf?

Jason Gorman uses a golf analogy to talk about estimation.

I like his analogy, but he didn’t take it far enough for me. He left out a key element: we may not be playing golf.

A typical sin committed by people who do studies of schedule slippages is to discuss the average amount of time to do X or Y while only considering cases where X or Y were successfully completed. What about the cancellations? Those are ignored. Because they are ignored, the resulting averages have questionable meaning, except to say “the people who took more than X time to do this task gave up, in our experience so far.”
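To make the arithmetic concrete, here is a minimal sketch in Python, using made-up numbers (nothing here comes from any actual study), of how averaging only the completed tasks hides the abandoned ones:

    # Hypothetical task durations, in days. None marks a task that was cancelled.
    durations = [3, 5, 4, 6, None, None, 8, None]

    # The "average time to complete" is computed only over the survivors.
    completed = [d for d in durations if d is not None]
    survivor_avg = sum(completed) / len(completed)

    # A figure that at least acknowledges the abandoned work.
    cancel_rate = durations.count(None) / len(durations)

    print(f"Average over completed tasks only: {survivor_avg:.1f} days")
    print(f"Fraction of tasks never finished:  {cancel_rate:.0%}")

The 5.2-day “average” looks tidy, but it says nothing about the three tasks out of eight that were never finished at all.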

Jason says that I probably won’t have to hit the golf ball 10,000 times to get it into a par 3 hole. Well, no, but if I carry it to the hole and place it in, how many hits is that? Zero? Or is it “ERROR UNDEFINED VALUE” because I cheated? This is relevant because in software development, I frequently discover that my plans won’t work as conceived, no matter how long I work at them. Or I discover a new way to do them that cuts down the time, or increases the time. Or new requirements are put on me from a technological or human source, and that messes things up. Or it turns out that the task is mooted by some deletion of requirements.

I remember the Borland C++ 4.0 project. It went through a careful planning process. We used Gantt charts. Gantt charts, I tell you! Then Microsoft shipped C++ 7.0 with a new feature, the application wizard. Our plans fell to dust. The project was restarted. A tiger team split away to create AppExpert. And those of us who were suspicious of grandiose planning for a fast-moving world got our I Told You So moment.

The nice thing about agile development is that breaking things down into smaller chunks and planning as you go makes one-year slippages such as the one we experienced on Borland C++ 4.0 far less likely. It makes the game easier to play, and more stable.

So, I like Jason’s analogy. It’s a good teaching analogy because it illustrates that how you model a task dominates how you estimate it. But if I were teaching with it, I would ask the class “What assumptions am I making?” and I would get the class to make a list. Assumptions include that we know what game we are playing, we know the rules of the game, we will not be surprised by new rules, we have a clearly defined and obtainable goal, we don’t get sick or injured, etc., etc.

If I’m asked to make an estimate on which millions of dollars depend, then these are vitally important issues to raise publicly. If I’m just spit-balling an estimate in a low stress situation, then I will make a par-for-the-course guess and not worry when surprises happen.

The Gerrard School of Testing

Paul Gerrard believes there are irrefutable testing axioms. This is not surprising, since all axioms are by definition irrefutable. To call something an axiom is to say you will cover your ears and hum whenever someone calls that principle into question. An axiom is a fundamental assumption on which the rest of your reasoning will be based.

But they are not universal axioms for our field. They are articles of Paul’s philosophy. As such, I’m glad to see them. I wish more testing authors would put their cards on the table that way.

I think what Paul means is not that his axioms are irrefutable, but that they are necessary and sufficient as a basis for understanding what he considers to be good testing. In other words, they define his school of software testing. They are the result of many choices Paul has made that he could have made differently. For instance, he could have treated testing as an activity rather than speaking of tests as artifacts. He went with the artifact option, which is why one of his axioms speaks of test sequencing. I don’t think primarily in terms of test artifacts, so I don’t usually speak of sequencing tests. Instead, I speak of chartering test sessions and focusing test attention.

Sometimes people complain that declaring a school of testing fragments the craft. But I think the craft is already fragmented, and we should explore and understand the various philosophies that are out there. Paul’s proposed axioms seem a pretty fair representation of what I sometimes call the Chapel Hill School, since the Chapel Hill Symposium in 1972 was the organizing moment for many of those ideas, perhaps all of them. The book Program Test Methods, by Bill Hetzel, was the first book dedicated to testing. It came out of that symposium.

The Chapel Hill School is usually called “traditional testing”, but it’s important to understand that this tradition was not well established before 1972. Jerry Weinberg’s writings on testing, in his authoritative 1961 textbook on programming, presented a more flexible view. I think the Chapel Hill School has not achieved its vision; it was largely out of dissatisfaction with it that the Context-Driven school was created.

One of his axioms is “5. The Coverage Axiom: You must have a mechanism to define a target for the quantity of testing, measure progress towards that goal and assess the thoroughness in a quantifiable way.” This is not an axiom for me. I rarely quantify coverage. I think quantification that is not grounded in measurement theory is no better than using numerology or star signs to run your projects. I generally use narrative and qualitative assessment, instead.

For you context-driven hounds out there, practice your art by picking one of his axioms and showing how it is possible to have good testing, in some context, while rejecting that principle. Post your analysis as a comment to this blog, if you want.

In any social activity (as opposed to a mathematical or physical system), any attempt to say “this is what it must be” boils down to a question of values or definitions. The Context-Driven community declared our values with our seven principles. But we don’t call our principles irrefutable. We simply say here is one school of thought, and we like it better than any other, for the moment.

Lawyers are Testers, Too

In 1982, when I was still in high school, I read an article in Time Magazine about teenagers who worked as programmers. The article inspired me to quit school and go to work as a programmer, too. I’m writing about that as part of my book about self-education without self-discipline.

Anyway, one of the kids mentioned in that article was Eugene Volokh, who eventually went to law school and is now a professor at UCLA. Looking at his website, I stumbled into an article where he applies ideas from software testing to teaching law.

Check it out.

Volokh’s ideas are especially familiar to me, because Cem Kaner has often told me about how his ideas about scenario testing owe much to his legal training, where reasoning through the implications of complex hypothetical cases is a fundamental part of the curriculum.