How Michael Bolton and I Collaborate on Articles

(Someone posted a question on Quora asking how Michael and I write articles together. This is the answer I gave there.)

It begins with time. We take our time. We rarely write on a deadline, except for fun, self-imposed deadlines that we can change if we really want to. For Michael and me, the quality of our writing always dominates over any other consideration.

Next is our commitment to each other. Neither one of us can contemplate releasing an article that the other of us is not proud of and happy with. Each of us gets to “stop ship” at any time, for any reason. We develop a lot of our work through debate, and sometimes the debate gets heated. I have had many colleagues over the years who tired of my need to debate even small issues. Michael understands that. When our debating gets too hot, as it occasionally does, we know how to stop, take a break if necessary, and remember our friendship.

Then comes passion for the subject. We don’t even try to write articles about things we don’t care about. Otherwise, we couldn’t summon the energy for the debate and the study that we put into our work. Michael and I are not journalists. We don’t function like reporters talking about what other people do. You will rarely find us quoting other people in our work. We speak from our own experiences, which gives us a sort of confidence and authority that comes through in our writing.

Our review process also helps a lot. Most of the work we do is reviewed by other colleagues. For our articles, we use more reviewers. The reviewers sometimes give us annoying responses, and they generally aren’t as committed to debating as we are. But we listen to each one and do what we can to answer their concerns without sacrificing our own vision. The responses can be annoying when a reviewer reads something into our article that we didn’t put there; some assumption that may make sense according to someone else’s methodology but not for our way of thinking. But after taking some time to cool off, we usually add more to the article to build a better bridge to the reader. This is especially true when more than one reviewer has a similar concern. Ultimately, of course, pleasing people is not our mission. Our mission is to say something true, useful, important, and compassionate (in that order of priority, at least in my case). Note that “amiable” and “easy to understand” or “popular” are not on that short list of highest priorities.

As far as the mechanisms of collaboration go, it depends on who “owns” it. There are three categories of written work: my blog, Michael’s blog, and jointly authored standalone articles. For the latter, we use Google Docs until we have a good first draft. Sometimes we write simultaneously on the same paragraph; more normally we work on different parts of it. If one of us is working on it alone he might decide to re-architect the whole thing, subject, of course, to the approval of the other.

After the first full draft (our recent automation article went through 28 revisions, according to Google Docs, over 14 weeks, before we reached that point), one of us will put it into Word and format it. At some point one of us will become the “article boss” and manage most of the actual editing to get it done, while the other one reviews each draft and comments. One heuristic of reviewing we frequently use is to turn change-tracking off for the first re-read, if there have been many changes. That way, whichever of us is reviewing is less likely to object to a change purely out of attachment to the previous text, rather than because of an actual problem with the new text.

For the blogs, usually we have a conversation, then the guy who’s going to publish it on his blog writes a draft and does all the editing while getting comments from the other guy. The publishing party decides when to “ship” but will not do so over the other party’s objections.

I hope that makes it reasonably clear.

(Thanks to Michael Bolton for his review.)

Benjamin Mitchell and the Trap of False Hypocrisy

One of the puzzles of intellectual life is how to criticize something you admire without sounding like you don’t admire it. Benjamin Mitchell has given an insightful talk about social dynamics in Agile projects. You should see it. I enjoyed it, but I also felt pricked by several missed opportunities where he could have done an even deeper analysis. This post is about one example of that.

Benjamin offers an example of feedback he got about feedback he gave to a member of his team:

“Your feedback to the team member was poor because:
it did not focus on any positive actions, and
it didn’t use any examples”

Benjamin immediately noticed that this statement appears to violate itself. Obviously, it doesn’t focus on positive actions and it doesn’t use any examples. To Benjamin this demonstrates hypocrisy and a sort of incompetence and he got his reviewer (who uttered the statement) to agree with him about that. “It’s incompetent in the sense that it has a theory of effectiveness that it violates,” Benjamin says. From his tone, he clearly doesn’t see this as the product of anything sinister, but more as an indicator of how hard it is to deeply walk our talk. Let’s try harder not to be hypocrites, I think he’s saying.

Except this is not an example of hypocrisy.

In this case, the mistake lies with Benjamin, and then with the reviewer for not explaining and defending himself when challenged.

It’s worth dwelling on this because methodologists, especially serious professional ones like Benjamin and me, are partly in the business of listening to people who have trouble saying what they mean (a population that includes all of humanity), then helping them say it better. He and I need to be very very good at what social scientists call “verbal protocol analysis.” So, let’s learn from this incident.

In order to demonstrate my point, I’d like to see if you agree to two principles:

  1. Context Principle: Everything that we ever do, we do in some particular situation, and that context has a large impact on what, how, and why we do things. For instance, I’m writing this in the situation of a quiet afternoon on Orcas Island, purely by choice, and not because I’m paid or forced to write it by a shadowy client with a sinister agenda.
  2. Enoughness Principle: Anything we do that is good or bad could have been even better, or even worse. Although it makes sense to try to do good work, that comes at a cost, and therefore in practice we stop at whatever we consider to be “good enough” and not necessarily the best we can do.

Assuming you accept those principles, see what happens when I slightly reword the offending comment:

“In that situation, your feedback to the team member was poor compared to what you could easily have achieved because:
it did not focus on any positive actions, and
it didn’t use any examples”

Having added the words, what happens if Benjamin tells me that this statement doesn’t focus on positive actions and doesn’t cite an example? I reply like this:

“That’s a reasonable observation, but I think it’s out of place here. My advice pertains to giving feedback to people who feel frightened or threatened or may not have the requisite skills to comprehend the feedback or in a situation where I am not seen as a credible reviewer. And my advice pertains to situations where you want to invest in giving vivid, powerful advice– advice that teaches. However, in this case, I felt it was good enough (not perfect but good at a reasonable investment of my time) to ignore the positive (because, Benjamin, you already know you’re good, and you know that I know that you are good– so you don’t need me to give you a swig of brandy before telling you the “bad news”) and I thought that investing in careful phrasing of a vivid example might actually sound patronizing to you, because you already know what I’m talking about, man.”

In other words, with the added words (“In that situation” and “compared to what you could easily have achieved”), it becomes a little clearer that the situation of him advising his client and the situation of us advising him differ in important ways.

Imagine that Benjamin spots a misspelled word in my post. Does he need to give me an example of how to spell it? Does he need to speak about the potential benefits of good spelling? Does he need to praise my use of commas before broaching the subject of spelling? No. He just needs to point and say “that’s spelled wrong.” He can do that without being a hypocrite, don’t you think?

(Of course, if the situations are not different and the quality of the comment made to Benjamin is clearly not good enough, then it is fair to raise the issue that the feedback does not meet its own implied standard.)

Finally: I added those words, but if I’m in a community that assumes them, I don’t need to add them. They are there whether I say them or not. We don’t need to make explicit that which is already a part of our culture. Perhaps the person who offered this feedback to Benjamin was assuming that he understood that advice is situational, and that a summary form of feedback is better in this case than a lengthy ritual of finding something to praise about Benjamin and then citing at least three examples.

…unless Benjamin is a frightened student… which he isn’t. Look at him in that video. He exudes self-confidence. That man is a responsible adult. He can take a punch.

Who’s the Real Monster?

“Best practice” thinking itself causes these misunderstandings. Many people seek to memorize protocols such as “how to give feedback… always do this… step 1: always say something nice step 2: always focus on solutions not problems… etc.” instead of understanding the underlying dynamics of communication and relationships. Then when they slip and accidentally behave in an insightful and effective way instead of following their silly scripts, their friends accuse them of being hypocrites.

When the explicit parts of our procedures are at war with the tacit parts, we chronically fall into such traps.

There is a silver lining here: it’s good to be a hypocrite if you are preaching the wrong things. Watch yourself. The next time you fail in your discipline to do X, seriously consider if your discipline is actually wrong, and your “failure” is actually success of some kind.

This is why when I talk about procedures, I speak of heuristics (which are fallible) and skills (which are plastic) and context (which varies). There are no best practices.

I’m going to wrap this up with some positive feedback, because he doesn’t know me very well, yet. Benjamin, I appreciate how, in your work, you question what you are told and reflect on your own thought processes in a spirit of both humility and confidence. YOU don’t seem infected by “best practice” folklore. Thank you for that.



Programmer Pairing with a Tester

My sister, Erica, is not a programmer. Normally she’s not a tester, either. But recently she paired with me, playing a tester role, and spotted bugs while I wrote in Perl. In the process, it became clear to me that testers do not need to become programmers in order to help programmers write programs in real-time.

The Context

While recently working on the report for the Rapid Testing Intensive, I needed a usable archive of the materials. That meant taking all of the pages, comments, and attachments out of my Confluence site and putting them in a form easier to shuffle, subdivide, organize, refer to, and re-distribute. It would be great if that were a feature of Confluence, but the closest I can get to that is either manually downloading each item or downloading an entire archive and dealing with a big abstract blob of XML and cryptically named files with no extensions.

(Note to Atlassian: Please enhance Confluence to include an archivist-friendly archive function, as opposed to the current system-administrator-friendly one, that separates pages, attachments, and comments into discrete viewable units with reasonable names.)
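To make the shape of the problem concrete, here is a minimal sketch of the kind of renaming step involved, written in Python rather than the Perl we actually used. Everything here is invented for illustration: a real Confluence export would require digging the ID-to-name mapping out of its entities.xml, which this sketch assumes has already been done.

```python
# Hypothetical sketch: a Confluence site export stores each attachment as a
# bare, extensionless file named by a numeric ID. Given a mapping of those
# IDs to human-readable names, copy each blob to a name an archivist can
# actually browse. All names and paths here are invented for illustration.
import shutil
from pathlib import Path

def restore_names(blob_dir: Path, out_dir: Path, id_to_name: dict) -> list:
    """Copy each cryptically named blob into out_dir under a readable name.

    Returns the list of restored file names. Blobs with no entry in the
    mapping (e.g. system files) are skipped.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    restored = []
    for blob in sorted(blob_dir.iterdir()):
        name = id_to_name.get(blob.name)
        if name is None:
            continue  # not an attachment we catalogued
        target = out_dir / name
        shutil.copyfile(blob, target)  # keep the export untouched; copy, don't move
        restored.append(target.name)
    return restored
```

The point of the sketch is only that the export format forces this extra mapping step on the user; the script itself is trivial once the catalog exists, which is exactly the work Erica was doing in the spreadsheet.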

The Deflection

While Erica catalogued the names of all the attachments representing student work and the person or persons who created them, I was supposed to write a program to extract the corresponding material from the archive. Instead, I procrastinated. I think I checked email, but I admit it’s possible I was playing Ghost Recon or watching episode 13 of Arang and the Magistrate on Hulu. So, when she was ready with the spreadsheet, I hadn’t even started my program.

To cover my laziness, I thought I’d invite her to sit with me while I wrote it… you know, as if I had been waiting for her on purpose to show her the power of code or whatever. I expected her to decline, since like many computer power users, she has no interest in programming, and no knowledge of it.

The Surprising Outcome

She did not decline. She sat with me and we wrote the program together. She found six or seven important bugs while I typed, and many other little ones. The programming was more fun and social for me. I was more energized and focused. We followed up by writing a second, bigger program together. She told me she wants to do more of this kind of work. We both want to do more.

A Question

How does someone who knows nothing about Perl programming, and isn’t even a tester, productively find bugs almost immediately by looking at Perl code?

That’s kind of a misleading question, because that’s not what really happened. She didn’t just look at my code. She looked at my code in the context of me talking to her about what I was trying to do as I was trying to do it. The process unfolded bit by bit, and she followed the logic as it evolved. It doesn’t take any specific personal quality on the part of the “coding companion,” just general ones like alertness, curiosity, and basic symbolic intelligence. It doesn’t take any particular knowledge, although it can help a lot.

Perhaps this would not work well for all kinds of coding. We weren’t working on something that required heaps of fiddly design, or hours of doodling in a notebook to conceive of some obscure algorithm.

My Claim

A completely non-technical but eager and curious companion can help me write code in real-time by virtue of three things:

  1. The dynamic and interactive legibility of the coding process. I narrate what I’m doing as it comes together step-by-step. The companion doesn’t eat the whole elephant in one bite; the companion encounters the software mediated by my continuous process of interpretation. I tell him what and why and how. I do this repeatedly, and answer his questions along the way. This makes the process accessible (or, in the parlance I like to use, “legible,” because that word specifically means the accessibility of information). The legibility is not that of a static piece of code, sitting there, but rather a property of something that is changing within a social environment. It’s the same experience as watching a teacher fill a whiteboard with equations. If you came in at the end of the class, the board would look bewildering, but if you watched it fill up in process, it would look sensible.
  2. The conceptual simplicity of many bugs. Some bugs are truly devious and subtle, but many have a simple essence or an easily recognized form. As I fix my own bugs and narrate that process, my coding companion begins to pick up on regularities and consistency relationships that must be preserved. The companion programs himself to find bugs, as I go.
  3. The sensemaking faculties of a programmer seeking to resolve the confusion of a companion. When my dogs bark, I want to know why they are barking. I don’t know if there’s a good reason or a bad reason, but I want to resolve the mystery. In the course of doing that, I may learn something important (like “the UPS guy is here”). Similarly, when my coding companion says “I don’t understand why you put the dollar sign there and there, but not over there” my mind is directed to that situation and I need to make sense of it. It may be a bug or not a bug, but that process helps me be clear about what I’m doing, no matter what.

And Therefore…

A tester of any kind can contribute early in a development process, and become better able to test, by pairing with a programmer regardless of his own ability to code.

Exploratory Testing is not “Experienced-Based” Testing

Prabhat Nayak is yet another thinking tester recently hired by the rising Indian testing powerhouse, Moolya. Speaking of the ISTQB syllabus, he writes:

One such disagreement of mine is they have put “Exploratory Testing” on purely experienced based testing. James, correct me if I have got ET wrong (and I am always ready to be corrected if I have misunderstood something), a novice tester who has got great cognizance and sapience and can explore things better, can think of different ways the product may fail to perform per requirement can always do a great job in ET than a 5 years experienced tester who has only learned to execute a set of test cases. That is probably one of the beauties of ET. There is of course, always an advantage of having some experience but that alone doesn’t suffice ET to be put under experienced based testing.

You are quite correct Prabhat. Thank you for pointing this out.

The shadowy cabal known as the ISTQB insulates itself from debate and criticism. They make their decisions in secret (under NDA, if you can believe it!) and they don’t invite my opinion, nor anyone’s opinion who has made a dedicated study of exploratory testing. That alone would be a good reason to dismiss whatever they do or claim.

But this case is an especially sad example of incompetent analysis. Let me break it down:

What does “experience-based” mean?

When people in the technical world speak of something as “x-based,” they generally mean that it is “organized according to a model of x” or perhaps “dominated by a consideration of x.” The “x,” whatever it is, plays a special role in the method compared to its role in some other “normal” or “typical” method.

What is a normal or typical method of software testing? I’m not aware that the ISTQB explicitly takes a position on that. But by calling ET an experience-based technique, they imply that no other test technique involves the application of experience to a comparable degree. If they intended that implication, it would be a claim both remarkable and absurd. Why should any test technique not benefit from experience? Do they think that a novice tester and an experienced tester would choose the exact same tests when practicing other test techniques? Do they think there is no value to experience except when using ET? What research have they done to substantiate this opinion? I bet none.

If they have not intended this implication, then by calling ET experience-based it seems to me they are merely making impressive sounds for the sake of it. They might as well have called ET “breathing-based” on the theory that testers will have to breathe while testing, too.

Ah, but maybe there is another interpretation. They may have called ET “experienced-based” not to imply that ET is any more experience-based than other techniques, but rather as a warning that expresses their belief that the ONLY way ET can be valuable is through the personal heroism and mastery of the individual tester. In other words, what they meant to say was that ET is “personal excellence-based” testing, rather than testing whose value derives from an explicit algorithm that is objective and independent of the tester himself.

I suspect this is what’s really going on: They think the other techniques are concrete and scientific, whereas ET is somehow mystical, perhaps based on the same sort of dodgy magic that you find in Narnia or Middle-earth. They say “experience-based” to refer to a dark and scary forest that some enter but none ever return therefrom… They say “experience-based” because they have no understanding of any other basis that ET can possibly have!

Why would it be difficult for Factory School testing thinkers (of which ISTQB is a product) to understand the basis of ET?

It’s difficult for them because Factory School people, by the force of their creed, seek to minimize the role of humanness in any technical activity. They are radical mechanizers. They are looking for algorithms instead of heuristics. They want to focus on artifacts, not thoughts or feelings or activities. They need to deny the role and value of tacit knowledge and skill. Their theory of learning was state of the art in the 18th century: memorization and mimicry. Then, when they encounter ET, they look for something to memorize or mimic, and find nothing.

Those of us who study ET, when we try to share it, talk a lot about cognitive science, epistemology, and modern learning theory. We talk about the importance of practice. This sounds to the Factory Schoolers like incomprehensible new agey incantations in High Elvish. They suspect we are being deliberately obscure just to keep our clients confused and intimidated.

This is also what makes them want to call ET a technique, rather than an approach. I have, since the late nineties, characterized exploratory testing as an approach that applies to any technique. It is a mindset and set of behaviors that occur, to some degree, in ALL testing. To say “Let’s use ET, now” is technically as incoherent as saying “Let’s use knowledge, now.” You are always using knowledge, to some degree, in any work that you do. “Knowledge” is not a technique that you sometimes deploy. However, knowledge plays more of a role in some situations and less of a role in others. Knowledge is not always and equally applicable, nor is it uniformly applied even when applicable.

For the Factory Schoolers to admit that ET is endemic to all testing, to some degree, would force them to admit that their ignorance of ET is largely ignorance of testing itself! They cannot allow themselves to do that. They have invested everything in the claim that they understand testing. No, we will have to wait until those very proud and desperately self-inflated personalities retire, dry up, and blow away. The salvation of our craft will come from recruiting smart young testers into a better way of thinking about things like ET. The brain drain will eventually cause the Factory School culture to sink into the sea like a very boring version of Atlantis.

Bottom Line: Most testing benefits from experience, but no special experience is necessary to do ET

Exploratory testing is not a technique, so it doesn’t need to be categorized alongside techniques. However, a more appropriate way to characterize ET, if you want to characterize it in some way, is to call it self-managed and self-structured (as opposed to externally managed and externally structured). It is testing wherein the design part of the process and the execution part of the process are parallel and interactive.

You know what else is self-managed and self-structured? Learning how to walk and talk. Does anyone suggest that only “experienced people” should be allowed to do that?

A Nice Quote Against Confirmatory Testing

Most of the technology of “confirmatory” non-qualitative research in both the social and natural sciences is aimed at preventing discovery. When confirmatory research goes smoothly, everything comes out precisely as expected. Received theory is supported by one more example of its usefulness, and requires no change. As in everyday social life, confirmation is exactly the absence of insight.  In science, as in life, dramatic new discoveries must almost by definition be accidental (“serendipitous”). Indeed, they occur only in consequence of some mistake.

Kirk, Jerome, and Marc L. Miller. Reliability and Validity in Qualitative Research (Qualitative Research Methods). Thousand Oaks, CA: Sage Publications, 1985.

Viva exploratory methods in science! Viva exploratory methods in testing! Viva testers who study philosophy and the social sciences!

(Thank you Michael Bolton for finding this quote.)

Immaturity of Maturity Models

Maturity models (TMMi, CMM, CMMi, etc.) are a dumb idea. They are evidence of immaturity in our craft. Insecure managers sometimes cling to them as they might a treasured baby blanket. In so doing, they retard their own growth and learning.

A client of mine recently asked for some arguments against the concept of maturity models in testing. My reply makes for a nice post, so here goes…

First, here are two articles attacking the general idea of “maturity” models:

Maturity Models Have it Backwards

The Immaturity of the CMM

Here is another article that attacks one of the central tenets of most maturity models, which is the idea that there are universal “best practices”:

No Best Practices

And of course commercial tester certification, which I often ridicule on this blog, is a related phenomenon.

Here are some talking points:

1. I suggest this definition of maturity: “Maturity is the degree to which a system has realized its potential and adapted to its context.”

In support of this definition, consider these relevant snippets from the Oxford English Dictionary.

* Fullness or perfection of growth or development.
* Deliberateness of action; mature consideration, due deliberation.
* The state of being complete, perfect, or ready; fullness of development.
* The stage at which substantial growth no longer occurs.

2. I suggest this definition of maturity model: “a maturity model is a plan for achieving maturity.”

By this definition, I know of nothing that is called a “maturity model” that actually is a maturity model. This is because maturity cannot be achieved by mimicking the “look” of mature organizations. Maturity is achieved through growing and learning as you encounter and deal with natural problems.

3. Maturity is not our goal in engineering. Our goal is to achieve success, satisfaction, security, and respect through the mechanism of doing good work.

No one gains success through maturity. It is not our goal. Some businesses benefit by the appearance of maturity, but that is a matter of marketing, not engineering. And regardless of how we achieve maturity, not all maturity is desirable. A creature approaching death is also mature.

Hey, blacksmithing is a mature craft, and yet look around… where are the blacksmiths? The world has moved on. We are in a period of growth, study, and creativity in testing. No one can say what the state of the art of our craft will be in 50 years. It will evolve, and we– our minds and experiences– are the engine of that evolution.

4. The behaviors of a healthy mature organization cannot be a template for success.

We achieve maturity by learning and growing as a testing organization, not by aiming at or emulating “mature” behaviors.

Maturity is a dependent variable. We don’t manipulate our maturity directly. We simply learn and grow, wherever that takes us. As any parent knows, you cannot speed up the maturation of your children by entreating them to “be mature.” Their very immaturity is partly the means by which they will mature. Immature creatures play and experiment. Research in rats, for instance, documents severe developmental problems in rats that were prevented from playing. Rats who don’t play as juveniles are much less able to deal with unexpected situations as adults. They cannot deal effectively with stress, compared to normal rats.

There can NEVER be one ultimate form or process that we declare to be mature, UNLESS our context never changes. This is because, in engineering, we are in the business of solving problems in context. However, our context changes regularly, because people change, technology changes, and because we are continuing to experiment and innovate.

Darwin’s theory of the origin of species is also a theory of the maturation and continual re-generation of species. As he understood, maturity is always a relative matter.

5. The “maturity model” of any outsider is essentially a propaganda tool. It is a marketing tool, not an engineering tool.

Every attempt to formalize testing constitutes a claim, on some person’s part, that he knows what testing should be, and that other people cannot be trusted to know this (otherwise, why not let them decide for themselves how to test?).

I have formalized testing, myself, and that’s exactly what I am thinking when I do so. But I do not impose my view of testing on any other organization or tester, unless they work for me. My formalizations are specific to my experiences and analysis of specific situations. I offer these ideas to my clients as input to a process of ongoing study and adaptation. To make any best practice claims would be irresponsible.

6. If you want to get better try this: create an environment where learning and innovation is encouraged; institutionalize mechanisms that promote this, such as internal training, peer conferences, pilot projects, and mentoring.

As my colleague Michael Bolton likes to say “no mature person, involved in a serious matter, lets any other mature person do their thinking for them.”

Mature people take responsibility for themselves. Therefore, don’t adopt anyone else’s “maturity model” of testing. Let your own people lead you.

This is What We Do

In the Context-Driven Testing community, the testing craft is a living, growing thing. This dialog, led by my partner in Rapid Testing, Michael Bolton, is a prime example of the life among us. Read the PDF that Michael refers to, and what will you see? You see many ideas proposed and discarded. You see definitions being made, and remade. You see people struggling to make sense of subtle, yet important distinctions.

In my world, the development of testing skill goes hand-in-hand with the development of our rhetoric of describing testing. The development of personal skill is linked to the development of social skill. This is why we smirk and roll our eyes when people come to us looking for templates and other pre-fabricated answers to what they believe are simple questions. My reaction to many who come to me is “You don’t need to learn the definition of term ‘test case’. You don’t need me to tell you ‘how to create a test plan’. What you need is to learn how to test. You need to struggle with imponderables; sit with them; turn them over in your mind. You need practice, and you need to talk through your practice with other testers.”

Michael’s dialog reminds me of the book Proofs and Refutations, by Imre Lakatos, which studies the exploratory and dialectical nature of mathematics by also using dialog.

Reclaim Your Personal Method

(Since this pertains to both self-education AND technical work, I’m posting this on both of my blogs)

Randy Ingermanson has an interesting approach to writing fiction. It’s called the Snowflake Method. It looks interesting, but I won’t be following it in my work.

First, Don’t Follow
I only use my own methods. That is to say I’m happy to use anyone else’s ideas, but only if they become mine, first. I can learn from other people, but I don’t follow anyone. See the difference? The only way I can responsibly follow someone as a thinker is if they are supervising my work. For instance, when Captain Ben taught me to sail, I used his methods because he was right there to correct me. Also it was his boat, and he answered my questions and let me experiment with alternative ideas to see why they were inferior. As he trained me, his methods became my methods. I began to do them based on my sense of their logic– which means I also came to understand under what circumstances I might need to change them. That’s the difference between learning and numb indoctrination.

When Jerry Weinberg taught me the Fieldstone Method of writing, I formed my own interpretation of it, and now it’s the James Bach version of Weinberg’s Fieldstone Method. And when I teach Rapid Software Testing, my methodology ideally becomes personal to each student, morphing to their own preferences and patterns, or else they should not be using it.

“Composting” Good?
In describing the Snowflake Method, Ingermanson discusses something that he says every writer does: composting. That’s where you actually dream up the story. He writes that

“It’s an informal process and every writer does it differently. I’m going to assume that you know how to compost your story ideas and that you have already got a novel well-composted in your mind and that you’re ready to sit down and start writing that novel.”

He says how you do that is a personal creative matter.

Okay. Interesting that he says nothing about how to do that, though, since for me that’s almost all that writing is. The actual Snowflake Method, he says, kicks in after composting is done. It’s a way of progressively outlining the book so that you can write it in an organized way.

Wait, did he say that happens after composting?

AFTER composting? Seriously?

This is a problem for me, because I’m nearly always doing that thing he calls composting. For me, writing is an exploratory activity. I’m constructing my ideas before I write them down and also as I’m writing them down. I’ve written many articles and two books that way. I have not yet written much fiction, but I have a hard time believing my method will be or should be different for fiction.

“Seat of the Pants” Bad?
Here Ingermanson makes a tiresome rhetorical move: He contrasts his approach with the “seat of the pants” method. He believes his method is better. I agree that it’s probably better for him, because it’s his own personal method. But on what basis can he say that his method is better than the alternatives for anyone else? Besides, it sounds like “composting” is just “seat of the pants” that happens to be Ingermanson-approved.

This is typical best practices rhetoric, and the pattern generally goes as follows:

1. I conceive my method as figure and everything else as ground. I won’t talk about how my method blends into and is supported by any other methods or skills or talents or preferences. I won’t talk about how it may go horribly wrong. The method is an island.

2. Since I like my method better than the other not-the-method thing I once did, [cite anecdotal and cherry-picked evidence here], it is probably better.

3. Since I taught someone else to use my method and they said they like my method better than whatever unexamined way of working they once had, [did they actually use my method? Well, they said they did, but I didn’t actually watch them do it], it is even more probably better.

There are a few problems with this pattern of reasoning. One is that it is not necessarily a comparison of one method to another. It’s more likely a comparison of a state of confusion to a state of decision. Decision usually does win over confusion. The people who are out looking for a method may not have a sound understanding of the methods they already use. So they leap on any method offered as if it were a life buoy. This of course is no indication that the method itself is better than any other method, but merely that people hate feeling confused and incompetent.

Another problem is that even when it is a comparison of methods, it’s generally a comparison between an ineffable method and one that sounds good when explained. Things that are ineffable, no matter how useful, get a bad reputation. That’s why you’ve met at least one person in your life who has claimed that you need to “learn to breathe” or “remember to breathe.” In fact, you already have a method of breathing, and unless your eyes have just gone so fuzzy that you can’t read this at all, you are probably breathing pretty well right this moment. An effective way to present a method of breathing could be to say “If you are having problem X, one solution might be to try a special kind of breathing called Y. Let’s try it now so you can see what I mean…” This way offers the practice without implicitly or explicitly denying other ways of working.

Yet another problem is that all methods rest on a certain way of organizing the world. If you don’t accept that foundation, then the method won’t satisfy you. Ingermanson seems to find it easy to segment heavy creative work from the light creative work. Hence composting is good, but seat-of-the-pants writing is bad. Since I don’t accept that distinction, to use the Snowflake Method as presented would force me to become alienated from my creative process. I would not be in direct touch with my own mind, but all thoughts would be mediated through the controlling outline of the Snowflake. Ick!

A Rhetoric for Pushing Back
It’s not “seat of the pants”, I say. It’s not merely “ad hoc.”

It’s thoughtful and responsible, rather than mindless and robotic. It’s exploratory, rather than pre-scripted. It’s agile rather than rigid. It’s constructive and generative, rather than a mere conditioned response.

Want more? Try breaking the method down into sub-parts. In exploratory work, I might cite such tasks as:

  • overproduce ideas and abandon them (think “brainstorming”)
  • recover previously abandoned ideas (think “boneyard”)
  • pursue lines of inquiry
  • conduct thought experiments
  • alternate my tactics for better progress
  • dynamically manage my focus (from very focused to de-focused)
  • charter my own work in light of my mission as I understand it
  • view my work from different perspectives
  • produce results, then reproduce them differently based on what I learned (cyclic learning)
  • construct a new and better version of myself as I work

Seat of the pants? That sounds like a put-down. Why don’t they call it dynamic control and development? Because that doesn’t sound like a put-down.

Reclaim Your Personal Method
As Adam Savage says, “I reject your reality, and substitute my own.” Yes, indeedy.

You don’t have to accept someone else’s intermediating artifice between you and your thoughts. Whether that’s a book outline, a test plan document, TPI, or some method of artificial breathing, you can say no. You can say “that would be irresponsible, because I must remain attached to the source of my own methods of working. I can’t drive a car safely from the BACK SEAT!”

Having said all that, I found Randy’s Snowflake Method interesting and I think I will try it. I will meld it with my exploratory style of working, of course, and claim it for my own.

Quality is Dead #2: The Quality Creation Myth

One of the things that makes it hard to talk about quality software is that we first must overcome the dominating myth about quality, which goes like this: The quality of a product is built into it by its development team. They create quality by following disciplined engineering practices to engineer the source code so that it will fulfill the requirements of the user.

This is a myth, not a lie. It’s a simplified story that helps us make sense of our experience. Myths like this can serve a useful purpose, but we must take care not to believe in them as if they were the great and hoary truth.

Here are some of the limitations of the myth:

  1. Quality is not a thing and it is not built. To think of it as a thing is to commit the “reification fallacy” that my colleague Michael Bolton loves to hate. Instead, quality is a relationship. Excellent quality is a wonderful sort of relationship. Instead of “building” quality, it’s more coherent to say we arrange for it. Of course you are thinking “what’s the difference between arrange and build? A carpenter could be said to arrange wood into the form of a cabinet. So what?” I like the word arrange because it shifts our attention to relationships and because arrangement suggests less permanence. This is important because in technology we are obliged to work with many elements that are subject to imprecision, ambiguity and drift.
  2. A “practice” is not the whole story of how things get done. To say that we accomplish things by following “practices” or “methods” is to use a figure of speech called a synecdoche– the substitution of a part for the whole. What we call practices are the public face of a lot of shadowy behavior that we don’t normally count as part of the way we work. For instance, joking around, or eating a salad at your desk, or choosing which email to read next, and which to ignore. A social researcher examining a project in progress would look carefully at who talks to whom, how they talk and what they talk about. How is status gained or lost? How do people decide what to do next? What are the dominant beliefs about how to behave in the office? How are documents created and marketed around the team? In what ways do people on the team exert or accept control?
  3. Source code is not the product. The product is the experience that the user receives. That experience comes from the source code in conjunction with numerous other components that are outside the control and sometimes even the knowledge of product developers. It also comes from documentation and support. And that experience plays out over time on what is probably a chaotic multi-tasking computing environment.
  4. “Requirements” are not the requirements, and the “users” are not the users. I don’t know what my requirements are for any of the software I have ever used. I mean, I do know some things. But for anything I think I know, I’m aware that someone else may suggest something that is different that might please me better. Or maybe they will show me how something I thought was important is actually harmful. I don’t know my own requirements for certain. Instead, I make good guesses. Everyone tries to do that. People learn, as they see and work with products, more about what they want. Furthermore, what they want actually changes with their experiences. People change. The users you think you are targeting may not be the users you get.
  5. Fulfillment is not forever and everywhere. The state of the world drifts. A requirement fulfilled today may no longer be fulfilled tomorrow, because of a new patch to the operating system, or because a new competing product has been released. Another reason we can’t count on a requirement being fulfilled is that can does not mean will. What I see working with one data set on one computer may not work with other data on another computer.

These factors make certain conversations about quality unhelpful. For instance, I’m impatient when someone claims that unit testing or review will guarantee a great product, because unit testing and review do not account for system level effects, or transient data occurring in the field, or long chains of connected transactions, or intermittent failure of third-party components. Unit testing and review focus on source code. But source code is not the product. So they can be useful, but they are still mere heuristic devices. They provide no guarantee.
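To make that concrete, here is a small sketch of my own (a made-up example, not drawn from any real project): a unit-level check passes honestly, while a long chain of transactions quietly drifts away from the right answer.

```python
# Hypothetical example: each unit-level step looks correct in isolation,
# but a system-level effect (accumulated floating-point rounding) only
# shows up over a long chain of transactions.

def add_fee(balance, fee=0.1):
    """Add a ten-cent fee to a balance, using ordinary floats."""
    return balance + fee

# Unit test: a single call is correct to two decimal places. It passes.
assert round(add_fee(1.00), 2) == 1.10

# System-level behavior: a million chained transactions.
balance = 0.0
for _ in range(1_000_000):
    balance = add_fee(balance)

exact = 100_000.0            # 1,000,000 fees of 0.1, computed exactly
print(balance == exact)      # False: rounding drift no unit test saw
```

The unit test is not wrong; it simply cannot see the long chain.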

Once in a while, I come across a yoho who thinks that a logical specification language like “Z” is the great solution. Because then your specification can be “proven correct.” The big problem with that, of course, is that correctness in this case simply means self-consistency. It does not mean that the specification corresponds to the needs of the customer, nor that it corresponds to the product that is ultimately built.

I’m taking an expansive view of products and projects and quality, because I believe my job is to help people get what they want. Some people, mainly those who go on and on about “disciplined engineering processes” and wish to quantify quality, take a narrower view of their job. I think that’s because their overriding wish is that any problems not be “their fault” but rather YOUR fault. As in, “Hey, I followed the formal spec. If you put the wrong things in the formal spec, that’s YOUR problem, stupid.”

My Take on the Quality Story

Let me offer a more nuanced version of the quality story– still a myth, yes– but one more useful to professionals:

A product is a dynamic arrangement, like a garden that is subject to the elements. A high quality product takes skillful tending and weeding over time. Just like real gardeners, we are not all powerful or all knowing as we grow our crop. We review the conditions and the status of our product as we go. We try to anticipate problems, and we react to solve the problems that occur. We try to understand what our art can and cannot do, and we manage the expectations of our customers accordingly. We know that our product is always subject to decay, and that the tastes of our customers vary. We also know that even the most perfect crop can be spoiled later by a bad chef. Quality, to a significant degree, is out of our hands.

After many years of seeing things work and fail (or work and THEN fail), I think of quality as ephemeral. It may be good enough, at times. It may be better than good enough. But it fades; it always fades, like something natural.

Or like sculpture by Andy Goldsworthy. (Check out this video.)

This is true for all software, but the degree to which it is a problem will vary. Some systems have been built that work well over time. That is the result of excellent thinking and problem solving on the part of the development team. But I would argue it is also the result of favorable conditions in the surrounding environment. Those conditions are subject to change without notice.

Studying Jeff Atwood’s Paint Can

I just found Jeff Atwood’s Coding Horror blog. He’s an interesting writer and thinker.

One of his postings presents a good example of the subtle role of skill even in highly scripted activities. He writes about following the instructions on a paint can. His article links to an earlier article, so you might want to read both.

The article is based on a rant by Steve McConnell in his book Rapid Development about the importance of following instructions. Steve talks about how important it is to follow instructions on a paint can when you are painting.

I want to talk, here, about the danger of following instructions, and more specifically, the danger of following people who tell you to follow instructions when they are not taking responsibility for the quality of your work. The instruction-following myth is one of those cancers on our craft, like certification and best practices.

[Full Disclosure: Relations between me and McConnell are strained. In the same book, Rapid Development, in the Classic Mistakes section, Steve misrepresented my work with regard to the role of heroism in software projects. He cited an article I wrote as if it was indicative of a point of view that I do not hold. It was as if he hadn’t read the article he cited, but only looked at the title. When I brought the error to his attention, he insisted that he did indeed understand my article and that his citation was correct.]

Let’s step through some of what Jeff writes:

“But what would happen if I didn’t follow the instructions on the paint can? Here’s a list of common interior painting mistakes:

The single most common mistake in any project is failure to read and follow manufacturer’s instructions for tools and materials being used.”

  • Jeff appears to be citing a study of some kind. What is this study? Is it trustworthy? Is Jeff himself telling me something, or is Jeff channelling a discarnate entity?
  • When he says “the most common mistake” does he mean the one that most frequently is committed by everyone who uses paint? Novices? Professionals? Or is he referring to the most serious mistakes? Or is he referring to the complete set of possible mistakes that are worth mentioning?
  • Is it important for everyone to follow the instructions, or are the instructions there for unskilled people only?
  • Why is it a “mistake” not to read-and-follow instructions? Mistake is a loaded term; one of those danger words that I circle in red pencil and put a question mark next to. It may be a mistake not to follow certain instructions in a certain context. On the other hand, it may be a mistake to follow them.

Consider all the instructions you encounter and do not read. Consider the software you install without reading the “quickstart” guide. Consider the clickwrap licenses you don’t read, or the rental cars you drive without ever consulting the driver’s manual in states where you have not studied the local driving laws. Consider the doors marked push that you pull upon. Consider the shampoo bottle that says “wash, rinse, repeat.” Well, I have news for the people who make Prell: I don’t repeat. Did you hear me? I don’t repeat.

I would have to say that most instructions I come across are unimportant and some are harmful. Most instructions I get about software development process, I would say, would be harmful if I believed them and followed them. Most software process instructions I encounter are fairy tales, both in the sense of being made up and in the sense of being cartoonish. Some things that look like instructions, such as “do not try this at home” or “take out the safety card and follow along,” are not properly instructions at all, they are really just ritual phrases uttered to dispel the evil spirits of legal liability. Other things that really are instructions are too vague to follow, such as “use common sense” or “be creative” or “follow the instructions.”

There are, of course, instructions I could cite that have been helpful to me. I saw a sign over a copy room that said “Do not use three hole paper in this copy machine… unless you want it to jam.” and one next to it that said “Do not use the Microwave oven while making copies… unless you want the fuse to blow.” I often find instructions useful when putting furniture together; and I find signs at airports generally useful, even though I have occasionally been steered wrong.

Instructions can be useful, or useless, or something in between. Therefore, I propose that we each develop a skill: the skill of knowing when, where, why and how to follow instructions in specific contexts. Also, let’s develop the skill of giving instructions.

Jeff goes on to write:

“In regard to painting, the most common mistakes are:

* Not preparing a clean, sanded, and primed (if needed) surface.
* Failure to mix the paints properly.
* Applying too much paint to the applicator.
* Using water-logged applicators.
* Not solving dampness problems in the walls or ceilings.
* Not roughing up enamel paint before painting over it.”

Again with the “most common.” Says who? I can’t believe that the DuPont company is hiding in the bushes watching everybody use paint. How do they know what the most common mistakes are?

My colleague Michael Bolton suggested that the most common mistake is “getting the paint on the wrong things.” Personally, I suspect that the truly most common mistake is to try to paint something, but you won’t see THAT on the side of a paint can. As I write this, my bathroom is being repainted. Notice that I am writing and someone else is painting. Someone, I bet, who knows more about painting than I do. I have not committed the mistake of trying to paint my own bathroom, nor of attempting to read paint can instructions. Can I prove that is the most common mistake? Of course not. But notice that the rhetoric of following instructions is different if you draw a different set of lines around the cost/value equation represented by the word “mistake.”

Also, not knowing much about painting, I don’t understand these “mistakes.” For instance:

  • What is a clean surface? How do I sand it? What does “primed” mean and how do I know if that is needed?
  • How do I mix paints? Why would I even need to mix them? What paints should I mix?
  • What is the applicator and how do I apply paint to it? How much is enough?
  • What is a “water-logged” applicator? How does it get water-logged? Is there a “just enough” level of water?
  • How does one recognize and solve a “dampness problem”?
  • I assume that “roughing up” enamel paint means something other than trying to intimidate it. I assume it means sanding it somehow? Am I right? If so, how rough does it need to be and how do I recognize the appropriate level of roughness?

I am not kidding, I really don’t know this stuff.

Then Jeff writes:

“What I find particularly interesting is that none of the mistakes on this checklist have anything to do with my skill as a painter.”

I think what Jeff meant to say is that they have nothing to do with what he recognizes as his skill as a painter. I would recognize these mistakes, assuming for the moment that they are mistakes, as being strongly related to his painting skill. Perhaps since I don’t have any painting skill, it’s easier for me to see it than for him. Or maybe he means something different by the idea of skill than I do. (I think skill is an improvable ability to do something.) Either way, there’s nothing slam-dunk obvious about his point. I don’t see how it can be just a matter of “read the instructions, stupid.”

Jeff writes:

“My technical proficiency (or lack thereof) as a painter doesn’t even register!”

Wait a minute Jeff, think about this. What does have to do with your proficiency as a painter? You must have something in mind. If proficiency is a meaningful idea, then you must believe there is a detectable difference between having proficiency and not having proficiency, and it must go beyond this list of issues. Rather than concluding that your skill doesn’t enter into it, perhaps one could look at the same list of issues and interpret it as a list of things unskilled people frequently do when they try to paint things that often leads them to regret the results. It’s a warning for the unskilled, not a message for skilled painters. A skilled painter might actually want to do these things, for instance, to paint with a water-logged applicator to get some particular artistic effect.

Jeff writes:

“To guarantee a reasonable level of quality, you don’t have to spend weeks practicing your painting skills. You don’t even have to be a good painter. All you have to do is follow the instructions on the paint can!”

Now I have logic vertigo. How did we get from avoiding obvious mistakes, where we started, to “reasonable quality”? Would a professional house painter agree that there is no skill required to achieve reasonable quality? Would a professional artist say that?

(And what is reasonable quality?)

Even following simple instructions requires skill and tradeoff decisions. A paint can is neither a supervisor, nor a mentor, nor a judge of quality. Instead of following instructions, I suggest applying this heuristic: instructions might help.

And one more thing… Does anyone else find it ironic that Jeff’s article about reading instructions on paint cans would include a photo of a paint can where the instructions have been partly obscured by paint? Perhaps the first instruction should be “Check that you see this sentence. If not, please wait for legible instructions.”

Defining Agile Methodology

Brian Marick has offered a definition of agile methodology. I think his definition is strangely bulky and narrow. That’s because it’s not really a definition, but an example.

Those of us who’ve worked with Brian know that he doesn’t like to talk about definitions. He’d rather deal with rich examples and descriptions than labels. He worries that labels such as “Agile” and “Version Control” can easily become empty talismans that can turn a conversation about practice into a ritualized exchange of passwords. “Oh, he said Agile, that must mean he’s one of us.” I admire how Brian tries to focus on practice rather than labelling.

Where Brian and I part ways is that I don’t think we have a choice about labels and their definitions. When we decline to discuss definitions we are not avoiding politics, we are simply driving the politics underground, where it remains insidious and unregulated. To discuss definitions is to discuss the terms by which your community governs itself, so that we do not inadvertently undercut each other.

Here’s an example of how postponing a conversation about definitions can bite you. A few years ago, at the Agile Fusion peer conference I hosted at my lab in Virginia, Brian and I got into a heated debate about the meaning of the word “agile”. He said he was completely uninterested in the dictionary definition. He was interested only in how he felt the word was used by a certain group of people– which group, it turned out, did not include me, Cem Kaner, or very many of my colleagues who can legitimately claim to have been working with agile methodologies since the mid-eighties (or in one case, mid-sixties). Perhaps because of Brian’s reluctance to discuss definitions, our disagreement came up out of the blue. I don’t know if it surprised Brian, but it shocked me to discover that he and I were operating by profoundly different assumptions about agile methodology.

Actually, I have had many clashes with people who claim to own the word agile. It’s not just Brian. But not everyone in the capital “A” camp is like that. Ward Cunningham is a great example. Find Ward. Meet him. Talk to him. He gives agile methodology a good name. I have had similar positive experiences with Alistair Cockburn and Martin Fowler.

There are at least two agile software development communities, then. My community practices agile development in an open-ended way. We support the Agile Manifesto (in fact, I was invited to the meeting where the manifesto was created, but could not attend). However:

  1. We do not divide the world into programmers and customers.
  2. We do not demand that everyone on the project be a generalist, and then define generalist to be just another word for someone who remains ignorant of all skills other than programming skills.
  3. We believe there can be different roles on the team, including, for instance, the role of tester; and that people performing a role ought to develop skill in that art.
  4. We don’t limit our practices to fit guru-approved slogans such as “YAGNI” and “100% automated testing”, but instead use our skills to match our practices to our context.
  5. We don’t accuse people who question practices of “going meta” as if that is a sin instead of ordinary responsible behavior.
  6. We aren’t a personality cult. (If you ever hear someone justify a practice by saying “because James Bach said so,” please email me so I can put a stop to it. I like being respected; I hate being a blunt object for ending a debate.)
  7. We don’t talk as if software engineering was invented in 1998.
  8. We question. We criticize. We learn. We change. We are agile.
  9. When we make definitions, we strive to be inclusive and try not to redefine ordinary English words such as “pattern” or “agile”. Specifically, we probably won’t say you just don’t “get it” if you cite the dictionary instead of using approved gurucabulary. GURUCABULARY: (noun) idiosyncratic vocabulary, often a redefinition of preexisting words, asserted by one thinker or thinkers as a way of establishing a proprietary claim on a field of interest.

I want to offer an alternative definition for use outside of the insular world of capital “A” agilists.

First, here is the Webster’s definition of agile:

Etymology: Middle French, from Latin agilis, from agere to drive, act, more at AGENT

1 : marked by ready ability to move with quick easy grace *an agile dancer*
2 : having a quick resourceful and adaptable character *an agile mind*

Here, then is my definition of agile methodology:

agile methodology: a system of methods designed to minimize the cost of change, especially in a context where important facts emerge late in a project, or where we are obliged to adapt to important uncontrolled factors.

A non-agile methodology, by comparison, is one that seeks to achieve efficiency by anticipating, controlling, or eliminating variables, so as to eliminate the need for change and its associated costs.

Brian Marick’s definition of agile methodology is an example of how one community approaches what I would call agile methodology. My definition is intended to be less imperialistic and more pluralistic. I want to encourage more of us to explore the implications of agility, without having to accept the capital “A” belief system whole.

Fight the power!

“Use Version Control”

Darrell Norton says that “version control” is a best practice. I disagree with him about that, but his blog posting gives me an opportunity to show how context-driven reasoning works.

Darrell writes:

“If you’re looking for a No Best Practice rant like James Bach, then you won’t find it here. Instead, here I will propose the one true best practice I’ve found, one which I have not been able to change it from being a best practice no matter how much of the situation I change:

Use version control”

I think that’s the whole message. I was not able to find any more to it. Okay, let’s look at this critically. I wrote a blog entry about no best practices. Questions that immediately occur to me are the following:

  1. What is the practice being proposed as best?
  2. From what field of candidates has it emerged as the best?
  3. By what sorting process has it been determined to be best?
  4. In what contexts does that sorting process apply?

These questions are not answered in his message, nor does he point to any resource by which we may answer them. I’m concerned about that, because while Darrell may not be a pointy-haired boss who bullies people with nonsensical process-speak, some of that pointy hair in the world does read blogs. Such people latch onto things they don’t understand and force feed them to people who would otherwise be free to do their work in peace.

I am willing to believe that Darrell has some idea in his mind to which he has applied the label “version control”, and he has some notion of what it means to “use” it. I suspect that, to him, this practice seems obvious, and the contexts in which it is worth doing seem obvious, too.

But whatever it is Darrell has in mind is not obvious to me, and I’ve been in this business a long time, both as a programmer and in other roles. Darrell has not explained what he means by version control. He has essentially flashed us a briefcase marked “version control”. I wonder what is in that case?

I’ve used something I would also call “version control”, but it isn’t any one thing, it’s many things. One form of version control I’ve used might be called “your turn, my turn”. In YTMT version control, you work on something for a period of time, then you email it to someone else to work on. You don’t touch it until he emails it back to you. Tada! Version control. Is that what Darrell meant? Is that “best”?

I’ve also used Visual SourceSafe. Some would say that’s a terrible tool, others, I suppose, swear by it. Either way, SourceSafe does not solve all the problems of controlling versions. There must be more to the practice of version control than buying a tool and installing it.

I can list many more humble practices that comprise version control for me, and each one involves a choice, not only about what to control, but also about what need not be controlled. Version control goes hand-in-hand with version un-control (e.g. I might change the version number of the document occasionally, yet not keep track of each individual change, and that might be good enough version control).
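One such humble practice can be sketched in a few lines. This is my own illustration, not anything Darrell described, and the file name is hypothetical: keep a dated snapshot of a document before each editing session, and don’t track anything finer than that.

```python
# A sketch of one humble version-control practice: before editing a file,
# copy it into an archive/ folder stamped with today's date. No tool, no
# server -- and no record of individual changes, which may or may not be
# good enough version control, depending on context.
import shutil
from datetime import date
from pathlib import Path

def snapshot(path):
    """Copy `path` into archive/, with today's date in the file name."""
    source = Path(path)
    archive = Path("archive")
    archive.mkdir(exist_ok=True)
    target = archive / f"{source.stem}-{date.today()}{source.suffix}"
    shutil.copy2(source, target)
    return target

# Usage: snapshot("report.txt") before you start editing; recovering an
# old version is just copying the snapshot back over the working file.
```

Notice how many choices are hiding even in this toy: what to archive, when, under what name, and what never gets recorded at all.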

In many contexts, such as developing complex production software, version control deserves attention. It does not deserve to be patronized by the phrase “best practice.” I think, in his effort to elevate it, Darrell has inadvertently degraded it to a slogan. He has elevated an empty husk.

Fortunately, he can fix it. All he has to do is tell a story about a time before he practiced “version control” and some problems he suffered. Then he should share how a specific practice of version control (he must describe this enough so that we can get a mental picture of it) seemed to solve those problems without creating too many new ones. We, the readers, would then decide for ourselves what his experience has to do with our own contexts.

While I deny that there can be an unqualified best practice, I have personally experienced benefit from hearing stories about how other people have solved specific problems by employing some practice or another. I think we need more stories, and fewer slogans. Please help.

Therefore, in the spirit of self-referential humor, I say to you “No Best Practices!”

Stuart Reid Says “It’s Better Than Nothing”

I was watching Dr. Stuart Reid talk about model-based testing, some months ago. During the presentation, he complained that so few people used UML for model-making. Why don’t more people use UML, he asked the audience?

I suppose his question was rhetorical, but I couldn’t help myself. I called out “Because it’s clunky!”

“It’s better than nothing,” he replied.

Think about that. Think about that phrase. A comparison is offered between something and nothing. Who could prefer nothing? Nothing is a void. It’s scary and black. It makes us think about death. Ick. But wait a minute, the comparison that matters is not between something and nothing, it’s probably between something and some other thing.

An affliction of people who promote best practices is aversion to alternatives; a passion for monoculture. A desire to have one and only one solution, instead of a spectrum of solutions, to the problems that beset us.

As soon as Stuart said “[constructing state models using the UML formalism] is better than [NOT constructing state models using the UML formalism]” I immediately thought of an alternative which I suppose occupies the “nothing” category in his mind: an alternative to making models in UML is to make models in our heads.

This is not nothing. Our brains are amazing instruments. Don’t pretend that your brain isn’t there or that it does nothing important. Come on, Stuart. I do lots and lots and LOTS of modeling in my head. So do you. Only a tiny portion of that will we ever commit to any kind of record-keeping system.

Another alternative: Pencil and paper. I make little boxes and arrows on paper. It works. Try it.

Yet another alternative: source code. Computer software source code is a modeling language. When I write my software, I am encoding my state models.

What Stuart may have wanted to say is that he believes things would go better in projects if programmers chose to create visual models of software in a specific formal system called UML (to which he may believe there is no viable alternative formal system of visual modeling) prior to writing code, instead of making the leap directly from mental modeling to coding.

It would have been interesting to hear him say that. It’s a debatable sentiment, but at least it’s a way of speaking that doesn’t shut down dialog before it even gets going.

Personally, I find it absurd that a few people would invent an awkward, overly formalized system of making pictures and then hold in contempt the great majority of productive working programmers who choose to ignore it. Remember, just because a consultant wants to take your money doesn’t mean you have to give it to him.

— James

No Best Practices

Dear Reader,

I would like you to give up, henceforth, the idea of “best practice.” Thank you.

I want to stamp out “best practice” for several reasons:

  1. There are no best practices. By this I mean there is no practice that is better than all other possible practices, regardless of the context. In other words, no matter what the practice and how valuable it may be in one context, I can destroy it by altering things about the situation surrounding the practice.
  2. Although some things that don’t exist can be useful as rhetorical signposts, “best practice” is not one of them. There is nothing honorable you get from pretending that a practice is best that you can’t also get from suggesting that a practice may be good in a specified context, and making a case for that. Sure, there are dishonorable things you can get from “best practice” rhetoric– say, causing a dull-witted client to give you money. If that has tempted you in the past, I urge you to reform.
  3. It has a chilling effect on our progress as an intellectual craft when people pretend that a best practice exists. Best practice blather becomes a substitute for the more difficult, less glamorous, but ultimately more powerful idea of learning how to do your job. By “learning” I mean practicing the skills of identifying and solving patterns of problems we encounter as testers. Of course, that will involve patterns of behavior that we call “practices”. I’m not against the idea of practices, I’m against pretense. Only through pretense can a practice that is interesting in a particular context become a “best practice” to which we all must bow down.
  4. When you say that something is a “best practice”, you may impress the uninitiated, or intimidate the inexperienced, but you just look foolish to people who believe in the possibility of excellence. Excellence in an intellectual craft simply cannot be attained by ignorantly copying what other people say that they do. Yet, the notion of a best practice is really just an invitation to be an ignorant pawn in someone else’s game of process manners– or it’s a trick to make people pawns in your own game.
  5. Simple, honest alternatives are available:
    • “Here’s what I would recommend for this situation.”
    • “Here is a practice I find interesting.”
    • “Here is my favorite practice for dealing with {x}.”
    • “{Person X} attributes {practice Y} for his success. Maybe you’d like to learn about it.”

    These alternatives have their own problems, but less so than does the Oompa Loompa moralizing that lies behind “best practice.” I’m not trying to stamp out all informal communication, I’m just trying to discourage you, dear reader, from making irresponsible recommendations. Saying “best practice” is obviously irresponsible. Recommending a practice more narrowly may also be irresponsible, depending on the situation, but let’s not worry about that right now.

At this point, if you are in the habit of talking about practices that you think everyone should follow, then you must be pretty annoyed with me. You will be annoyed as well if you pin your hope for project success on finding Holy Grail practices that will endow you with grace and prosperity. You may be thinking of certain practices that you think must be best, and getting ready to throw them at me like Pokemon warriors from your collection.

And if you are one of those people who promote a “testing maturity model” then you must be apoplectic. A maturity model is basically a gang of best practices hooked on crystal meth. In my maturity model of the industry, promoting a maturity model is mere level 2 behavior. By level 3, we outgrow it.

So, first let me apologize for raining on your bubble. I have sworn to speak the truth as I see it, but I don’t enjoy this. It doesn’t make me rich, I’ll tell you that. The way to get rich in this world is mainly by making people feel large hope about a small exertion (i.e. “six-second abs”, lottery tickets, voting in an election, maturity models, and stuff like that). If you want to get rich, do not tie yourself to the truth.

Go ahead and follow your favorite practices. Just don’t preach that the rest of us must follow them, too. Keep your process standards to yourself. If you want to make a suggestion, make one that takes context into account.

Here are some common replies to my argument, and my answers:

  • “I don’t mean ‘best practice’ literally. I’m just suggesting that this is a damn good practice.” If you want to say it’s a damn good practice, then it’s just empty hyperbole to call it the BEST practice, right? But the truth is that your practice, whatever it is, is not a damn good practice. The most it can ever be is a damn good practice *within a certain context*. Outside of that context it is something else. So, please define that context before you prescribe it to me. A doctor who prescribes a drug without examining the patient is committing malpractice. Isn’t that obvious? Would you respect such a doctor? Why should we respect people who prescribe practices without regard for the problems faced in a project? The answer comes back, “What if the doctor tells you to eat right and exercise?” And I reply “Even that is irresponsible unless that doctor has seen me. Besides, it’s terribly vague. Ya call that best practice? It’s vacuous blah blah.”
  • “When I said this practice is a ‘best practice’ I meant it was best for a certain context. But pretty much everyone in the industry shares that context, so what’s the point of talking about context?” Often when someone makes this argument to me, I like to learn something about the context they think is so universal, then talk about a particular company or industry segment that doesn’t share that context. This is not difficult. But even if, for the sake of argument, the context were universal, why call it “best”? Maybe this would make sense if there were a world championship competition for practices. If every conceivable practice were brought out and tested against every other one, in some specified context, then maybe there could be some truth to the “best practice” story. Of course, this hasn’t happened, and it can’t happen.
  • “This practice represents a consensus among industry leaders.” No it doesn’t. You’re just making that up. Industry leaders aren’t able to agree on anything much of substance. I know this because I am an industry leader (based on the fact that some people have said so), and other leaders more often disagree with me instead of obeying my commands. I am influential within a certain clique. The software testing industry is made up of many such tiny cliques, each with its own terminology and values. Whenever people tell you that there’s a consensus in the industry, what they probably mean is that there’s an apparent consensus among those people in the industry whom they happen to respect. The best practice for dealing with discordant voices is: just ignore them. (Exercise for the reader: Can you spot the sentence above where I irresponsibly declared a “best practice”? What do you think I really meant by that?)
  • “Lighten up. It’s just an expression.” It’s an expression that cheapens our craft. Please stop using it so that we can build a better craft.

A valued colleague of mine recently sent me a challenge.

“Would you allow that this is a best practice — ‘Seek to understand your problems before selecting solutions for them’?”

Here was my reply:

Your advice is generally good, but it is not a universal best practice.

Here’s my analysis.

  1. Do I know what the practice actually is? The practice you specified is pretty vague. What specifically do I do? How do I tell if I’m doing it correctly? It kind of reminds me of this advice: don’t get sick. I would like to follow that advice, but I’m not necessarily able to. I do know to avoid certain behaviors that will probably make me sick; those are the practices. “Remain healthy” or “Understand problems” are goals, not practices.
  2. Does the advice involve a meaningful distinction among possible behaviors? The more you try to specify something that is always best, the more you must avoid making distinctions that might run afoul of context-specific variables. For instance, here’s a potential best practice: Whatever you choose to do that is supposed to have an impact, make sure you do it in this universe, rather than in other universes that might exist in parallel with ours– otherwise what you do won’t have any impact. See? Although I am not able, in the sixty seconds in which I tried, to find an exception to this practice, the fact that there are no exceptions just makes it seem a best practice that isn’t worth discussing. If I have no option but to follow a “practice”, it’s not really a practice, it’s just life. So, if I want to solve a problem, do I have the option of solving it without having understood it? My answer to that, on reflection, is yes. I believe, for some kinds of problems, I can solve them without understanding them. For instance, by staying pretty much in my hotel room when I am in a strange land, I know I must be preempting lots of problems (and therefore in a sense solving them) that might otherwise befall me, even though I can hardly calculate or catalog them, and certainly can’t say that I understand each one. I can think of other examples where there seems to be value in solving a problem that I don’t understand. Here’s one: I saw a news story about something called Dithranol, which has been used to cure psoriasis for a hundred years. Scientists recently announced that yup, it really does cure psoriasis. Up until now they weren’t sure. In other words, somebody apparently solved a problem that they didn’t understand, and that seemed to be okay. Is it possible for me to understand a problem, solve it, and then suffer in some way because I understood the problem?
I suspect that some solutions are socially acceptable only because the person doing the solving didn’t know what they were doing at the time. For instance, in California it is illegal for a direct entry midwife to help a woman go through labor, but it is not illegal for a woman to deliver a baby without help, or for her husband or partner to help her deliver. That’s why, officially, I am listed as having delivered my son, even though I paid an illegal crew of midwives to help me. (They were great, BTW.) Now that I’ve identified alternatives, I can ask, is it possible that the alternatives are better than the proposed “best practice”?
  3. Is there a way to measure the quality of the practice? Without a scale and a measuring instrument, the concept of “best” has no meaning. Therefore, I’d like to know how to measure the value of not understanding a problem before I solve it. I don’t have much of a method for doing that. I’m kind of scratching my head about how to assess the value of understanding a problem, versus the value of, say, avoiding understanding. According to dead poet and methodologist Thomas Gray “Where ignorance is bliss, ’tis folly to be wise.” Maybe his is a useful sentiment.
  4. Is there any conceivable situation, then, when I would rationally make the choice to behave in a way opposite to the practice suggested to be “best”? I can think of a few possibilities:
    • If I am trying to score very high on a multiple choice test under time pressure, and I don’t understand one or more of the questions, it may be a better strategy not to interrupt the testing process and seek to understand each question prior to giving the answer. It may be better to give the best answer I can, based on heuristics or cues that come from the structure of the question or the structure of the answers.
    • If I am being tested on my ability to spontaneously draw correct conclusions, under time pressure, the whole point of the test may be for me to select a solution before I consciously understand it.
    • If I am watching a magic show, it would be an insult to the magician if I tried to figure out the trick using all my faculties of understanding the problem. The purpose is to be entertained. Trying to understand too much about the “problem” represented by each trick might spoil the fun.
    • If I am a novice under the supervision of a master, and the master has specified how I am to select solutions in a particular domain. Even though I may not understand the problem, I still may need to follow the instructions as to the selection of solutions. The key thing, here, is that I am not taking responsibility for the quality of the work, only the quality of my following of instructions.

    Do you know what’s behind all this? I think it’s simply that so few of us know how to do our jobs. Like puffer fish, many of us feel that we need to huff ourselves up so that predators won’t devour us. We fluff ourselves full of impressive words. But when I do that I just feel empty.

Test Messy with Microbehaviors

James Lyndsay sent me a little Flash app once that was written to be a testing brainteaser. He challenged me to test it and I had great fun. I found a few bugs, and have since used it in my testing class. “More, more!” I told him. So, he recently sent me a new version of that app. But get this: he fixed the bugs in it.

In a testing class, a product that has known bugs in it makes a much better working example than a product that has only unknown bugs. The imperfections are part of its value, so that testing students have something to find, and the instructor has something to talk about if they fail to find them.

So, Lyndsay’s new version is not, for me, an improvement.

This has a lot to do with a syndrome in test automation: automation is too clean. Now, unit tests can be very clean, and there’s no sin in that. Simple tests that do a few things exactly the same way every time can have value. They can serve the purposes of change detection during refactoring. No, I’m talking about system-level, industrial-strength, please-find-bugs-fast test automation.

It’s too clean.

It’s been oversimplified, filed down, normalized. In short, the microbehaviors have been removed.

The testing done by a human user interacting in real time is messy. I use a web site, and I press the “back” button occasionally. I mis-type things. I click on the wrong link and try to find my way back. I open additional windows, then minimize them and forget them. I stop in the middle of something and go to lunch, letting my session expire. I do some of this on purpose, but a lot of it is by accident. My very infirmity is a test tool.

I call the consequences of my human infirmity “microbehaviors”, those little tics and skips and idiosyncrasies that will be different in the behavior of any two people using a product even if they are trying to do the same exact things.

Test automation can have microbehavior, too, I suppose. It would come from subtle differences in timing and memory use due to other processes running on the computer, interactions with peripherals, or network latency. But nothing like the gross variations inherent in human interaction, such as:

  • Variations in the order of apparently order-independent actions, such as selecting several check boxes before clicking OK on a dialog box. (But maybe there is some kind of order dependence or timing relationship that isn’t apparent to the user.)
  • The exact path of the mouse, which triggers mouse over events.
  • The exact timing and sequence of keyboard input, which occurs in patterns that change relative to the typing skill and physical state of the user.
  • Entering then erasing data.
  • Doing something, then undoing it.
  • Navigating the UI without “doing” anything other than viewing windows and objects. Most users assume this does not at all affect the state of an application.
  • Clicking on the wrong link or button, then backing out.
  • Leaving an application sitting in any state for hours on end. (My son leaves his video games sitting for days; I hope they are tested that way.)
  • Experiencing error messages, dismissing them (or not dismissing them) and trying the same thing again (or something different).
  • Navigating with the keyboard instead of the mouse, or vice versa.
  • Losing track of the application, assuming it is closed, then opening another instance of it.
  • Selecting the help links or the customer service links before returning to complete an activity.
  • Changing browser or O/S configuration settings in the middle of an operation.
  • Dropping things on the keyboard by accident.
  • Inadvertently going into hibernation mode while using the product, because the batteries ran out on the laptop.
  • Losing network contact at the coffee shop. Regaining it. Losing it again…
  • Accidentally double-clicking instead of single-clicking.
  • Pressing enter too many times.
  • Running other applications at the same time, such as anti-virus scanners, that may pop up over the application under test and take focus.

What makes a microbehavior truly micro is that it’s not supposed to make a difference, or that the difference it makes is easily recoverable. That’s why they are so often left out of automated tests. They are optimized away as irrelevant. And yet part of the point of testing is to challenge ideas about what might be relevant.

In a study done at Florida Tech, Pat McGee discovered that automated regression tests for one very complex product found more problems when the order of the tests was varied. Everything else was kept exactly the same. And, anecdotally, every tester with a little experience can probably cite a case where some inadvertent motion or apparently irrelevant variation uncovered a bug.
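One cheap way to exploit that kind of finding is to randomize the order in which a suite runs its tests, while logging the seed so any failing order can be replayed exactly. Here is a minimal sketch in Python; the runner and test names are hypothetical, not from any particular framework:

```python
import random

def run_suite(tests, seed=None):
    """Run the given test functions in a randomized order.

    Reports the seed used, so that a surprising failure can be
    reproduced by re-running with the same seed.
    """
    if seed is None:
        seed = random.randrange(2**32)  # pick a fresh order each run
    rng = random.Random(seed)           # private RNG; no global state
    order = list(tests)
    rng.shuffle(order)
    print(f"test order seed: {seed}")
    for test in order:
        test()                          # a real runner would catch and report failures
    return seed, [t.__name__ for t in order]
```

Re-running with the seed printed by a failing run repeats the exact same ordering, which is what makes the variation useful rather than merely chaotic.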

Even a test suite with hundreds of simple procedural scripts in it cannot hope to flush out all, or probably even most, of the bugs that matter in any complex product. Well, you could hope, but your hope would be naive.

So, that’s why I strive to put microbehaviors into my automation. Among the simplest measures is to vary timing and ordering of actions. I also inject idempotent actions (meaning that they end in the same apparent state they started with) on a random basis. These measures are usually very cheap to implement, and I believe they greatly improve my chances of finding certain state-related or timing-related bugs, as well as bugs in exception handling code.
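The injection idea above can be sketched in a few lines. This is not any particular tool of mine, just an illustration of the shape of it: a wrapper that, before each scripted action, sometimes interleaves a randomly chosen “noise” action that is supposed to leave the application state unchanged. The action and noise functions here are hypothetical stand-ins for real driver calls:

```python
import random

class MicrobehaviorInjector:
    """Interleave scripted test actions with random idempotent noise.

    noise_actions: functions that should leave the app state as they
    found it (e.g. refresh, open-and-close help, undo/redo a keystroke).
    rate: probability of injecting one noise action before each step.
    seed: fixed for reproducibility, as with test-order randomization.
    """
    def __init__(self, noise_actions, rate=0.3, seed=None):
        self.rng = random.Random(seed)
        self.noise_actions = noise_actions
        self.rate = rate
        self.log = []  # record of everything done, for diagnosis

    def do(self, action):
        # Maybe jostle the product first...
        if self.noise_actions and self.rng.random() < self.rate:
            noise = self.rng.choice(self.noise_actions)
            self.log.append(("noise", noise.__name__))
            noise()
        # ...then perform the scripted step as usual.
        self.log.append(("action", action.__name__))
        action()
```

The log matters: if a noise action exposes a state bug, you need to know which idempotent-in-theory behavior turned out not to be idempotent in practice.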

What about those Flash applications that Mr. Lyndsay sent me? He might legitimately assert that his purpose was not to write a buggy Flash app for testers, but a nice clean brainteaser. That’s fine, but the “mistakes” he made in execution turned into bonus brainteasers for me, so I got the original, plus more. And that’s the same with testing.

I want to test on purpose AND by accident, at the same time.