Benjamin Mitchell and the Trap of False Hypocrisy

One of the puzzles of intellectual life is how to criticize something you admire without sounding like you don’t admire it. Benjamin Mitchell has given an insightful talk about social dynamics in Agile projects. You should see it. I enjoyed it, but I also felt pricked by several missed opportunities where he could have done an even deeper analysis. This post is about one example of that.

Benjamin offers an example of feedback he got about feedback he gave to a member of his team:

“Your feedback to the team member was poor because:
it did not focus on any positive actions, and
it didn’t use any examples”

Benjamin immediately noticed that this statement appears to violate itself. Obviously, it doesn’t focus on positive actions and it doesn’t use any examples. To Benjamin, this demonstrates hypocrisy and a sort of incompetence, and he got his reviewer (the one who made the statement) to agree with him about that. “It’s incompetent in the sense that it has a theory of effectiveness that it violates,” Benjamin says. From his tone, he clearly doesn’t see this as the product of anything sinister, but more as an indicator of how hard it is to deeply walk our talk. Let’s try harder not to be hypocrites, I think he’s saying.

Except this is not an example of hypocrisy.

In this case, the mistake lies with Benjamin, and then with the reviewer for not explaining and defending himself when challenged.

It’s worth dwelling on this because methodologists, especially serious professional ones like Benjamin and me, are partly in the business of listening to people who have trouble saying what they mean (a population that includes all of humanity), then helping them say it better. He and I need to be very, very good at what social scientists call “verbal protocol analysis.” So, let’s learn from this incident.

In order to demonstrate my point, I’d like to see if you agree to two principles:

  1. Context Principle: Everything that we ever do, we do in some particular situation, and that context has a large impact on what, how, and why we do things. For instance, I’m writing this in the situation of a quiet afternoon on Orcas Island, purely by choice, and not because I’m paid or forced to write it by a shadowy client with a sinister agenda.
  2. Enoughness Principle: Anything we do that is good or bad could have been even better, or even worse. Although it makes sense to try to do good work, that comes at a cost, and therefore in practice we stop at whatever we consider to be “good enough” and not necessarily the best we can do.

Assuming you accept those principles, see what happens when I slightly reword the offending comment:

“In that situation, your feedback to the team member was poor compared to what you could easily have achieved because:
it did not focus on any positive actions, and
it didn’t use any examples”

Having added the words, what happens if Benjamin tells me that this statement doesn’t focus on positive actions and doesn’t cite an example? I reply like this:

“That’s a reasonable observation, but I think it’s out of place here. My advice pertains to giving feedback to people who feel frightened or threatened, or who may not have the requisite skills to comprehend the feedback, or to situations where I am not seen as a credible reviewer. And my advice pertains to situations where you want to invest in giving vivid, powerful advice– advice that teaches. However, in this case, I felt it was good enough (not perfect, but good at a reasonable investment of my time) to ignore the positive (because, Benjamin, you already know you’re good, and you know that I know that you are good– so you don’t need me to give you a swig of brandy before telling you the ‘bad news’) and I thought that investing in careful phrasing of a vivid example might actually sound patronizing to you, because you already know what I’m talking about, man.”

In other words, with the added words, it becomes a little clearer that the situation of him advising his client and the situation of us advising him are different in important ways.

Imagine that Benjamin spots a misspelled word in my post. Does he need to give me an example of how to spell it? Does he need to speak about the potential benefits of good spelling? Does he need to praise my use of commas before broaching the subject of spelling? No. He just needs to point and say “that’s spelled wrong.” He can do that without being a hypocrite, don’t you think?

(Of course, if the situations are not different and the quality of the comment made to Benjamin is clearly not good enough, then it is fair to raise the issue that the feedback does not meet its own implied standard.)

Finally: I added those words, but if I’m in a community that assumes them, I don’t need to add them. They are there whether I say them or not. We don’t need to make explicit that which is already a part of our culture. Perhaps the person who offered this feedback to Benjamin was assuming that he understood that advice is situational, and that a summary form of feedback is better in this case than a lengthy ritual of finding something to praise about Benjamin and then citing at least three examples.

…unless Benjamin is a frightened student… which he isn’t. Look at him in that video. He exudes self-confidence. That man is a responsible adult. He can take a punch.

Who’s the Real Monster?

“Best practice” thinking itself causes these misunderstandings. Many people seek to memorize protocols such as “how to give feedback… always do this… step 1: always say something nice; step 2: always focus on solutions, not problems… etc.” instead of understanding the underlying dynamics of communication and relationships. Then when they slip and accidentally behave in an insightful and effective way instead of following their silly scripts, their friends accuse them of being hypocrites.

When the explicit parts of our procedures are at war with the tacit parts, we chronically fall into such traps.

There is a silver lining here: it’s good to be a hypocrite if you are preaching the wrong things. Watch yourself. The next time you fail in your discipline to do X, seriously consider if your discipline is actually wrong, and your “failure” is actually success of some kind.

This is why when I talk about procedures, I speak of heuristics (which are fallible) and skills (which are plastic) and context (which varies). There are no best practices.

I’m going to wrap this up with some positive feedback, because he doesn’t know me very well, yet. Benjamin, I appreciate how, in your work, you question what you are told and reflect on your own thought processes in a spirit of both humility and confidence. YOU don’t seem infected by “best practice” folklore. Thank you for that.


Seven Kinds of Testers

Most of my work is teaching, coaching, and evaluating testers. But as a humanist, I want to apply the Diversity Heuristic: our differences can make us a stronger team. That means I can’t pick one comfortable kind of tester and grade people against that template. On the other hand, I do see interesting patterns of skill and temperament among testers, and it seems reasonable to talk about those patterns in a broad sense. Even though snowflakes are unique, it’s also true that snowflakes are all alike.

So, I propose that there are at least seven different types of testers: administrative tester, technical tester, analytical tester, social tester, empathic tester, user expert, and developer. As I explain each type, I want you to understand this:

These types are patterns, not prisons. They are clusters of heuristics, or in some cases, roles. Your style or situation may fit more than one of these patterns.

  • Administrative Tester. The administrative tester wants to move things along. Do the task, clear the obstacles, get to “done.” High-level administrative testers want to be in the meetings, track the agreements, get the resources, update the dashboards. They are coordinators; managers. Low-level administrative testers often enjoy the paperwork aspect of testing: checking off boxes on spreadsheets, etc. (I was a test manager for years and did a lot of administrative work.) Warning: Administrative testers are often tempted to “fake” the test process. This pattern does not focus on the intellectual details of testing, but more on the visible apparatus.
  • Technical Tester. The technical tester builds tools, uses tools, and in general thinks in terms of code. They are great as advocates for testability because they speak the language of developers. The people called SDETs are technical testers. Google and Microsoft love technical testers. (As a programmer I have one foot in this pattern at all times.) Warning: Technical testers are often tempted not to test things that can’t easily be tested with the tools they have. And they often don’t study testing, as such, preferring to learn more about tools.
  • Analytical Tester. The analytical tester loves models and typically (though not necessarily) enjoys mathematics. Analytical testers create diagrams, matrices, and outlines. They read long specs. They gravitate to combination testing. (If I had to choose one category to be, I would have to say I am more analytical than anything else.) Warning: Analytical testers are prone to planning paralysis. They often dream of optimal test sets instead of good enough ones. If they can’t easily model it, they may ignore it.
  • Social Tester. The social tester wants you! Social testers discover all the people who can help them and prefer working in teams to being alone. Social testers understand that other people often have already done the work that needs to be done, and that no one person needs to have the whole solution. A social tester knows that you don’t have to be a coder to test– but it sure helps to know one. A good social tester cultivates social capital: credibility and services to offer others. (I follow a lot of the social tester pattern. My brother, Jon, is the classic social tester.) Warning: Social testers can get lazy and seem like they are mooching off of other people’s hard work. Also, they can socialize too much, at the expense of the work.
  • Empathic Tester. Empathic testers immerse themselves in the product. Their primary method is to empathize with the users. This is not quite the same as being a user expert, since there’s an important difference between being a tester who advocates for users and a user who happens to test. This is so different from my style that until recently I had neither recognized nor respected this pattern. People with a non-technical background often adopt this pattern, and sometimes the administrative or social tester pattern as well. Warning: Empathic testers typically have a difficult time putting into words what they do and how they do it.
  • User Expert. Notice I did not say “user tester.” User experts may be called domain experts or subject matter experts. They do not see themselves as testers, but as potential users who are helping out in a testing role. An expert tester can make tremendous use of user experts. Warning: User experts, not having a tester identity, tend not to study or develop deep testing skills.
  • Developer. Developers often test. They are ideally situated for unit testing, and they create testability in the products they design. A technical tester can benefit by spending time as a developer, and when a developer comes into testing, he is usually a technical tester. Warning: Developers, not having a tester identity, tend not to study or develop deep testing skills.

When I’m sizing up a tester during coaching, I find it useful to think in terms of these categories, so that I can more efficiently guess his strengths and weaknesses and be of service.

Do you think I have missed a category? Do you think I have de-composed them poorly? Make your case in the comments.