Many years ago— around 2001, if I remember correctly— I worked with Paul Szymkowiak on improving the testing side of the Rational Unified Process (RUP). Paul was on the RUP team, proper, while I was an outside consultant brought in because of my well-known work on exploratory testing practices.
During that project I was obliged to take the RUP class, taught by Rational to paying clients. At the class, I asked uncomfortable questions and challenged many aspects of RUP. The instructor (not Paul) was tolerant at first, until he found out that I was working for Rational. Then he became livid with me, threatening to throw me out of the course and accusing me of betraying my employer and of making RUP look bad in front of outsiders. He threatened to get me fired, which is strange considering that I had been hired to do the thing that I was doing in his class— to critique RUP. Also strange because I was doing exactly what he should expect from any committed student, and what I hope every student will do who takes my training.
After the class, I went back to working with Paul, where we engaged in useful, ongoing debate about RUP. Paul didn’t act on all my suggestions. I wanted him to throw away the ridiculous 25-page RUP test strategy template, for instance, but he felt he couldn’t do that. But what did come out of it was a strong working relationship and enduring respect. This is because Paul is a leader.
He is a context-driven thinker. He doesn’t focus on “best practices.” Instead, he thinks about skills and heuristics. He thinks about problems and solutions. He doesn’t just follow a methodology, he is a methodologist. After my first Pipeline post, yesterday, he tweeted:
Great article – thanks as always for sharing. The list resonated with me, but felt awkward and slightly incomplete. It’s challenging to work within memorable checklist constraints, but here’s an adaptation inspired by your post that feels more useful for me …1/2
— Paul Szymkowiak (@paulzee) March 1, 2020
There followed a brief argument, back and forth, which, because Paul is Paul, didn’t turn into a flame war. I asked him to post his ideas as a comment on my blog, but I think it would be better to feature him in a regular post. Here goes:
Thanks James for the helpful follow up exchange on Twitter on the Bug Pipeline.
Your post touched on some key points for me that I would generalise as context-driven communication/ adapting communication to context. My assumption is that communicating the right bug information to the right people in a compelling way is a key aspect of enabling a bug pipeline to provide useful outcomes.
Acknowledging that your intent was to focus on the bug itself as the central element, I found my thoughts went to the supporting human interactions that carry the bug through the pipeline with greater or lesser success. I think the checklist is – at least in part – reflecting the communication of the bug between the tester and their stakeholders.
With a focus on communicating, your list triggered the following adapted checklist for keeping a bug moving through the pipeline. I see these as heuristics, some of which might apply only on occasion. I’ve purposefully tried for symmetry in the list, in an attempt to aid memory retention:
The Bug Exists (because):
1. is producible/ produced
2. is observable/ observed
3. is comprehensible/ comprehended
4. is authenticatable/ authenticated
The Bug Matters (because):
a. is reportable/ reported
b. is receivable/ received
c. has relevance/ is relevant
d. is advocatable/ advocated
Without understanding the context of these communication factors—which can vary considerably depending on cultural context—bugs can get stuck at various points in (or drop inappropriately out of) the pipeline.
This is pretty close to his original tweet thread. But as a result of our conversation, he added explanatory notes:
Supporting comments for further clarification:
Comprehensible reminds me that I need to understand how the different stakeholders who will interact with the bug will use the information I provide, and what they need to be able to get value from it. Comprehended reminds me that I need to understand, or at least guess or infer, what is going on with the bug being observed/ produced, and that I need those stakeholders to comprehend it too.
Authenticatable reminds me that I need to understand the oracles I will make use of to verify my findings, and how and when to source or access suitable reference sources – including people. In some cases, oracles may need to be formally requested and checked, or may be time sensitive or contain classified information. In some cases they may take time and effort to create, access or coordinate. (Authenticated reminds me that I need to confirm my findings against those oracles.)
Reportable reminds me that I need to understand the reporting system of the stakeholders I am reporting to: their processes, formats, culture, ceremonies, etc., and that I understand how to report effectively within that context. Reportable is more about the context of the people receiving the report than about the tester herself, although bug reporting is a communication between the tester as author and other people.
Receivable reminds me that all stakeholders may not be treated equally, and they may not have access to the default system of record I’m using to record bugs, or know how to access bug information usefully. There may be a need to assist them in receiving bug reports, such as reporting through additional communication channels/ mechanisms/ systems to ensure all relevant stakeholders can receive the reports.
Has Relevance reminds me about triage processes: understanding what is required to get the appropriate focus on action for the bug, including what bugs have less relevance for stakeholders and might warrant less focus. Arguably this might happen during reporting, but sometimes triaging for action is a secondary process. Relevant reminds me I’ve succeeded in communicating relevant/ relative importance.
Advocatable reminds me that I need to be aware of how the advocacy section of the pipeline works in this context. If a bug could benefit from one or more advocates supporting it, I need to know who I need to advocate to, what they care about, and what approach works to compel them to become an advocate. Some bugs might be advocatable under the specific set of contextual constraints, others might not. Some bugs might be advocated to the wrong stakeholder, or to the right stakeholder in the wrong way. Advocated reminds me that the bug has moved to a position of focus and support for action through an advocate.
I need to think about Paul’s ideas a little more, and talk them over with Michael. I might adjust my version of the pipeline, or I might not. The reason I’m excited to show you Paul’s take on this is that it’s a perfect example of how we in the Context-Driven Testing community work with each other. Our interactions involve no attempt to dominate each other or force consensus. Of course we want simplicity, harmony, and agreement, if we can get it, but the way we try to get it is through gentle marketing over time, trusting that the really good ideas will be picked up by the really good practitioners in the long run.
The move Paul made here could be called symmetrical expansion, or pattern completion. He “went meta” by asking what my list was silent about. This is a common move in methodology analysis (or in critical theory in general).
Michael and I had already done that analysis when I was writing the initial article. The reason I opted not to fill out the pattern in the way Paul wants to is that my version of the list is simpler, and each item on my version of the pipeline covers more ground. I think Paul is on a somewhat slippery slope, because we can keep heaping more elements onto that list until it’s hard to quote, hard to remember, and awkward to apply. Paul’s items don’t seem very focused on the bug, to me, which gets away from my original goal— which was to create a tool for talking about what goes wrong with bug reporting.
My version was designed to join the notion of a bug existing to the notion of a bug mattering. It moves from the world of ontology (being), through epistemology (knowledge), to axiology (values). This is indicated by the first item on my list being “exists” and the last being “matters.” The middle part is all about becoming aware. Each stage is related to at least one common way that things go wrong in testing and bug reporting, all kept in focus by the notion of “bug.” Michael and I tested this by telling each other stories of bug-reporting-gone-wrong and then relating them to the pipeline pattern. We stopped modifying the model when we found it could handle all the stories we came up with.
Paul can counter-argue that the way he factored the list makes it more memorable. (I think that re-writing any list in any way makes it easier for you to remember that list.) He could argue that a better focus would be the tester, or the bug reporting process, rather than the bug or the bug report itself. Well, that could be a matter of taste. It’s hard to know the truth until we’ve tested our various versions of this in field use.
Since Paul and I are engaged in an exploration of heuristic methods, we can easily tolerate differences in our concepts and vocabularies. What matters for us is to be able to talk about this, and even argue about it, while maintaining an appreciation of the subjectivity and uncertainty which surrounds this whole subject matter.
Learning to have those hard conversations, without pulling your punches, is all part of earning a reputation as an expert in your field. I recognize Paul as such.