My son is ready to send the manuscript of his novel to publishers. It’s time to see what the interest is. In other words, we are going to beta on it. He made this decision tonight.
What is the quality level of his manuscript? There is no objective measure for that. Even if we might imagine “requirements” we could not say for sure if they are met. I can tell you that the novel is about 800 pages long, representing well more than 1,200 hours of his work alone. I have worked a lot on editing and review. The first half has been rewritten many times– maybe 20 or 30. It’s a mature draft.
The first third is good, in my opinion. I’m biased. I’ve read the parts I’ve read many many times. But it seems good to me. I cannot yet speak about the latter 2/3 because I haven’t gotten there yet. I know it will be good by the time we’ve completed the editing, because he’s using a methodical, competent editing process.
Here’s my point. My son, who relies on me to test his novel, has not asked me to quantify my process nor my results. I have not been asked for a KPI. He cares deeply about the quality of his work, but he doesn’t think that can be reduced to numbers. I think this is partly because my son is no longer a child. He doesn’t need me or anyone else to make complicated life simple for him.
How do you measure quality?
Gather relevant evidence through testing and other means. Then discuss that evidence.
That’s how it works for us. That’s how it works for publishers. That’s how it works for almost everything.
Who can’t accept this?
Children and liars.
But my company demands that I report quality in the form of an objective metric!
I’m sorry that you work for children and/or liars. You must feel awful.
Absolutely amazing to see the simplicity of genius. KPIs are very often used in different industries (logistics and manufacturing, in my experience), and there is a huge amount of risk in trusting those numbers. So many people work every day to hit some target level of those numbers, and they don’t even have an idea of what’s going on behind them. Until bang!
The problem, in my view, is the absence of proper communication (as you say in your post). Nowadays everybody is so busy (doing something rather than thinking about it) that trusting some kind of number seems easier than talking to a real person. And it’s a pandemic.
And that is sad to see.
Thanks for the post!
First, I’d like to mention that I don’t have to give objective metrics. It used to be rare, now it’s never (ever since I put alarming disclaimers on them).
Isn’t it natural for humans, even adult humans, who think using abstractions, to want to use the power of abstraction to simplify the crippling and awesome complexity of the world into something they can easily understand so that they have power over it?
[James’ Reply: Oh yes. I wouldn’t argue that the syndrome I’m complaining about is unnatural. I also don’t believe that shoplifting is unnatural. Nor is it unnatural to cheat on one’s spouse. I’m calling for the bar to be set at a reasonable place for us non-liar adults with respect to expectations and demands regarding quantification.]
Then maybe ignore the abstraction leaks, leverage logical fallacies by incorrectly naming the metrics, and fall into metric abuse. Hard work and hard thinking don’t have great appeal on their own, and metric abuse is easy and I imagine not deliberate in most cases; I don’t think most people in this situation are bad, children or even deliberate liars.
[James’ Reply: This is where I disagree. They ARE children. Not physically, but functionally. A child, functionally, is someone who for emotional and physical and intellectual reasons cannot yet cope with responsibilities that adults consider normal and reasonable. I’m saying the ability to deal with unquantifiable qualities is a normal and reasonable responsibility– routinely handled in many many walks of adult life. I don’t accept that people who think they can manage their family relationships without metrics just fine cannot comprehend managing analogous situations at work.]
Some people just want a widget to crank.
[James’ Reply: Ah, you mean I should expand my categories to: liars, children, and hamsters?]
Especially when they’re sold tools by people who promise that adding up pretend numbers and producing sciencey-looking graphs is a valid and affordable substitute for discussing evidence, while from the other end the boss pressures for more done in less time for less money. They could be victims of con artists, be conning themselves, or genuinely believe something that’s just not true because they don’t have a strong foundation in science or metrology or statistics and they read about a miracle cure somewhere in Factory Testing Monthly.
[James’ Reply: Yes… And if so, then, unless they are liars or children or hamsters, we expect them to take responsibility for themselves and get competent at their work.]
Paul Manuele says
I’m very new to the testing industry and so I’m trying to learn as much as I can. Someone recommended I look you up as a good place to start, so I did. I am very impressed with your ability to incorporate the principles of testing into everything you do. I am also very impressed that you use all the knowledge you gain and apply it to testing. I think that there is a very profound truth in how you work.
I appreciate your example. I wish good luck to your son on his first endeavor, as well. The publishing world is not very friendly. But the internet is making things easier to get ideas out there.
[James’ Reply: Thanks!]
Tim Western says
That’s awesome for your son! (I hate to ask, but when you say 800 pages, is that 8 and 1/2″ x 11″ or typical novel sized paper back? – I feel like an argument on test case size is trying to be resurrected here… I won’t.)
[James’ Reply: When it is formatted as a normal paperback, according to the template we downloaded from Lulu.com, it is about 800 pages. Feel free to make that argument. I’m not presenting these measures as any indicator of quality, but rather to help you visualize the situation. What he wrote takes a fair amount of time to read.]
Here are things that I look at when I’m reading a story (I’ve mentored other fan fiction writers in the past): readability, including simple things like mechanics (grammar, punctuation, and spelling), sentence structure, and general story flow and character development. I’ve often found younger writers who saw one thing in their heads but didn’t get it all down in the rush of the early draft.
[James’ Reply: Yes. And none of those are objective measures of the quality of a novel.]
I hope this novel idea works out well though. 😉
Joe Strazzere says
Perhaps as the evidence moves up the reporting chain in an organization, more and more information is inevitably lost (like in the children’s game of “Telephone”), until it becomes meaningless?
[James’ Reply: Sure, but what accompanies that rise? I’ll tell you what: trust. Top management expects lower management to handle things.]
You and your son talk about his book. You share a common understanding so that your discussions have real meaning.
Your son and his publisher talk. They share a bit less common understanding. The discussion still has meaning, but it’s a bit less.
His publisher talks to his boss. The understanding of the individual book becomes just a small part of all the books the publisher is handling.
And so on.
Business seems to work the same way. Testers on my team talk with me. We discuss the testing that is underway, and gain a common understanding.
But as this knowledge is propagated up the chain, the message is diluted further and further until nothing but a metric remains.
[James’ Reply: No. In a healthy organization, trust remains. Numbers do not run a healthy business. Numbers can be useful, but only if you understand them to some reasonable degree.]
Perhaps it’s all about trust? Can someone 5 levels of management up from the individual Tester just trust the “it’s good” message received through the levels? And is a metric any better?
[James’ Reply: Thank you. I didn’t read ahead as I replied, but it’s nice to see you were ahead of me.]
Tough questions. No easy answers.
[James’ Reply: It is vital for a tester to build credibility.
Mind you, I’m not against metrics. I’m against IMPOSED metrics, and I’m against the childish suggestion that complicated things can be unproblematically summarized in simple ways.]
Sarah Glanville says
Thanks James – but are you saying that KPIs are never useful? I guess this depends on what these are being used for. I can see for example that some KPIs are completely objective and tell us nothing, for example, we’ve raised a thousand defects against this product therefore our testers are awesome, or we’ve achieved 98% coverage…OF WHAT?!
The ones I care about (but admittedly don’t rely on) as a Test Lead are things like how many defects have been reported by customers (that we should have found), and how many of our tests used to be manual but are now automated (to work out ROI).
[James’ Reply: I have worked a lot with bug-related metrics. I can do things with those metrics. But I would never suggest or allow such metrics to be considered as objective indicators of the quality of anything. They are indicators of SOMETHING, but we can’t know what without a lot of ongoing skeptical inquiry.
The problem I have with “KPI” is partly that the acronym seems everywhere to be followed by complacency and the assumption that we know what’s going on. This is why I would replace it with “KPC”, which means “key process conversations.” And as part of a key process conversation (such as the conversation “how is our project going?”) of course we muster such facts and observations and speculations as pertain to making sense of the situation– always wary of oversimplification, because any oversimplified decision-making process can easily be hacked and coopted by people acting to make things easy on themselves.
The bigger problem that I have with KPI is that it is usually part of a demand by management to make life simple for them, instead of the demand that they should really be making, which is that we collect and make available and discuss with them the facts of the situations we face, whatever those facts might be.
I can’t tell much of anything from a NUMBER of bugs found in the field. And neither can you. If you are entirely honest with me, what you want is not the number, but the LIST. You want to know WHAT WAS FOUND. You can do something with that. The number is of little use.]
The thing that really grates on me is when I’m asked for KPIs related to the testers themselves. I can’t think of how to ‘measure and compare’ them in a way that’s fair. Number of defects? They could be preventing defects from being developed instead of waiting to find them, which is preferable and makes them great. Amount of time spent testing? This will depend on how they work: some are methodical and thorough, whereas some are exploratory and still find great bugs. If I’m getting the right stuff out of them I don’t really mind how they are doing it, so long as there is evidence. How highly they’re rated by developers and the rest of the team? They might be buying chocolate for all I know, in which case – where is mine??
[James’ Reply: Every number reflects upon people, because people created the system. We aren’t measuring natural systems, here.]
Thanks for the reply!
I’ve come around to agree with the post content now. I suppose looking back on it I wasn’t arguing against a point you made, just saying that hopefully some understanding of the problem, with some empathy to give us some insight, will help to find ways to change things. I think I confused that idea with the idea that we should be tolerant of bad work.
[James’ Reply: Well, to be complete, yes. Tolerance can be important. I am often tolerant with my clients. I look at how much they are trying and I try to weigh the bad against the good. I’m just putting a stake in the ground about standards. How you hold people to standards is another matter, and compassion for confusion may be a good idea.]
Matt Stave says
It seemed you were throwing babies out with bath water regarding KPIs and quality.
Perhaps I’ve been misusing the term KPI with respect to quality. It seems to me that the way to report on quality is to have expert witnesses examine the data and express their read of it, citing objective data as well as their required context. There are data points to be examined that I’ve been calling KPIs, for things such as defects, that may provide useful information if you understand the full context and why they show what they show (today).
[James’ Reply: That sounds okay. I am reacting to the “culture” of KPI– the habits and assumptions and values that I perceive to be associated with that acronym. It’s possible that some people are using the term in a benign way that does not fall afoul of the concerns I raised.]
Often, a bunch of new high severity defects represent a new quality problem, but not always.
[James’ Reply: Certainly, but it’s not the NUMBER of them that you consider. It’s the particulars of each one and how they are related and the underlying causes. Naturally the number might catch your eye first, but you immediately move on from that. And if someone were to tell me that it’s absolutely vital to count bugs, I would say, no, it’s vital to know what the bugs are– it cannot be more than a momentary convenience to count them.]
It seems bad, but there may be mitigating circumstances. It is an indicator that more investigation is needed – “there’s smoke”. If an expert vets the numbers, and determines there are or aren’t mitigating circumstances, then that opinion enables that KPI to be broadly useful, if it’s communicated with that additional data point.
[James’ Reply: I do not in any way object to collecting data and looking at it. I am objecting to the demand that there must be some simple abstraction which somehow distills the essence of the situation and forms the major or sole basis for decision-making on projects.]
I get that surely there are many pathological uses of KPIs as with all kinds of data, but that doesn’t mean they’re always “lying.”
[James’ Reply: I said that liars and children insist on KPIs. I meant that if you are not really serious about doing a good job as a manager– if you are faking your way through– then you look for KPIs in the same way that politicians love to talk about test scores in schools, as if tests can or even should represent the status of someone’s personal growth. That’s the liar angle. The child angle is about wishing that the world were simple and not being able to cope with the fact that it isn’t.
What YOU seem to be talking about is not taking the KPIs seriously. In other words, YOU don’t seem to consider them “key” or that they are necessarily “indicators.” I like that. I suggest that instead of adopting the language of data fetishists, though, you could simply say that you consider data from a variety of sources in your discussions and analysis.]
Perhaps the intent of the term is to have objective numerical data that directly expresses quality without needing additional context; the things I call KPIs today (which I try to present with their specific context, or with disclaimers about the specific limits or “confidence” of each one when communicating upward) I should perhaps call metrics or something.
My mind has a known defect that involves the intermittent overly literal interpretation of hyperbole that may have bitten me again, here. I welcome pull requests if someone has a fix.
In any case, thanks for writing what you write and presenting a forum for discussion.
Greetings from India,
I have just watched your open lecture on YouTube and I am very impressed. I would like to thank you for your thoughts and I think this will bring the best in me.
Matt Haun says
After reading through your responses to the comments, I feel much better about this piece than when I first read it. I think you missed the opportunity in the main piece to really describe what your objections are. As you later say, there is nothing wrong with collecting the numbers, analyzing metrics and finding those areas where a “conversation” is needed. When you’re trying to manage an organization of 100 people or more potentially spread out across multiple timezones, nobody has the luxury that you and your son have to sit down and discuss all the finer points of the novel. Metrics can help you focus on the conversations that are needed.
Metrics are simply a tool for us to use. I believe that most of the metrics we track related to testing shouldn’t be published to any type of broad audience. They should only be published to the people that understand what the numbers really mean, how the numbers can be used to identify faults in the process, and things that may be working well so they can be learned from. I really think that KPIs are way too often misused as a blunt object to beat people with.
Anders Risvold says
This is a bit funny. I am currently in a workshop (right as I write this) with test leads and test managers on how best to implement and use KPIs in our newly bought, overpriced, doesn’t-bring-much-value-to-the-table software for requirements handling, ‘test procedures’, and defect management. I won’t say from what company, but it is large and rather well known.
[James’ Reply: It’s quite a money machine. Crystal power, Feng Shui, and KPIs.]
A bit by chance I came upon your blog and read your note about KPIs at the same time I launched my revolutionary idea of a new reporting format/KPI: a big round circle, coloured green for ‘Everything is fine, chill out’; yellow for ‘We have issues, perhaps a bit out of time, not where we want to be, but we are handling it. Feel free to call me up for discussion if you want more info’; and red for ‘Here we need serious decision making. I advise you to come see me as soon as you can if you are serious about your manager position, because we are buggered up.’
[James’ Reply: I like to use “structured subjectivity” dashboards. These are concise descriptions of status based not on numbers, but rather on the judgements of people on the project. It’s similar to what you describe.]
I didn’t manage to sell my idea this time, but I will try again later.
I haven’t had much time yet to think the idea through, but I don’t think it is any worse than any other reporting format or metric suggested. Number of tests run? WTF does that tell us?
[James’ Reply: It tells us nothing at all, and it’s fertile ground to manipulate management.]
My report is easy to create (based on test managers evidence so far in the testing process), easy to understand, easy to read. Minimal overhead and just to the core.
The biggest objection I can think of is that it might be viewed as a mockery, and even if it clearly has elements of that too, the benefits far outweigh the negatives.
Anyway, I have to get back to the discussion. I will make sure to check your blog more often (I did read your book Lessons Learned many years ago). Stay tuned; five years from now you can bash on ‘Anders’ 3CRS’, the 3-colour reporting system. Agreed, the concept is not new; I think the context is.
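As an aside, the three-colour scheme described above needs remarkably little machinery. Here is one possible sketch in Python; the class, function, and area names are my own invention for illustration, not anything from the comment itself:

```python
from enum import Enum

class Status(Enum):
    """Three-colour project status, with the meaning spelled out in words."""
    GREEN = "Everything is fine, chill out"
    YELLOW = ("We have issues, perhaps a bit out of time, "
              "but we are handling it; call me for more info")
    RED = ("Serious decision making needed; come see me "
           "as soon as you can")

def report(area: str, status: Status) -> str:
    """Render one line of the three-colour report."""
    return f"{area}: {status.name} - {status.value}"

# Hypothetical usage:
print(report("Payments module", Status.YELLOW))
```

The point of the sketch is that the "metric" here is nothing but a named judgement plus an invitation to talk, which is arguably all a status report needs to be.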
Oliver Erlewein says
Your son trusts you; he respects your judgement and your professionalism. He is also actually interested in the outcome and its success. That is a winning combination for success.
[James’ Reply: Agreed.]
The above are all features that, in varying degrees, seem to be missing in projects. The knee jerk reaction to try and save the situation is CONTROL! This is done by KPIs, test cases, written-xyz stuff,…
[James’ Reply: You mean it’s often ATTEMPTED to be done through the ILLUSION of control.]
And there is a limitless amount of $$$ to be made selling solutions that don’t fix the issue.
[James’ Reply: Yes, sadly.]
It’s the same conundrum doctors have. If they eradicate sickness and any form of medical need then they are out of a job. If we dealt in truth and simplifying complexity there would be far less $$$ to be made in the industry (this is a fallacy though! Won’t go into detail here). So instead of doing the obvious, paying doctors for health and project teams for truth and transparency, we do just the opposite. And never forget it always takes two to tango so pointing a finger won’t really do it.
All that makes me think on how do you foster trust? There is probably no other way than to temporarily open yourself up to risk until trust mitigates it.
But then, I know what is happening now is risky as hell; we just don’t admit it. We have multi-million-dollar projects that stick their heads in the sand and call it KPIs and “best practice” (thinking the breeze at their tail feathers tells them enough about where they are!). What irks me is the unwarranted amazing success these projects have, never realising (or admitting?) that it is individual efforts that actually make it happen.
As for your son I wish him all the best & luck! I’d actually be interested in the story of what processes he used to write the book and where he got them from. But that’s for when we sit over a dinner sometime as I am sure Rebecca wants to hear that too.
Lanette Creamer says
It makes me so happy to see any male say this publicly.
When I say this I get feedback that I’m “defensive”. I consider myself “Right”, rather than defensive, but thank you for showing that it’s possible to have this opinion because you have it, not because you feel attacked.
Having an opinion isn’t something gendered, but the response to it seems to vary depending on your gender when you say it. See. Balls that are metaphorical seem confusing if you don’t have real ones.
Daniel Kihlgren says
I think KPIs are the work of the devil! Well, there might be some use to them, but I have seen number of test cases or usage of lab equipment used as a KPI so many times. This always leads to many short test cases, since it’s fast to write a useless test case and it gets the numbers up. If you measure uptime of your lab, it becomes important to keep it running: crappy tests that take a loooong time make you look good :/
KPIs are for lazy executives that don’t bother to check whether the quality is good.
I’m in a situation where some KPIs (metrics) are required and I’m allowed to provide what data I can.
[James’ Reply: Translation “I am in a situation where my bosses don’t know how to manage and don’t trust me to manage myself and don’t listen to me.”]
No one is requesting specific numbers, so I try to provide what I have available (like numbers of bugs per project) but try to caveat everything as much as possible. But I still see it all as a lot of numbers without context, which can be easily misinterpreted, manipulated, and artificially inflated or deflated.
[James’ Reply: That’s right.]
What I’ve been thinking about recently is how to find something I can keep track of that gives some legitimate semblance of success. If I have to provide numbers I want them to be useful, not just for the company but for my team (actually, I’m more interested in making them useful for my team). So what in the testing world can be measured that might be an indication that the team is doing well (not just cranking out bugs)?
[James’ Reply: A better first question is “what is testing and what signs are there that I have done testing?” All legitimate measurements grow from a theory of the process. So, what is your theory? In my theory of testing, there is no sense in speaking of an *amount* of testing except in qualitative ways related to risks, techniques, and coverage of areas.]
An example I recently read about was in a collections agency. The employees were motivated by the numbers of collections they completed, but they were all miserable. An employee started her own agency and decided to track the number of “thank you” notes the team received. The employees were happier and more productive than those tracked based on collections quantities.
That’s the kind of thing I’d love to find for testers (since I have to track something). What would show a successful team and actually motivate people (in the right way)?
[James’ Reply: You can count time. You can reckon testing effort by tracking uninterrupted time spent testing and you can quantize that into “test sessions.” You can categorize time in different ways. You can plot risk and coverage areas in a grid or mindmap, then establish a set of qualitative “test levels” such as 0, 1, 1+, 2, 2+, 3, and 3+ (that’s what I use). The levels represent qualitative achievements such as 1 = sanity check, 2 = all common and critical testing complete, 3 = complex and quasi-functional testing complete.]
Thanks James. Those are both good ideas and I’ve thought about how to track time with session based testing. Uninterrupted time chunks are a little hard to come by but it could still be workable. I also like the idea of looking at testing levels. I’ll have to give that some serious thought on how I might use it. And your questions “what is testing and what signs are there that I have done testing?” and ‘what’s my theory?’ are definitely giving me food for thought.
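The combination discussed in this exchange (session-based effort tracking plus qualitative test levels per coverage area) can be sketched in a few lines. This is only an illustration under my own assumptions; the class, the example area, and the helper names are hypothetical, though the level scale follows the 0 through 3+ scheme described above:

```python
from dataclasses import dataclass, field

# Qualitative test levels: 0 = untested, 1 = sanity check,
# 2 = common and critical testing complete, 3 = complex and
# quasi-functional testing complete. "+" marks partial progress
# toward the next level.
LEVELS = ["0", "1", "1+", "2", "2+", "3", "3+"]

@dataclass
class CoverageArea:
    """One risk/coverage area, with its level and logged test sessions."""
    name: str
    level: str = "0"
    session_minutes: list = field(default_factory=list)

    def log_session(self, minutes, new_level=None):
        """Record one uninterrupted test session; optionally raise the level."""
        self.session_minutes.append(minutes)
        if new_level is not None:
            # Levels are qualitative achievements, so they only move up.
            assert LEVELS.index(new_level) >= LEVELS.index(self.level)
            self.level = new_level

    def summary(self):
        total = sum(self.session_minutes)
        return (f"{self.name}: level {self.level}, "
                f"{len(self.session_minutes)} sessions, {total} min")

# Hypothetical usage:
area = CoverageArea("Login and authentication")
area.log_session(90, new_level="1")
area.log_session(120, new_level="2")
print(area.summary())
# -> Login and authentication: level 2, 2 sessions, 210 min
```

Note that nothing in this record is a single "quality number"; it is a compact prompt for the kind of conversation the post argues for — which areas got attention, how much, and to what qualitative depth.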
I have a question not really related to the topic, but near to it.
It is about test reports, and whether they should include a recommendation about the release or not.
Many project leaders and test chiefs expect release recommendations from test leaders: “Do you recommend releasing it or not?”
I try to explain that this is not our decision, and inevitably end up in discussions of “a recommendation is not a decision.”
[James’ Reply: You may make recommendations. A recommendation is not a decision. In order to make a responsible recommendation, you must recommend based on your understanding of the needs of the business, not just your own personal feeling.]
From my point of view, in theory, yes, a recommendation is not a decision. But in reality it becomes one, because it is easier to make a decision based on another person’s recommendation. Also, an obligation to recommend leads to:
1. Worse test reports. I can skip the whole thing and just say “I do not recommend”
[James’ Reply: I don’t give a recommendation if my clients are going to treat it as a decision. They must listen to the report.]
2. Stress for test leaders. “I did not recommend releasing, and they ran over my recommendation and went live. Who cares what I say? I feel useless” is something I hear often.
[James’ Reply: What difference does it make if they don’t follow the recommendation? The whole point is that the right person makes the decision, whatever that is.]
3. People do not read/listen to test reports and analyze the information – they go directly to the recommendation.
[James’ Reply: If you suspect that, then don’t give any recommendation. Don’t let anyone use you as an excuse not to do his job.]
Am I right, or have I completely missed the point?
[James’ Reply: This is not an easy issue. People want you to guarantee the product. You have to refuse to do that.]
Thanks in advance!
Ivor McCormack says
It’s been a while since I checked in on your blog, and boy have I missed a lot 🙁
There is an argument that is put forward that evidence based, data driven decision making is the only way to go. Calculate the odds based on what we know and go for it!
The problem is always in the way people play with the odds. Lies, Damn Lies and Statistics as an old Maths Professor used to say.
KPI’s allow people to focus on the big picture or the minutiae if they want to, whatever suits their purpose.
I have been involved in projects where the main KPI was Milestone achievement and other key data points were ignored, until it was too late.
The problem became one of overcoming a lack of trust from the key stakeholders who asked the very valid question, “If you can’t deliver how can we trust anything else you say?”.
The response did not go down well, “We described to you the problems the project faced, but you were only interested in the milestones we did or didn’t achieve”.
That led to a round of metric gathering that would strain the capability of Deep Thought to determine what were the objective facts.
KPI’s are a hammer, and we all know that to a man with a hammer, everything looks like a nail.
Thanks for the article.
I’m definitely a fan of you and the ways in which you think about and talk about testing. I suppose I related particularly to this, as my boss would seemingly like to be given some sort of number or numbers at the end of a project that would tell him whether or not the product is OK for general release. He also seems to listen a lot to a software test consultant who tells him that this can be done and that he can provide it. This consultant has an impressive title and I suppose I’m not as good a talker, so my boss doesn’t listen to me. Sigh.
[James’ Reply: Thank you.
BTW, I like your new blog. Only thing that’s weird is the title. You say that we testers sometimes forget that it’s better for the bugs not to be there in the first place. I don’t know anyone who thinks that it’s good for bugs to be in the code, actually. Are you saying that it would be better if we put down our testing and took up coding, so that we could do a better job than the coders who are currently doing that? If so, who would test our code? Nobody?
Let me put it this way: if you were training to be a very good goalie in a soccer game, what would the point be of saying “gee wouldn’t it be better if we didn’t need a goalie?” The answer is yes, but who cares? We do need a goalie, and therefore we should have a good goalie.]
Well, for example, if I sell UI automation test tools, it is actually in my interest for software to be built badly, in a sense. But anyway, my point was not really that testers want bugs to be there, but rather that they may neglect root cause investigation and reporting. The other thing is, if your root cause analysis and corrective action is really effective, that may just mean that you need fewer testers. So, I don’t mean ‘Don’t Test’ to be taken literally: software is highly complex, and it will never be built without bugs, of course. Therefore, testers will always be needed. The blog title was just a device for stressing the need always to be looking for ways to expend less effort on testing, not that we literally don’t test at all, which of course is nonsense.
I have read this article, though I got the intention behind it from the title itself. I have the following opinions on this topic:
– KPIs/metrics are designed to quantify easily (in any industry).
[James’ Reply: I think that is irrelevant. Ease of quantification is not important compared with value as a tool for inquiry and decision-making. I don’t believe KPI’s, as a concept or heuristic, are generally useful for that purpose. I think KPI’s express a philosophy of detachment from reality.]
– It is not always possible to have discussions, for various reasons: a geographically distributed team, different time zones, different levels of understanding, some kind of disability.
[James’ Reply: If you can’t have a discussion, and you can’t personally supervise, then you still have writing. You have stories. We don’t need KPI’s and also, KPI’s don’t work.]
– I think it is the responsibility of the KPI publisher to make sure it makes sense and is correct (correct to the level of making sense of it, showing reality, and being readable).
[James’ Reply: This responsibility is not possible to fulfill. Any responsibility that cannot be fulfilled is no responsibility at all.]
– Rather than having no KPIs and only discussions, I think it is good to have a discussion with reference to a KPI or some metric, which would give a definite direction.
[James’ Reply: “Definite direction” is not something we have any trouble getting or giving when we need it. To imply that only through numbers you get “definite direction” is silly and displays an ignorance of what direction means or what numbers do.]
– In the end it is our responsibility (as a team and as individuals) to make any KPI successful and realistic, which could certainly happen with discussions and metrics together. If someone is just producing a table to be published for the sake of publishing, then it is obviously meaningless.
[James’ Reply: KPI’s don’t work. KPI’s are a dumb idea. That is my judgment. By the way, I held the title of “staff metrics engineer” at Borland International for a while, and I was responsible for creating and managing their bug metrics system. We used bug metrics, but not as “key performance indicators.” We made no attempt to represent social activities in numerical form.
I have been in hundreds of companies. I have seen dozens of attempts to apply KPI’s. I have not yet seen a healthy use of this concept. It’s just a dumb idea. It is promoted by people who don’t understand what they are losing and how KPI’s are hurting them.]
Thanks, James, for the reply to my comment. However, I still do not see any robust reasoning beyond the words “you believe it’s dumb.” I would still insist that a combination of metrics and discussion makes sense. Maybe it is a difference of opinion, or what you observed in your industry experience does not match what I have implemented in my projects (e.g. maintaining a defect metric does help – one should use it as appropriate, based on project context).
[James’ Reply: Excuse me, you want reasoning? That’s strange to hear since you have offered no support for your own position. You approached me with an unscientific faith-based opinion, and I responded with my own opinion.
I responded specifically to each of your points, but for some reason you have ignored that and chosen to focus on my final statement that KPIs are a dumb idea. If you choose to ignore my arguments, that’s your choice, but don’t insult my intelligence, or that of my readers, by claiming I have made no argument. They were right in front of your nose, sir.
But if you would like stronger reasoning, then let’s start with the concept of “burden of proof.” The burden of proof rests with the party making the more remarkable claim. It is a surprising claim for you to say that we should or even CAN find quantitative measures for every interesting aspect of a social process. Where is the science or logic that justifies the assignment of meaningful numbers to social life? Don’t lecture me about reasoning when you have come naked and unarmed to this debate!
That software projects are social is unarguable. A software project is people coming together to solve problems via the use of some sort of software contrivance. There is no algorithm for doing this. It is a matter of design, and therefore a question of navigating a huge solution space using finite time and resources. This is all explained in Sciences of the Artificial, by Herbert Simon, the Nobel laureate. See also The Social Life of Information, by Paul Duguid and John Seely Brown, based on research at Xerox. See the Mythical Man-Month, by Fred Brooks.
Social scientists understand the dangers of measuring a self-aware system, such as a software project. You can read any textbook on Qualitative Research, Grounded Theory, or Ethnography to become acquainted with these dangers. Read how experimental psychology is done.
Your statements about measurement betray a reckless disregard for the difficulty of controlling social systems. See Measuring and Managing Performance in Organizations, or Introduction to General Systems Thinking, or Quality Software Management (Vol. 1 or 2). Or the Logic of Failure. Or the Black Swan. These books will bring you out of the clouds.
The whole idea of a “KPI” is faith that you can use tractable algorithmic models to control complex social systems via a handful of dimensions in a deterministic (or usefully probabilistic) way. It’s the software project management equivalent of penis growth pills, or How to Seduce Any Woman seminars. It’s quite simply the belief in magic. I am asking you to support your wild claims or else cease making them.
I don’t use KPI’s because I am an adult and I can deal with complexity. I sometimes make use of metrics, but not as “key indicators” of anything, but rather as one limited and unreliable means to pursue ongoing inquiry into the nature of the systems I study.
For pure reasoning, here you are:
A KPI must be based on:
– a sufficiently reliable and valid model of the system you are measuring;
– a sufficiently reliable and valid model of “success” or “productivity”;
– a sufficiently reliable and valid theory of how something measurable in the system correlates with “success” or “productivity”;
– a sufficiently reliable and valid measuring instrument; and
– a sufficiently reliable and valid model for controlling that system (because the whole purpose of a KPI is to inform management action).
Every measuring instrument is subject to error, so you also need a theory of error for your instrument that allows you to evaluate the quality of your measurements. Your models must also take into account the systemic effects of attempting the measurements. Let me know if you disagree with anything above.
You seem to be saying that access to such models and instruments is a trivial matter. Whereas my claim is that it is a more or less intractable problem. This is my strong claim. I predict, therefore, that there are no unproblematic, uncontroversial KPIs, and thus no general basis for the achievability of any of the bullet points you offered in your original comment.
Therefore, you can prove my strong claim wrong by citing one example of an unproblematic, uncontroversial KPI. I challenge you to do that. Whatever you propose, I predict I will show is both controversial and problematic.]
Thank you for your detailed reply. I just want to clarify that I have no intention of challenging or insulting you, nor any of your readers. I have gone through your KPI reasoning comments above (and thank you for them): that is where my points were addressed. I understand that KPI’s can’t be claimed to be fully unproblematic.
[James’ Reply: That’s an odd understatement. A more accurate statement would be that there is no evidence, nor any other reason to believe that a social system that solves novel problems can be effectively managed by reducing it to numbers.]
Hence I have specifically offered my “opinion” with full respect for your findings. The only point I wanted to make was: given that KPI’s are not fully correct, let us take what is correct in them and give it strong support through discussion as well. Thanks for giving your time to this discussion.
[James’ Reply: I don’t know what that means. But if you propose a KPI for software I will be happy to debunk it for you.]
Micah Tan says
Really late to this party, but enjoying the conversation, both the original thought and the conversations in the comments thread. Regarding “the dangers of measuring a self-aware system”, I’ve always used an “Observer effect” as a metaphor for KPIs skewing organizational behavior towards undesired (but importantly, potentially locally beneficial) outcomes.
Do you have any recommendations for papers, books, etc. which go deeper into the topic?
Appreciate you taking the time to share your wisdom.
[James’ Reply: You could try The Social Life of Information.]
Marius Francu says
I was reading a post by Dave Snowden: http://cognitive-edge.com/blog/a-perverted-system/ . Reading it reminded me of this blog post and all of its comments.
@James Bach: I have read all of your blog; it is wonderful.
Thanks for this blog post (which I just found, years later). The comments are definitely interesting, and I appreciate the call-out, because I too work in a (great) corporation that unfortunately uses KPIs as its love language. And it makes sense for what they use them for. It really does. We’re a multi-billion dollar company, so I know it works. But I’ve always struggled with how to generate valuable KPIs for quality, because they generally don’t make sense to anyone outside the team. If I report the number of defects found during test, then I’m basically saying that Dev sucks at writing good code or the POs have no clue how to write testable requirements. If I report the number of defects found post-release, then I’m basically revealing my own team’s flaws. Obviously that’s a generalized comment, but there are truths in it.
Because I live in a world where the KPI will never go away, I had to find a way to generate some sort of meaningful information. It really came down to requirements coverage and the type of testing used (i.e. automation, manual, etc.) for each individual release requirement, and how much is covered by all testing (including unit tests). That is literally the most reliable metric I can report up, and it’s trackable in our existing toolkit. It justifies QE’s existence to upper management, IMHO. It shows value on work in progress and completed work, but it has nothing to do with quality. Because, again, reporting defects as a metric has underlying implications, which I don’t think are necessary or productive to quality.