This post is not about the sort of testing people talk about when nearing a release and deciding whether it’s done. I have another word for that. I call it “testing,” or sometimes final testing or release testing. Many projects perform that testing in such a perfunctory way that it is better described as checking, according to the distinction between testing and checking I have previously written of on this blog. As Michael Bolton points out, that checking may better be described as rejection checking since a “fail” supposedly establishes a basis for saying the product is not done, whereas no amount of “passes” can show that it is done.
Acceptance testing can be defined in various ways. This post is about what I consider real acceptance testing, which I define as testing by a potential acceptor (a customer), performed for the purpose of informing a decision to accept (to purchase or rely upon) a product.
Do we need acceptance testing?
Whenever a business decides to purchase and rely upon a component or service, there is a danger that the product will fail and the business will suffer. One approach to dealing with that problem is to adopt the herd solution: follow the thickest part of the swarm; choose a popular product that is advertised or reputed to do what you want it to do and you will probably be okay. I have done that with smartphones, ruggedized laptops, file-sharing services, etc. with good results, though sometimes I am disappointed.
My business is small. I am nimble compared to almost every other company in the world. My acceptance testing usually takes the form of getting a trial subscription to a service, or downloading the “basic” version of a product. Then I do some work with it and see how I feel. In this way I learned to love Dropbox, despite its troubling security situation (I can’t lock up my Dropbox files) and the significant chance that it will corrupt very large files (I no longer trust it with anything over half a gigabyte).
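As one concrete example of the kind of check such a trial can include, here is a minimal sketch of a round-trip integrity check for a file-sync service. This is an illustration, not anyone's actual test procedure: the sync round trip is simulated with a local copy, and in real use you would upload the file, wait for sync, and download it again before comparing.

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream-hash a file so very large files never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def round_trip_is_intact(src: str, synced_copy: str) -> bool:
    """Compare checksums of the original file and the copy that came back
    through the sync service; a mismatch means silent corruption."""
    return sha256_of(src) == sha256_of(synced_copy)

# Demo: a local copy stands in for the upload/download round trip.
workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "big.bin")
with open(original, "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))  # 4 MB stand-in for a "very large" file
returned = os.path.join(workdir, "big_copy.bin")
shutil.copyfile(original, returned)  # in real use: upload, sync, download
print(round_trip_is_intact(original, returned))  # → True
```

A check like this is cheap to repeat at different file sizes, which is exactly how you would learn where a service's "don't trust it above this size" line sits.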
But what if I were advising a large company about whether to adopt a service or product that it will rely upon across dozens or hundreds or thousands of employees? What if the product has been customized or custom built specifically for them? That’s when acceptance testing becomes important.
Doesn’t the Service Level Agreement guarantee that the product will work?
There are a couple of problems with relying on vendor promises. First, the vendor probably isn’t promising total satisfaction. The service “levels” in the contract are probably narrowly and specifically drawn. That means if you don’t think of everything that matters and put that in the contract, it’s not covered. Testing is a process that helps reveal the dimensions of the service that matter.
Second, there’s an issue with timing. By the time you discover a problem with the vendor’s product, you may already be relying on it. You may already have deployed it widely. It may be too late to back out or switch to a different solution. Perhaps your company negotiated remedies for that case, but there are practical limitations to any remedy. If your vendor is very small, they may not be able to afford to fix their product quickly. If your vendor is very large, they may be able to afford to drag their feet on the fixes.
Acceptance testing protects you and makes the vendor take quality more seriously.
Acceptance testing should never be handled by the vendor. I was once hired by a vendor to do penetration testing on their product in order to appease a customer. But the vendor had no incentive to help me succeed in my assignment, nor to faithfully report the vulnerabilities I discovered. It would have been far better if the customer had hired me.
Only the accepting party has the incentive to test well. Acceptance testing should not be pre-agreed or pre-planned in any detail; otherwise the vendor will make sure that the product passes those specific tests. It should be unpredictable, so that the vendor has an incentive to make the product truly meet its requirements in a general sense. It should be adaptive (exploratory), so that any weakness you find can be examined and exploited.
The vendor wants your money. If your company is large enough, and the vendor is hungry, they will move mountains to make the product work well if they know you are paying attention. Acceptance testing, done creatively by skilled testers on a mission, keeps the vendor on its toes.
By explicitly testing in advance of your decision to accept the product, you have a fighting chance to avoid the disaster of discovering too late that the product is a lemon.
My management doesn’t think acceptance testing matters. What do I do?
Thank you for the blog, James!
When it comes to purchased services (especially cloud services), test people on the customer side sometimes deal with “upgrade the service” questions.
From one side: yes, customers buy a service, so the vendor should be able to upgrade, for example, the platform the service is built on; as long as the customer’s business is fine, that is not a problem.
From the other side: any upgrade brings the potential for something to go wrong, and very often the vendor does not have the whole picture of how the customer uses the service, so it cannot ensure that the customer’s business continues to be fine.
The question is: in cases where the customer cannot “control” how the vendor upgrades the service, should the customer still have an acceptance testing practice in place? What could it look like? Could it run in parallel with the business: some “automated” checks plus fast exploratory testing in production, with the goal of discovering information faster than the business does?
[James’ Reply: Seems like you may have to re-test. Testing always comes down to risk. If you are relying very much on the product, and you have any reason to believe it may have been changed in an unsafe way, then you either must test, or you must accept the risk that comes with not testing. In medical devices, upgradeable components are typically locked down. This creates an interesting problem because security vulnerabilities go unpatched, but the cost of re-testing is also very high. The right answer, of course, is simple: don’t create such technology and don’t use it. But human ambition will not accept that answer– and so we live with the risk.]
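To make the commenter’s idea of parallel “automated” checks concrete, here is a minimal sketch. Everything here is hypothetical (the service fields, values, and the stubbed fetch function are invented for illustration): a saved baseline of what the service reports is compared against what it reports now, and any drift becomes a cue for fast exploratory re-testing.

```python
# Hypothetical stub: in real use this would query the vendor's API for
# whatever observable facts matter to your business (version, limits,
# supported formats, and so on).
def fetch_service_snapshot():
    return {"version": "2.3.1", "export_formats": ["csv", "xlsx"], "max_upload_mb": 500}

def detect_changes(baseline: dict, current: dict) -> list:
    """Return human-readable diffs between the saved baseline and what the
    service reports now; any diff is a trigger for exploratory re-testing."""
    diffs = []
    for key in sorted(set(baseline) | set(current)):
        if baseline.get(key) != current.get(key):
            diffs.append(f"{key}: {baseline.get(key)!r} -> {current.get(key)!r}")
    return diffs

baseline = {"version": "2.3.0", "export_formats": ["csv", "xlsx"], "max_upload_mb": 500}
for diff in detect_changes(baseline, fetch_service_snapshot()):
    print("changed:", diff)  # prints: changed: version: '2.3.0' -> '2.3.1'
```

Note that a check like this only detects change; deciding whether the change is safe is still a human testing job, which is the point of pairing it with exploratory work.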
I had a rant here (http://hellotestworld.com/2013/03/13/are-we-are-doing-uat-wrong/) on how Acceptance Testing is not actually a test phase and how we’ve lost our way: it has only become one through the downfall of earlier test stages. I don’t really see this as a field where testers should get involved much, as testing (including checking) should have been done by that stage. This is the evaluation of whether the thing I ordered is actually what I envisioned in the first place, or whether it needs to change to fit my intended purpose. That can really only be determined by the people who use the system. And, as you rightly point out, this task cannot be done by a vendor! That just shows how misguided this phase in projects has become.
I’d also see business (i.e. the users) pushing for this test phase. If I’m a tester having to argue for such a phase I think I’d be looking for a new job. I’d be OK with telling business, that there is something they can and should be doing and that this is essential to mitigating their risk but that’s as far as I’d go.
Acceptance testing != generalised functional/E2E regression testing
Modern IT is very much missing this phase. That is one of the reasons the health.gov or Novopays of this world actually see the light of day.
Vernon Richards says
I wish I had been able to articulate the point about why the vendor (or in my case, the delivery team) shouldn’t do UAT when I ran into a similar situation some time ago.
The best I could do was use the following analogy:
“Getting me to do UAT is like going to a restaurant and after eating your meal, asking the waiter/chef whether you enjoyed it or not”.
I thought that analogy would bring it home but alas, I failed 🙁
Thanks for the post.
Asmir Babaca says
Generally, I agree with your points.
Except, I think that an “Acceptance Test” is an oxymoronic term. Furthermore, it contains two verbs (only).
[James’ Reply: Your English is excellent, but perhaps it’s not your first language? Acceptance is a noun (see here) and so is test in this usage.]
Let me refer to some definitions found on Wikipedia:
Acceptance in human psychology is a person’s assent to the reality of a situation, recognizing a process or condition (often a negative or uncomfortable situation) without attempting to change it, protest, or exit.
Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. A primary purpose of testing is to detect software failures so that defects may be discovered and corrected.
[James’ Reply: Those definitions are not mine, but they are consistent with mine. I accept them for the moment, without testing them.]
In those two definitions I see the oxymoron in the “Acceptance Test”: investigating the reality of software without attempting to change it, with a primary purpose of detecting failures so that defects may be discovered and corrected.
[James’ Reply: Well, now I have to reject the testing definition, because it’s clear you have interpreted it in a different way than I do. The fixing of failures is a descriptive comment about testing. It is not definitional. My own preferred definition of testing is evaluating a product by learning about it through experimentation (that is consistent with my interpretation of your definition). There is nothing oxymoronic about wanting to know what a thing is prior to accepting it.]
I have experienced several situations where this conflict has occurred. The project management saw the acceptance testing as part of an acceptance process in which the customers had to declare the product (unconditionally) acceptable. At the same time, when we, the testers, organized an acceptance test, we led the customers to look for problems (risks) so they could declare the product rejectable unless the problems were resolved.
[James’ Reply: I don’t see this as a deep conflict. In both sides of this, there is a decision by the potential acceptor that must be made. The question is simply one of how well informed that acceptor will be.]
The question remains: is an “acceptance test” an acceptance or a test process?
[James’ Reply: Unless it’s just a lie, then it is a test process.]
I do agree that the “acceptance testing” described in your post is very important, except it should be called something like “customer testing” or “user testing” or whatever. As long as there is a noun and a verb, and not two verbs.
I mostly agree with you, but have several questions – most of them about your sentence “Acceptance testing should not be pre-agreed or pre-planned in any detail”:
1. Why did you avoid using more common and known term “User Acceptance Testing (UAT)”?
[James’ Reply: I think that’s a misleading term. It’s unnecessarily long, and users are almost never the people who decide to accept the product. So what does it mean? Is it user-level testing? Is it testing with real users? Is it usability testing? I don’t know. I am in favor of tightening up our language.]
2. Do you really think that if the vendor is unprepared (your acceptance test is not pre-agreed), he will fix the defects you find better and more quickly?
[James’ Reply: How strange that you would conflate the lack of pre-agreement and pre-scripting with being unprepared. Firemen do not have a pre-agreed plan for fighting a fire at your house, but they certainly can be prepared to fight a range of probable fires. I do consulting, but I never have a pre-agreed script for what I will say or do. And yet I am prepared.
I advocate being prepared to test your product. There, I said it and I don’t care how controversial that may be… oh, it isn’t controversial at all, it’s just reasonable behavior.
It certainly would be easier for the vendor if he could know the testing I will do in advance– just as it is easier to pass an exam if you are given the questions and answers in advance. But I hardly see that as a reason to make testing of EITHER kind easier for the one being tested.
The vendor has one simple thing he must do: make the product work as required. If he does that, it makes no difference to him what tests I perform.]
3. Do you test better, when you are unprepared?
[James’ Reply: I test better when I know (as you should know) the difference between being a robot following a ritual and a tester hunting for important problems. In both cases I would be prepared, but in the latter case I have some hope of fulfilling my job description.]
4. If you organize acceptance testing of a big software package (e.g., an ERP system) in a large company, and come to the end users without pre-planning, how can you get them to find time for the tests?
[James’ Reply: Why are you asking such a basic question? I feel as if I’m a fighter pilot and someone is asking me how I can keep such a large and heavy metal thing like an F-16 from falling out of the sky because air is obviously too insubstantial to support it. Look, the answer is not difficult if you spend a little while learning about this subject.
Instead of explaining basic aerodynamics or testing to you, here and now, I would urge you to read my website, my blog, or my book on testing. I also have a number of testing videos online. Enjoy.]
And before testing, they also should read requirements, know processes “to be”, and know what software testing is, shouldn’t they?
[James’ Reply: I don’t know what you mean by “to be” but other than that, yes, I would recommend that. I also recommend that they know how to tie their shoes and feed themselves. Come on, man…]
5. What is the goal of your acceptance testing: to accept the software, to find as many defects as possible, to find out whether the software suits you well enough, or something else?
[James’ Reply: In the article you are commenting upon, I answered this question. Hint: search for the word “purpose.”]
6. Don’t you think it would be great to save your time and reject software without testing at all, by sharing your tests (test cases, use cases, user stories, requirements, whatever) with the vendor and having him say that his product does not meet your requirements?
[James’ Reply: Because it is great to save my time, I refrain from using the strategy that you suggest (a strategy adored by people who know nothing about testing, or who hate testing), which doesn’t work.
You seem to believe that there is a small set of scripted fact checks that will achieve the goals of testing. That’s not how it works. The leading thinkers of testing could PERHAPS be excused for thinking that way 40 years ago. But there is no more excuse for that, today.]
P.S. Happy New Year! :o)
[James’ Reply: Yes, I hope your new year is full of learning.]
Ben Quinn says
This is somewhat difficult for me to fully explain without me being humble; context-driven testing is my ideal to strive towards, but I’m likely skewed by more traditional approaches (which conform to the waterfall development process).
[James’ Reply: There’s nothing about the Context-Driven approach that defies Waterfall as such. What we defy is doing things that waste time and also don’t work.]
So I appreciate (if you’re willing to offer advice), that I’m wrong in some of my own methods as well as the person I’m having contention with.
[James’ Reply: I will do my best.]
To give you a little context, my company is small, and we have a team of two testers, and 10+ developers. Usually when a task comes my way, if it’s not fire-fighting (we need you to test this now with no room for planning),
[James’ Reply: Hey, that’s not really Waterfall, then, is it? :-)]
then we are asked to estimate our work prior to it being done, and usually the answer I give to project managers is:
– A smoke test (an exploratory end-to-end check of the application, which is sort of the lowest-effort user journey)
– A substantial amount of exploratory testing (what I’m actually doing is approaching the application with a view to considering every component of it from the perspective of whether it functionally works on the front end, and whether it makes sense if I were a user). There’s a lot more detail to this, and I would be happy to expand on that if it helps you understand what’s going on.
– A vendor “acceptance” phase, which, as you have rightly said, doesn’t really add anything to the product, because as the vendor I’m invested in making it acceptable (and vendors can consciously or unconsciously redefine ambiguity to fit their idea of acceptance; I don’t consciously do it, but I accept that I’m potentially susceptible to cognitive dissonance).
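As a rough illustration of the smoke-test layer described above, here is a minimal sketch. The journey steps are stubs standing in for real UI or API drivers, and all names are hypothetical; the point is only the shape of the check: the cheapest end-to-end pass, stopping at the first failure.

```python
# Hypothetical journey steps for an imagined web app; each returns True/False.
# In real use each would drive the UI or API and verify the result.
def can_log_in():        return True
def can_create_record(): return True
def can_export_report(): return True

SMOKE_JOURNEY = [
    ("log in", can_log_in),
    ("create a record", can_create_record),
    ("export a report", can_export_report),
]

def run_smoke(journey):
    """Run the lowest-effort end-to-end pass; stop at the first failure,
    since a broken early step usually invalidates everything after it."""
    for name, step in journey:
        if not step():
            return f"SMOKE FAIL at: {name}"
    return "SMOKE PASS"

print(run_smoke(SMOKE_JOURNEY))  # → SMOKE PASS
```

A pass here says almost nothing on its own; it only clears the ground for the substantial exploratory testing that follows.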
At the moment project managers tell us that they don’t have the budgets to support our estimate, and ask us what can be dropped to accommodate a much smaller time frame (think 2 days instead of 20 days).
Usually my response to this is: “We’ll have to dive in using exploratory testing, focusing on the risk areas, but this will inherently increase the risk of this application not being fit for purpose; we ideally need more time.”
So that happens, and then it gets sent to the customer regardless, and it fails the customer’s acceptance test.
[James’ Reply: Again, I thought you said “Waterfall” and “Traditional.” Instead it looks like an attempt at an intense and agile development process. I mean “agile” in the original sense of the word, not as a trademark of the National Agile Methodologist Collective.]
So over time, various higher managers start thinking (without any discussion occurring with us; I hear these things through the grapevine) that the reason customer acceptance testing fails is that we as testers aren’t doing enough “User” testing, and that we aren’t thinking about how the client will use the application.
[James’ Reply: This lack of discussion with you, and instead discussion solely among apparently ignorant managers, seems to be where the breakdown is. You have to muscle your way in there. Write a report about what you think the real issue is: not enough time for necessary testing to occur. In order to do this, you will have to be able to explain and defend how you are using time now.]
I’ve been told now directly that we need to only focus on User Journeys, which to management would be high-level scripts which cover day-to-day scenarios. And we also shouldn’t be focusing on the “detail”.
[James’ Reply: You’ve been told by whom? By someone who doesn’t test, hasn’t studied testing, and ALSO expects you to do a good job of finding bugs by some magic while following their micro-managing instructions?
I think you need to push back on that strongly, while at the same time inviting management to witness how you test and why you test that way. You need to win them over. At this moment, they simply don’t believe you are credible.]
My counter to this is that our exploratory testing factors in how a user would interpret the application, and that building up knowledge of the application, so that we ourselves can become proficient users, is more helpful to understanding how users would use the application and what they expect of how it works.
[James’ Reply: Good reply.]
I then get “the dodgy metaphor treatment”, like:
– “When you first rode a bike, what did you do? You started pedalling, you didn’t need to know how the bike worked.” My response to this was;
“Understanding how the bike works is what developers do, a user journey would be the action of getting from point A to point B, me starting to pedal is me performing exploratory testing. When you first started to ride a bike, did your mum say “I need you to go to the supermarket”, without expecting to teach you how to ride it?”
[James’ Reply: Very nice.]
And another dodgy metaphor
– “When you first get in a car, you don’t need to know that the dashboard lights work, the first user journey is putting the key in the ignition”
And so on and so forth (there are a few more examples if you can believe that).
Anyway, long story short: I’m stuck trying to explain what our methods really are and why they are more efficient, and I’m becoming more disillusioned with my company’s monetary-only investment in testing. They are aware they need testers, but not aware why, and every time I try to explain why, they don’t really care…
Thanks for your time.
[James’ Reply: First, your credibility is not good enough, or they wouldn’t be treating you this way. Judging from the quality of your writing, here, it seems to me you are very bright and sharp-witted. You ought to be able to earn that credibility if you can overcome the social barriers in the way.
Second, you need to relate what you are doing directly to their business concerns. If you follow their instructions and the customers are unhappy, then nothing has been gained. Show that your way DOES address their business concerns. If I were you I might shift the subject to intrinsic testability (see my site for a document about that) and also I would lay out several “layers” of testing (much as you have done, above, but in more detail) and talk about how management can remove a layer if they choose, but that will mean certain bugs will get through.
Third, they may be doing this because they feel desperate. But what you need to get them to do is think about this in the right way: if we are desperate, let’s admit that we are willing to run higher risks when we ship. And let’s NOT pretend it’s a “testing problem” that testers and developers can’t rewrite the laws of physics and create an arbitrarily great product in an arbitrarily small amount of effort.]
Simple answer: yes, we need acceptance testing.
Thanks for sharing this.