(Thank you, Anne-Marie Charrett, for reviewing my work and helping with this post.)
One of the reasons I obsessively coach other testers is that they help me test my own expertise. Here is a particularly nice case of that, from working with a bright and resilient student, Anita Gujrathi (whose full name I am using here with her permission).
The topic was integration testing. I chose it from a list of skills Anita made for herself. It stood out because integration testing is one of those labels that everyone uses, yet few can define. Part of what I do with testers is help them become aware of things that they might think they know, yet may have only a vague intuition about. Once we identify those things, we can study and deepen that knowledge together.
Here is the start of our conversation (with minor edits for grammar and punctuation, and commentary in brackets):
What do you mean by integration testing?
[As I ask her this question I am simultaneously asking myself the same question. This is part of a process known as transpection. Also, I am not looking for “one right answer” but rather am exploring and exercising her thought processes, which is called the Socratic Method.]
Integration testing is testing conducted when we are integrating two or more systems.
[This is not a wrong answer, but it is shallow, so I will press for more details.
By shallow, I mean that it leaves out a lot of detail and nuance. A shallow answer may be fine in a lot of situations, but in coaching it is a black box that I must open.]
What do you mean by integrated?
That means kind of joining two systems such that they give and take data.
[This is a good answer but again it is shallow. She said “kind of” which I take as a signal that she may not be quite sure what words to use. I am wondering if she understands the technical aspects of how components are joined together during integration. For instance, when two systems share an operating space, they may have conflicting dependencies which may be discovered only in certain situations. I want to push for a more detailed answer in order to see what she knows about that sort of thing.]
What does it mean to join two systems?
[This process is called “driving to detail” or “drilling down”. I just keep asking for more depth in the answer by picking key ideas and asking what they mean. Sometimes I do this by asking for an example.]
For example, there is an application called WorldMate which processes the itineraries of the travellers and generates an XML file, and there is another application which creates the trip in its own format to track the travellers using that XML.
[Students will frequently give me an example when they don’t know how to explain a concept. They are usually hoping I will “get it” and thus release them from having to explain anything more. Examples are helpful, of course, but I’m not going to let her off the hook. I want to know how well she understands the concept of joining systems.
The interesting thing about this example is that it illustrates a weak form of integration: so weak that if she doesn’t understand the concept of integration well enough, I might be able to convince her that no integration is illustrated here.
What makes her example a case of weak integration is that the only point of contact between the two programs is a file that uses a standardized format. No other dependency or mode of interaction is mentioned. This is exactly what designers do when they want to minimize interaction between components and eliminate risks due to integration.]
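In miniature, that kind of file-based handoff might look like the following sketch (all names, tags, and values are hypothetical, invented for illustration). The producer and consumer share only a standardized XML file; neither program calls, links against, or even knows about the other.

```python
# Hypothetical miniature of a file-based handoff: the only "contact"
# between the two programs is a shared XML file in an agreed format.
import xml.etree.ElementTree as ET

def write_itinerary(path):
    # The "WorldMate" role: emit an itinerary as XML.
    trip = ET.Element("trip")
    ET.SubElement(trip, "traveller").text = "A. Example"
    ET.SubElement(trip, "flight").text = "XY123"
    ET.ElementTree(trip).write(path)

def read_itinerary(path):
    # The consuming product's role: parse the standardized file.
    root = ET.parse(path).getroot()
    return {child.tag: child.text for child in root}

write_itinerary("trip.xml")
assert read_itinerary("trip.xml") == {"traveller": "A. Example",
                                      "flight": "XY123"}
```

When nothing but the file format couples the two programs, most of the risk lives in each program's handling of that format, which can largely be tested in isolation.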
I still don’t know what it means to join two systems.
[This is because an example is not an explanation, and can never be an explanation. If someone asks what a flower is and you hold up a rose, they still know nothing about what a flower is, because you could hold up a rose in response to a hundred other such questions: what is a plant? what is a living thing? what is botany? what is a cell? what is red? what is carbon? what is a proton? what is your favorite thing? what is advertising? what is danger? Each time the rose answers some specific aspect of the question, but not all aspects. How do you know what the example of a rose actually refers to? Without an explanation, you are just guessing.]
I am coming to that. So, here we are joining WorldMate (which is a third-party application) to my product so that when a traveller books a ticket from a service and receives the itinerary confirmation email, it then goes to WorldMate, which generates XML to give to my product. Thus, we have joined, or created the communication between, WorldMate and my application.
[It’s nice that Anita asserts herself, here. She sounds confident.
What she refers to is indeed communication, although not a very interesting form of communication in the context of integration risk. It’s not the sort of communication that necessarily requires integration testing, because the whole point of using XML structures is to cleanly separate two systems so that you don’t have to do anything special or difficult to integrate them.]
I still don’t see the answer to my question. I could just as easily say the two systems are not joined, but rather independent. What does join really mean?
[I am pretending not to see the answer in order to pressure her for more clarity. I won’t use this tactic as a coach unless I feel that the student is reasonably confident.]
Okay, basically when I say join I mean that we are creating the communication between the two systems.
[This is the beginning of a good answer, but her example shows only a weak sort of communication.]
I don’t see any communication here. One creates an XML, the other reads it. Neither knows about the other.
[It was wrong of me to say I don’t see any communication. I should have said it was simplistic communication. What I was trying to do is provoke her to argue with me, but I regret saying it so strongly.]
It is a one-way communication.
[I agree it is one-way. That’s part of why I say it is a weak form of integration.]
Is Google integrated with Bing?
[One major tactic of the Socratic method is to find examples that seem to fit the student’s idea and yet refute what they were trying to prove. I am trying to test what Anita thinks is the difference between two things that are integrated and two things that are simply “nearby.”]
According to you, they are! Because I can Google something, then I can take the output and feed it to Bing, and Bing will do a search on that. I can Google for a business name and then paste the name into Bing and learn about the business. The example you gave is just an example of two independent programs that happen to deal with the same file.
So, if I test the two independent programs, haven’t I done all the testing that needs to be done? How is integration testing anything more or different or special?
At this point, Anita seems confused. This would be a good time to switch into lecture mode and help her get clarity. Or I could send her away to research the matter. But what I realized in that moment is that I was not satisfied with my own ideas about integration. When I asked myself “what would I say if I were her?” my answers sounded not much deeper than hers. I decided I needed to do some offline thinking about integration testing.
Lots of things in our world are slightly integrated. Some things are very integrated. This seems intuitively obvious, but what exactly is that difference? I’ve thought it through and I have answers now. Before I blog about it, what do you think?
Timothy Western says
I like this topic a lot, I’ve recently had an interesting epiphany about many terms including integration testing. And have realized that something like this, how it is understood can drastically alter how the idea is perceived.
I would say an integration is as strong as the number of dependencies between the parts, and the impact that these dependencies may have on the assembled product. And the dependencies can involve a number of factors that affect the product’s quality and stability, such as:
– code dependencies (libs, usage of different languages, backward compatibility)
– performance (if one of the parts does not respond to the other fast enough, or if a part cannot handle the other’s throughput)
– interface signatures (all the expected inputs/outputs should be exchanged, respecting the rules, such as the format and number of arguments)
– hardware dependencies
And the list of factors would depend on the aspects of the application.
I’m looking forward to reading your answer!
[James’ Reply: This is a nice response. Good specific ideas about dimensions of integration risk.]
One aspect of the level of integration is how complexity drives integration.
[James’ Reply: I’m listening…]
Imagine several complex real-time sub-systems that depend heavily on each other for quick and secure communication. Such a system would need extensive communication to work, resulting in strong integration between the sub-systems. The strong integration brings several parameters to check during integration testing. Compared to a less complex system, you will need to do more testing, which is a direct consequence of the higher complexity.
[James’ Reply: Can you be more specific? What is “extensive communication?”]
So in this aspect, the level of integration depends on the complexity of the system. If your system is trying to answer a difficult question, it will probably need many steps to do that. If the question is simple, the number of steps will be smaller and the integration needed is also less.
[James’ Reply: I don’t see how the number of steps itself is an issue. I also don’t know how you would propose to count steps. How many steps does an event loop count for?]
It’s like two people talking, trying to answer what the meaning of life is. They will need to use several disciplines to come up with an answer, and integrate heavily through debate. If one guy just wanted to know what time it was, the integration between the two would have been much weaker and the number of dependencies fewer.
[James’ Reply: My intuition agrees with you, but my reflective mind has no idea what you are talking about. So, how can we get clearer on this? I don’t want to sound like a tarot card fortune teller when I teach this stuff.]
As with everything in life, the harder the task the more you will have to integrate to find an answer.
Just some bed time thinking 🙂 Great blog
David Greenlees says
Integration – an act or instance of combining into an integral whole.
[James’ Reply: Does that statement really shed any light on the subject, David? We need to go deeper than that.]
If we look at integration as communication then perhaps we never get true ‘integration’. There will be output on one side, and input on another, but are they really ever integrated into one whole? If we get all the way down to the 1s and 0s then probably not.
[James’ Reply: This sounds like mystical talk. What do you think true integration is?]
Two people having a conversation… 2-way conversation with knowledge sharing taking place. Do they integrate into a whole? No.
[James’ Reply: I could as easily say yes.]
If they physically hug while having the exchange, do they integrate into a whole? No. Perhaps there are some arguments that they do, on a social and/or emotional level; however, that won’t apply to machines.
So I think we need to take a higher level view? Taking product A and product B, and turning them into product C (C being the integrated whole). Then can we call all testing of product C integration testing? No, but there are parts of it that we probably could, like end-to-end testing perhaps? I’m using question marks and some safety language here because I’m not entirely sure myself, and I also need to think more deeply about this.
[James’ Reply: I feel that you are spinning your wheels but you haven’t yet moved very far. You are still getting warmed up.]
Perhaps we could also see a few products working together (our single entities) to help us achieve a business process flow (our integrated whole). Therefore testing the process flow using the products together becomes our integration testing. I’m not sure… just spit-balling now.
[James’ Reply: One thing you can say is that there are different dimensions in which we could talk about wholeness. Without identifying those dimensions, it doesn’t help much to speak of it.]
Dave McNulla says
Integration: the act or process of integrating
Integrate: to make whole by bringing all parts together
[James’ Reply: Yes, but that’s a shallow answer. What does it MEAN to bring parts together?]
When I test integration, I test services & subsystems together.
[James’ Reply: Of course, but what does that MEAN and why specifically should you do that?]
There may be multiple servers, an enterprise service bus, or multiple services within one monolithic application, or a combination. Sometimes I am aware of the standards of communication (e.g. when I tested the T30 fax protocol) and sometimes I am not.
One could test an ATM. The whole would be a system with which a person can self-identify using a card and a PIN. The card interface and touch-screen interface integrate with a transaction manager and finite state machine, which integrates with a bank communication system, which communicates with each bank’s accounting system. There are internal and external integrations. Integrations may be synchronous calls to a REST interface, a call to another jar/dll, or through a queue (or a topic bridged to a queue).
[James’ Reply: This part is obvious. I’m looking for the implications of it.]
I could test business logic here, but shouldn’t I test how these clients and servers interact with each other in both typical and unexpected circumstances? A database error on the bank’s DB, insufficient funds in the account, an XML/XSL validation error on a message, insufficient funds in the overdraft account, insufficient funds in the ATM’s cash reserves.
[James’ Reply: This doesn’t look like a list of interactions. It looks like a list of conditions that exist within sub-systems. I’m pushing to hear something above and beyond behavior that is easily tested on a unit level.]
In Anita’s example, how is the information consumed? Is an email forwarded, or is an email client checking for new emails relating to travel? What system parses the email and creates the XML? Is the XML put in a queue, sent by REST, or delivered some other way for Anita’s product to consume? How is it validated against account information? Is there another system that applies the data (some kind of CRUD)?
[James’ Reply: Yeah, that starts to get to integration risk… investigating the specific nature of the dependencies.]
The toughest part of integration testing is in revealing the assumptions I am making. I am really interested in seeing your thoughts on this.
[James’ Reply: Excellent point. That is one of the key issues: the fact that we don’t KNOW about all the dependencies until we encounter the product after integration. They are assumptions until we try it out. Some of the problems will lie quietly until the moment that a particular function is invoked.]
Dave McNulla says
>> [James’ Reply: This part is obvious. I’m looking for the implications of it.]
I didn’t think it was obvious within the original post, so I felt it added to the conversation, but I’ll accept the challenge. The implications are that queues can build up and/or REST services can block additional connections; we could have a version of the Affordable Care Act website. Parts of transactions could become permanent while other parts are rolled back. Authentication could be hacked.
[James’ Reply: Thank you, yes. You reeled off a list of interesting product elements– that was the obvious part: the fact that a product has functions that do various things– but what we need to get to is the integration issue and now you are starting to drop that next shoe. Which is: What sorts of things might work fine in isolation but get disrupted or choked up when plugged into other components?]
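The “queues can build up” failure mode can be sketched in miniature (entirely hypothetical, with a bounded in-memory queue standing in for any messaging layer). Each side behaves correctly by its own unit tests; the failure only exists when the pair is connected and the producer outruns the consumer.

```python
# Hypothetical sketch: producer and consumer, each fine in isolation,
# connected by a bounded queue. The failure is integration-level:
# it appears only when production outruns consumption.
import queue

q = queue.Queue(maxsize=3)

def produce(item):
    q.put_nowait(item)       # raises queue.Full once the queue backs up

def consume():
    return q.get_nowait()

for i in range(3):
    produce(i)               # fine while there is capacity
try:
    produce(3)               # no consumer running: the queue is full
    backed_up = False
except queue.Full:
    backed_up = True
```

Unit tests of `produce` or `consume` alone would never exercise the full-queue condition; only the assembled pair under load does.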
I think the main reason this question is difficult is that “integration” refers to quite a wide spectrum, ranging from one system consuming an output created by another system (which might not even be aware of its consumer), to two modules in the same system that are used to satisfy the business needs of one product. (The definition of “module” is also vague, as some modules are more independent than others. For now, I’m assuming that a “module” is a piece of software that has a distinct purpose one can summarize in one sentence, and whose code is easily identified as being part of it, so one can take the whole module and move it around relatively easily.) The more I think about it, the more I am convinced that the two sides of that spectrum are completely different activities. In one case (communication of two stand-alone systems), the only thing separating “integration” from simple functionality (e.g. “this server shall process requests in the following format…”) is the explicit will of the system under test’s stakeholders to assert (or to do the best they can to assert) that the connection works. When I use the Google Maps geolocation API, I might consider it “integrating with Google Maps”, but I’m pretty certain the folks testing this API at Google are not considering this to be an integration testing task, nor should they.
[James’ Reply: When they test their system functionally, it’s just normal testing. When a client organization tests with that API, it’s more properly called platform testing, because the API is a platform from their point of view. Platforms are anything required by your product and yet outside of your control. If two things that work together are both in your control, it would be integration testing. The mechanics of integration testing and platform testing are more or less identical, but certain other issues affecting testability may be very different.]
On the other side, there is integration between a system and an in-house built, semi-independent module. In this case, “integration” means allocating and managing the resources needed by the module (we had an integration bug in our product when a DB connection we supplied was set to “auto-commit”) and using that module’s libraries as part of the main system (so, for example, I could invoke a method and expect to get a response in the form of an object that is defined by that module, as opposed to just getting text representing the response).
[James’ Reply: One side of the integration testing problem is to imagine specific dependencies and failure modes, as you are doing here.]
The only thing I think is similar between testing the two very different things is that in both cases I’m stating: “I’m not trying to find faults in the module’s internal logic, but rather to check that the data is passed and parsed properly.” In the Google Maps API example, I will just send a request and see that my system can parse the response and use it as intended, and with the in-house module I may benchmark its performance and check its DB to see that what I’ve sent was received properly and the response was dealt with as intended.
[James’ Reply: Can’t you do that testing in a lab without actually integrating them into the same machine environment? I agree that covers one aspect of integration, but what about the issue of conflicting dependencies?]
Something interesting that crossed my mind while writing: it seems to me that in an integration testing scenario, there is the standpoint of “my product” and “that thing I’m integrating with”. (I can imagine some sort of “meta-view” where neither component is “my own” and neither has the focus, but it seems a bit shallower, though I may be completely wrong here.) When I look at this notion, my approach to testing integration changes a bit. While my attention is still mainly on the communication between the systems (allocating memory and DB connections, in this case, count as communication), my mandate in testing this “integration” extends to finding logical flaws in *my* product. They might be caused by problems in the other system (such as deviating from a defined protocol, or even bluntly providing “wrong” answers), but this is a problem in “my” product, and therefore “my” problem. (Fixing the other system is a great way to deal with this problem, but creating a workaround is an option too, and sometimes the only one available in a timely manner.)
[James’ Reply: You are describing exactly what I mean by platform testing.]
And finally, re-reading my answer, it seems that I focused too much on API integration: I send a message and get a response, and vice versa. I have mostly ignored the possibility of providing a sub-system the required resources and letting it “do its thing” without “my system” ever noticing it; specifically, when a component comes with its own GUI. In such a case, the integration task changes a lot. Now, still assuming that the other system is working properly, I have to check whether what is considered proper for it is proper in my context. Does the GUI conform with my own? Does it stand up to the same security standards as my product? Does it provide support for disabled people? Does its assumed business flow match the assumed business flow of my product?
[James’ Reply: I like your thought process. This is what I mean by going deeper, folks.]
Amit Wertheimer says
I’d like to dwell a bit on some of the things you’ve said.
1. Platform testing – I’ve never thought about APIs as a “platform”, and I find that I am a bit reluctant to use this term. The main reason is that I’m used to “platform” meaning things like the OS I’m running on, the device, or the programming language – things that are not only out of my control, but usually have control over my software. APIs are things my software calls voluntarily (unless I’m the server, in which case I listen voluntarily to get the input), processing the input and deciding what to do.
I would rather name it “service”.
[James’ Reply: Then I would urge you to consider what is the essence of a platform. I believe when you look at the use of that word in the technical world you will come to the same or similar conclusion as I have: a platform is something a product depends upon but that is not controlled by its project. It is a nice clean definition that of course includes OSes and hardware, but also other things. That means a library produced by your company but controlled by another project must be treated by your project as a platform. Platform testing is special mainly because of the testability considerations. We must find platform bugs earlier because they may be expensive to work around or to get fixed. A platform also often has many clients and is updated in an uncoordinated manner that may affect those clients.]
However, platform or service, I’m a bit torn between two impulses. The 2nd causes me to believe I missed something.
My first impulse is to say “great, another term I can use to box an idea to focus, refine and zoom-in later!”.
The 2nd impulse is to ask: “If platform testing is mostly similar to integration testing, only with less testability, is it worth having a name of its own? Is it different enough to be treated differently?”
[James’ Reply: There are special considerations with platform testing, and the testability is lower with platform testing. I think it’s useful having a special name for it to signal the presence of those special issues. The main special issues are: uncoordinated schedule, inaccessible black box (usually), inability to dictate bug fixes.]
2.”Can’t you do that testing in a lab without actually integrating”
No, I can’t. Unless I write the whole component from scratch, when using stubs/simulators I will be missing things such as response time and resource consumption, but what is most important: while I’m in the lab, I’m testing against my imagined requirements. I’m not confronting the assumptions as they were interpreted by whatever I’m integrating with. If I think the API is X, but it is actually X’, I may be in trouble.
[James’ Reply: Good. I was looking for at least that level of detail.]
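That “I think the API is X, but it is actually X’” failure mode can be sketched as follows. Everything here is hypothetical: the stub encodes the tester’s imagined contract, and the “real” function stands in for any response shape the stub author did not anticipate.

```python
# Hypothetical sketch: a stub that encodes the *imagined* contract,
# and a "real" component whose response shape differs. Tests against
# the stub pass; the same client code fails on integration.

def stub_geolocate(address):
    # Imagined contract: a (lat, lon) tuple.
    return (52.52, 13.405)

def real_geolocate(address):
    # Actual behaviour: a dict -- same information, different shape.
    return {"lat": 52.52, "lon": 13.405}

def client(geolocate, address):
    point = geolocate(address)
    return point[0] + point[1]   # assumes the tuple shape

client(stub_geolocate, "Berlin")        # passes in the lab
try:
    client(real_geolocate, "Berlin")    # fails on integration
    mismatch = False
except (KeyError, TypeError):
    mismatch = True
```

No amount of testing against the stub reveals the mismatch, because the stub is built from the very assumption that turns out to be wrong.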
I think I’ll start examining the concept of integration testing based on several scales –
a. component VS service – a component is something I develop, a service is something that was developed and will not change for me.
[James’ Reply: I don’t believe that is what service means in the technical world. That is what platform usually means, though. When a program uses a service, these days that usually refers to a web service set up on a server, somewhere, often in the cloud. Your project could set up a service that it controls, in which case testing it would not be platform testing from your point of view.
For a long time I tested developer tools. To developers, our tools were a platform, but not to the testers who worked on them.]
In that spectrum there are components developed for me by other teams, or that they exist, but I can ask for changes and get them.
b. Systems VS components – systems can work fine without each other. If Google were to block their GeoLocation service, my system won’t be able to get geolocation, but it will work fine (and present N/A in the geolocation fields). Components are more tightly intertwined. Components can’t work alone; they need the support provided by other components.
c. interfacing VS assimilation – interfacing is calling defined APIs. Both sides are doing their own things; only occasionally, at the will of either side, some information (including commands) is passed between them. Assimilation is a case where a main software (usually a system) incorporates a component, providing it with resources and managing everything in it.
d. well defined VS run-amok – well defined is an API: you know what you get, and that’s it. You control when to invoke that functionality. “Run-amok” is software that can do just about anything without “my” intervention, such as something with a new UI (imagine the Facebook comments mechanism – in theory they can change the GUI to ghastly green and destroy my red-themed UI).
e. egalitarian scale – can I say that one product is “being used” by another, or is it a case of “those two things interact to achieve a common goal”?
[James’ Reply: This is the level of thinking I wanted to provoke. Thanks.]
Mesut Güneş says
Integration is the process of building different parts together to perform new functionality. Integration testing is exploring whether these functions work as expected.
[James’ Reply: This is not informative. Come on. The whole point is to go deeper. So what does this really mean?]
Mike Talks says
My gut reaction on reading this was recalling something my artist friend Violet told me about Egyptian sculpture.
If you look at just one component of it – the hands, the face, the feet – each looks completely like what it’s supposed to. But when you look at the statue as a whole, it looks somewhat disproportionate and “doesn’t work” quite how it should.
Likewise I see integration as looking at the product and how it behaves as a whole system. Typically early in development/testing we’re going to be using a lot of plug-ins/drivers/stubs to simulate the larger ecosystem our product is going to be part of. This is a great start, and it helps to ensure our component is as robust as we can get it.
These stubs allow us more control over our system for testing, but they’re not the real thing. And in testing we want ultimately to be able to test in a system which is as close to the final system as possible (even if ultimately we have less control in that system).
[James’ Reply: But why, Mike? What added benefit do we get? Of course I think there IS an added benefit, but the point of my challenge was to get you to say specifically what that is. Obviously we want to test the final product, but why, if we have tested the final components, would that not be the same thing as testing the final product?
Let me put it this way, when you test the final integrated product, the individual units don’t “know” that any other unit is anywhere nearby, right? Each line of code does what it does without checking whether other lines of code are on the same system, right? Or am I wrong? If the units are being units, why can’t we rely on unit testing to sort everything out?]
Integration testing then to me is about putting together your pieces and confirming they plumb together nicely, in a way you expect.
(If you were to build a bike… and use a mountain bike front wheel, a BMX frame, a racing bike rear wheel, and gears from a child’s bike. Each component can be high quality, precision made, and tested. But together they make a damn hideous bicycle.)
[James’ Reply: Why exactly would that bike be hideous? How much of that is that the units are in fact not performing the functions that the other units need them to perform? For instance, if two units of the bicycle don’t match in size, then you could say that each is not performing the “function” for the other that the other was expecting, because one function is “connect to me at this point.”]
Mike Talks says
In unit testing, where we use more controlled stubs, we’re only exercising, through the stubs, a very limited set of interactions between the components. Real components can behave in ways which are unexpected and surprising.
I like to think of it like the Berlitz language guides, where you’ll get a page of script for “how to ask for directions to the hospital”, full of typical responses you might get.
Me: Where is the hospital?
Them: Down that road, on the left
Them: Two blocks behind you, on the right
When you come to actually use it, that’s what they might say. But more than likely they’ll go off script and go “hospital? They closed the hospital in 1991 from cuts. Cuts I tell you”. And you just look bemused because it’s not in your script.
[James’ Reply: I like this example. But why can’t we simply do the testing in the units and find these bugs?]
When you put component A with component B in some form of relationship, you’ve created a system. The behaviour of the overall system is down to the behaviour of A, the behaviour of B, but also how they interact with each other.
The overall system’s behaviour is more complex than the components taken individually. Likewise, take the human brain – each neuron in your brain is relatively simple, but the way they interact produces one of the most complex objects in the natural world. We can’t say that because we understand how a neuron works, we understand how the system of the brain works.
[James’ Reply: I agree, but can you be more specific? WHY can’t we understand a software system by understanding its components?]
Thanks, James, and a bit of reflection on that. Notice I always go into science-teacher mode and try to explain things using analogies, mainly because usually the person I’m talking to is a non-tester, and I need to win them over to my viewpoint. Certainly, throwing more technical jargon at someone who doesn’t understand testing that well doesn’t seem to win the day – any advice for engaging with such people?
[James’ Reply: I agree that you need to identify a general pattern and then either demonstrate it or use an analogy. But that doesn’t mean we have to stop at very general analogies or broad metaphors. We can still get pretty specific.
See how Richard Feynman does this in this video: https://www.youtube.com/watch?v=EKWGGDXe5MA
At 4:28 he starts a general analogy about computers as filing systems. But then he takes that analogy to interesting depths, such as at 37:35 where he talks about integration, or 45:00 where he talks about computers that “think.”]
Mike Talks says
Thanks for the Feynman link.
I think I’m pushed to the best answer I can – and frustratingly it’s been right under my nose. In my test team, we’ve been looking at a few scenarios to play out planning testing on different items.
Last month we took a look at Conway’s Game of Life, which elegantly shows the difference between components and a system.
For the quick summary – Conway’s Life is made up of a grid of cells which follow certain rules, and is meant to mimic some of the complexity of life.
So the cell rules – requirements, if you will:
If a live cell has 2 or 3 neighbours, it stays alive
If a live cell has fewer than 2 neighbours, it dies from underpopulation
If a live cell has 4 or more neighbours, it dies from overpopulation
If a dead cell has exactly 3 neighbours, it resurrects and becomes alive
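Stated as code, the per-cell rule is a single pure function – exactly the kind of unit that JUnit tests can pin down conclusively. A minimal Python sketch, using the standard Conway rules:

```python
def next_state(alive: bool, live_neighbours: int) -> bool:
    """Apply the Game of Life rules to a single cell for one generation."""
    if alive:
        # Survives with 2 or 3 neighbours; dies of under/overpopulation otherwise.
        return live_neighbours in (2, 3)
    # A dead cell resurrects only with exactly 3 neighbours.
    return live_neighbours == 3

# One unit-level check per rule:
assert next_state(True, 2) and next_state(True, 3)   # stays alive
assert not next_state(True, 1)                       # underpopulation
assert not next_state(True, 4)                       # overpopulation
assert next_state(False, 3)                          # resurrection
assert not next_state(False, 2)                      # stays dead
```

Each assertion exercises one rule in isolation, at the single-cell level, with no grid involved.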
Putting a single cell through a series of JUnit tests, we were able to conclusively test that the logic for each cell followed its rules. But being able to prove Conway’s rules in one cell does not mean we’ve got a working system.
If you look at videos of Conway’s Life you should be able to see how the rules, in aggregate, support patterns which are still lifes, oscillators and spaceships/gliders. These phenomena are a product of the system of cells, and they CANNOT be tested at the cell level. We need another form of testing which allows us to model, reproduce and check these patterns.
[James’ Reply: I don’t understand. What “bug” do you think you are going to find with “integration testing” in Conway’s game of life? The whole point of that game is that the unit level rules completely determine everything that happens.]
This became our “integration testing” – integrating a series of these elements together. We didn’t really touch upon those core requirements again (we’d tested them elsewhere), but we built patterns from Wikipedia to confirm we could reproduce the system behaviour seen there. We also tried some of our own patterns and put the changes under scrutiny – were the rules obeyed?
We found that at the integration level there was much more variety of scenarios to try than at the low level, where behaviour at the cellular level was somewhat limited.
[James’ Reply: What exactly was the value you got out of doing that? Did you find a bug?]
Mike Talks says
Oh … I’d modified the code in one of our phases … there was a bug to be discovered.
[James’ Reply: In what way could that bug hide from unit tests?]
People aren’t really interested in the cellular rules of Conway’s Life – what interests them is how groups of cells work together. That is, after all, the aim of the simulation. As discussed, we can’t fully represent that in unit testing.
[James’ Reply: But that has nothing to do with the proper functioning of the code; it’s about the implications of the logic of the game itself. You are not talking about testing the RULES of the game, are you?]
ARRAY INITIALISATION TESTING
What we can’t represent is whether everything is set up in the grid correctly. If I have cell B2, I expect the states of its neighbours (A1, A2, A3, B1, B3, C1, C2, C3) to be counted up when considering the state of B2.
[James’ Reply: Integration testing includes validating that the user has correctly decided what the input should be? I’m not sure if that’s what you are saying.
The user can do what he wants to do. That doesn’t mean there is a bug in the game.]
How do I know for sure? The still life objects are easy to move around my board space, and allow me to perform spot checks confirming that cells are “wired together” correctly. I might also, for instance, split a static across a boundary – i.e. make a square using A2, A3, H2, H3 (on my 8 x 8 grid) as live cells, to see if G2 is considered a neighbour to A2 because some bad method has caused the grid to roll over into a kind of cylinder.
[James’ Reply: Isn’t that just unit testing squares that are on the boundary? That’s boundary testing, right?]
If my statics don’t remain (basically wither away), then I know there is a problem – probably with how neighbours are assigned in the initialisation.
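That “cylinder” check can be sketched concretely. Below, a hypothetical neighbour-counting routine with hard edges sits next to the wraparound bug described above; a live cell placed against the edge of the board distinguishes them:

```python
def live_neighbours(grid, r, c):
    """Count live neighbours with hard edges (cells off the board count as dead)."""
    rows, cols = len(grid), len(grid[0])
    count = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                count += grid[rr][cc]
    return count

def live_neighbours_buggy(grid, r, c):
    """The 'cylinder' bug: modular indexing silently wraps the edges round."""
    rows, cols = len(grid), len(grid[0])
    return sum(grid[(r + dr) % rows][(c + dc) % cols]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0))

# A cell in column A should not see column H as a neighbour on an 8 x 8 grid.
grid = [[0] * 8 for _ in range(8)]
grid[2][7] = 1                                 # live cell in the last column
assert live_neighbours(grid, 2, 0) == 0        # hard edge: no neighbour seen
assert live_neighbours_buggy(grid, 2, 0) == 1  # wrapped edge: phantom neighbour
```

A still life split across the edge would wither (or survive) depending on which of these two implementations the grid initialisation actually uses.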
[James’ Reply: So you call it integration testing to test the topology of the space in the Game of Life? What exactly is being integrated there? It’s just a data space. That seems like a unit level concern to me.]
So far we’ve been able to prove the system can create and support static shapes. Static shapes are certainly easier to test and to move around the board.
[James’ Reply: If the system correctly implements the rule of the game, then that must logically follow and there is no uncertainty or risk left, I would think.]
But how can we be sure the grid as set up supports dynamic rule application? How can we be sure each cell is picking up the latest state of its neighbours?
[James’ Reply: Your unit testing covered that.]
For that, dynamic shape reproduction can help. Oscillators are useful, as you can place them wherever you want around the grid, and they are easier to monitor. If they work, you can attempt to reproduce spaceships.
MOST ELEGANT YET
If you create a spaceship which travels across a row, you are checking the dynamic application of the rules (the shape should keep cycling through its phases), but as it moves across the grid it is also testing the initialisation of cells to neighbours.
[James’ Reply: By dynamic I assume you mean that the static application of the rules is enclosed in a loop. This is not an integration issue.]
If either of these things has a problem, the spaceship will break up, because a bad implementation of either will prevent the spaceship phases from being sustained.
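As a sketch of that check (standard Conway rules on an unbounded plane; the code is hypothetical): step a glider four generations and assert that it reappears intact, translated one cell diagonally.

```python
from collections import Counter

def step(cells):
    """One generation over a set of live (row, col) cells."""
    counts = Counter((r + dr, c + dc)
                     for r, c in cells
                     for dr in (-1, 0, 1)
                     for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Born with exactly 3 neighbours; survives with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

GLIDER = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

g = GLIDER
for _ in range(4):        # one full period of the glider
    g = step(g)

# The glider sustained all four phases and moved down-right by one cell.
assert g == {(r + 1, c + 1) for r, c in GLIDER}
```

If either the rule application or the neighbour wiring is broken, the shape fails to reproduce and the assertion catches it.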
Mike Talks says
I’m reflecting back on our discussion so far. And breaking out of it for a moment this is what I’m thinking, and I guess what I’ve learned.
I started with some high-level discussion. Reading around other answers, I think there is a common fundamental thing going on which I might have missed initially. In a way, “integration” is a rather abstract and formless concept in itself. It really needs a good enough example put on the table so the concepts can be explored in so far as they apply to that example.
[James’ Reply: I agree. But I think the example you chose is not really an integration testing example, any more than testing with more than one character entered into a Word document would be an integration test of Word. The Game of Life uses “production rules” to produce iterative versions of output from a starting set of data. In that sense, a square is just like a bit within a byte, and having more than one bit turned on in a byte is not what we mean by integration.
Integration is about putting components (not bits of input) together, for some meaning of the word “together” (which is what I’m interested in clarifying).]
Were we on the same project, or had we a past project with some shared understanding, I could discuss the concepts in those terms. I’m trying to find an example which is a real computer problem to explore – one rich enough and parallel enough to real-world systems. I’m realising that what works mentally for me might not work for someone else. Or I might be struggling in this kind of forum to get my ideas across.
[James’ Reply: I believe it’s working better than you think it is. Your example shows me that you are talking about a different thing than what I’m talking about. The example helps me see that.]
Typically when I’m engaging with someone and I’m at this point, I have that feeling of “well, I’m not sure we’re coming to enough of a common understanding … I need to change something”. And that could be:
* Change the example we’re using (which could be frustrating because we’ve both invested time in this – but if it’s not working we need to be brave enough to)
* Change the medium we’re communicating on (I could email or Skype you). I notice that every answer I give results in multiple questions. I think if we reduced that down to just a short answer and a question back and forth, we would better build up a common framework to proceed
* Change how I display my thoughts to you. Maybe something more visual, like a PowerPoint slide, or even a video of me using a Conway’s Life simulator, explaining as I go
I think the above are my ideas … but on deeper reflection, they’re probably what you covered in your mentoring testers workshop.
[James’ Reply: I agree with your idea for changing the mode of communication. I’m sure that would work better for us, especially since we know each other.]
Heiki Roletsky says
I recommend all thinking testers look at the YouTube link (https://www.youtube.com/watch?v=EKWGGDXe5MA). Richard Feynman wasn’t just a great scientist (which was probably the most important role in his life); in my eyes he was also a great tester. He tested the human limitations of knowledge about Nature.
John Stevenson says
Interesting thought process James, very much appreciate how you have added your own thinking during the interview. There is much I will take away from that teaching and mentoring technique.
My own thoughts on integration testing and what it means, in my context are as follows:
Setting the scene:
We have various in-house and third-party components, which for simplicity’s sake we shall call sub-systems. Each of these may be developed by different teams, all working to a loosely defined specification of what they are supposed to do with regard to communications, flows, processes and interactions.
We take all of these ‘sub-systems’ and attempt to put them together as a whole system, utilizing systems thinking. The purpose of integration testing, in my mind, is to try to uncover missing information, assumptions, misinterpretations and unexpected behaviour. As you quite rightly pointed out, some of these ‘sub-systems’ could be either tightly or loosely integrated. The difference could be as follows:
Sub-system Alpha is independent of sub-system Omega in terms of dependencies; however, Alpha depends on sub-system Beta for some data flows. Sub-system Beta uses the output from sub-system Gamma, which in turn gets populated from sub-system Omega. From this, in my mind, some of the sub-systems are tightly integrated whilst others are less so.
[James’ Reply: We’re going to need a definition of “tight” vs. “loose” integration. I would propose that loose integration means that such components could be replaced by functionally equivalent ones without destabilizing the rest of the system. Tight integration means it is prohibitively difficult to remove/replace components without destabilizing. An example of loose integration would be a printer driver. I can load or unload drivers on my Windows system without messing up Windows, so those drivers are loosely integrated.
In your example, the dependency tree is A -> B -> G -> O. So, Alpha does depend on Omega, albeit indirectly and therefore, I suppose, loosely.]
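James’ replaceability criterion can be sketched in code (all names hypothetical): the spooler below depends only on a driver interface, so functionally equivalent drivers can be swapped without destabilizing it – loose integration by this definition.

```python
class Driver:
    """The interface the rest of the system is written against."""
    def render(self, doc: str) -> bytes:
        raise NotImplementedError

class InkjetDriver(Driver):
    def render(self, doc: str) -> bytes:
        return doc.encode("utf-8")

class LaserDriver(Driver):
    def render(self, doc: str) -> bytes:
        return doc.upper().encode("utf-8")

class Spooler:
    """Loosely integrated with its driver: it never names a concrete one."""
    def __init__(self, driver: Driver):
        self.driver = driver

    def submit(self, doc: str) -> int:
        # Hands the document to whichever driver was plugged in.
        return len(self.driver.render(doc))

# Replacing one driver with another leaves the spooler's behaviour stable:
assert Spooler(InkjetDriver()).submit("hello") == Spooler(LaserDriver()).submit("hello")
```

Tight integration would show up here as the spooler reaching into one concrete driver’s internals, so that removing or replacing that driver breaks it.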
The systems we work with contain many of these sub-systems, which we then have to put together and integrate into a whole system. Some of the end-to-end flows – the ones that have value to the business – can be automated. The true hands-on integration testing in these products is to uncover unexpected behaviour between the sub-systems. It is not looking for one particular element, such as communication between sub-systems, but aims to understand the interactions within the system as a whole. This is what integration testing means to me.
[James’ Reply: But this is vague, Jon. You are not telling us why we should have any worry about things working together as a whole. Is there any reason they wouldn’t? Yes, you and I believe there are such reasons, but I’d like to hear them specifically.]
There is far more to it than this, such as uncovering tacit information held by all those involved in creating the product and its sub-systems: uncovering what has not been said or understood and using that information to drive the testing effort on the whole system; identifying the risks, the lack of understanding, and the missing knowledge or expertise. The actual execution of integration testing forms a small part of the whole integration testing exercise.
[James’ Reply: Now that is really interesting. Can you say something more specific about it? What would such a bug look like, and why wouldn’t it have been found in unit testing?]
John Stevenson says
[James: We’re going to need a definition of “tight” vs. “loose” integration.]
I agree with the need to tighten the definitions of tightly and loosely integrated.
[James: But this is vague, Jon. You are not telling us why we should have any worry about things working together as a whole. Is there any reason they wouldn’t? Yes, you and I believe there are such reasons, but I’d like to hear them specifically.]
I could provide a cop-out answer to the why: because we are fallible in our thoughts and in our implementation of designs.
[James’ Reply: Sure, but that’s why we test, and we already tested the units. What I was asking about is why we should think that unit testing is not going to do the job?]
There could be a variety of reasons why systems do not work when we integrate them: restrictions of time; laziness, in the sense of improper code design/reviews; lack of unit tests; miscommunication about what to implement and how. The very nature of developing a creative product leads to errors in implementation or the taking of shortcuts. I’m not sure this answers your question, but it is difficult to be less vague, in the sense that I have yet to see large-scale integrated systems function as expected.
[James’ Reply: Couldn’t it be argued that any of these things are handled by good unit testing and code review? Why the need for integration testing? Couldn’t it be argued that integration testing itself is lazy, since we could have found the bugs on a unit level?]
[James: Now that is really interesting. Can you say some more specific about it? What would such a bug look like and why wouldn’t it have been found in unit testing?]
An example of such a bug would be one where two products that need to integrate both use a third-party library which has a defined specification. The products are developed by different teams, and due to legal constraints such as NDAs, neither side can communicate with the other. They both implement the library as they understand its definition. The products are then passed to a team to ‘integrate both products in the system’. An unusual case emerges: due to the way the library has been implemented, all the security features in product A are bypassed by product B, presenting a case where customers could encounter fraud on their accounts. This would not have been picked up during unit testing, since each side assumed the other had authentication covered. There are other examples I have encountered, but due to the nature of the products I cannot report them in a public forum.
[James’ Reply: Great example! You nailed that one.]
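A toy sketch of that failure mode (all names hypothetical): each side passes its own unit tests under the assumption that the other side authenticates, and only the integrated path reveals that neither does.

```python
class ProductA:
    """Front end: forwards requests, assuming the backend authenticates."""
    def __init__(self, backend):
        self.backend = backend

    def handle(self, request):
        # No credential check here -- "the other side covers that".
        return self.backend.fetch(request["account"])

class ProductB:
    """Backend: serves data, assuming the caller already authenticated."""
    def fetch(self, account):
        # No credential check here either -- "the other side covers that".
        return {"account": account, "balance": 100}

# Unit tests of each product in isolation pass happily:
assert ProductB().fetch("alice")["balance"] == 100

# Only the integrated path shows an unauthenticated request reaching data:
system = ProductA(ProductB())
leaked = system.handle({"account": "alice"})  # no credentials supplied at all
assert leaked["balance"] == 100               # security silently bypassed
```

Neither component is wrong against its own understanding of the specification; the bug lives entirely in the composition.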
Liza Ivinskaia says
I do not have an exact definition of what integration testing is. But when I think about integration testing, I think the following:
According to the definition of integration: an integration is an act or instance of combining into an integral whole.
From my point of view, understanding what the “integral whole” means in a particular situation will lead me to an understanding of what integration testing is.
For example, if I have two components which are supposed to build a response together, then “the whole” in this case may mean “the response they build is correct according to the contract; they can handle unexpected input from each other”, etc.
Another example is when you have two “systems” (applications, services). If they are intended to work as a whole, then I look for what is “joining” them together – it can be a business process (e.g. invoicing), or a user scenario (I buy a new mobile phone on the internet and want to get it delivered to my post office).
Is Google integrated with Bing? I would say no, unless I use both of them in some of my business processes or user scenarios.
When it comes to “one system generates the XML the other system reads”, I prefer to think about this type of testing in terms of “interface testing”. It’s really important to have it in place, otherwise the integration may be hard to achieve. It can probably be considered a type of integration testing.
Crush how I think, but please do it softly 🙂
[James’ Reply: I don’t have a problem with what you say, except that I would like more details. You said something about handling unexpected input. Why couldn’t we simply test that on a unit level?]
Liza Ivinskaia says
OK, let me give an example to clarify the thought:
Say we develop a component A which generates a certain response, based on requests sent by the client application B, developed by another team.
Now, what will our unit tests look like, and can we predict all the unexpected inputs and surprises that this client B will give us? We can certainly predict obvious things at the interface level and build unit tests around them, like boundary values of parameters or missing values. We can probably also predict and build some armour around obvious non-functional errors, like timeouts. But we can’t predict everything, simply because application B is built by other people – we can never know for sure what they put into it or how it has been tested.
[James’ Reply: How does that matter, though? Isn’t it true that we can’t predict all the things the human users will do either? Aren’t we always in this situation with anything that we test?]
Of course we can try to handle all inputs in the way: “if I get this, I do that; if I get this, I do that; for everything else I throw an internal error”. The problem is that this “everything else” may contain something really important for the final user (facing client application B, in fact), and getting an “internal error” on that input may not be good enough for her.
[James’ Reply: I understand, but by your logic we can’t use testers to test, because only real users will provide realistic input.]
Liza Ivinskaia says
[ How does that matter, though? Isn’t it true that we can’t predict all the things the human users will do either? Aren’t we always in this situation with anything that we test?]
Yes, we are.
The question was “why can’t we simply test unexpected input at the unit level”, and my answer is: you can, but it won’t cover everything, and you will find more later on, which you will probably choose to fix:
At one company I worked at, we ran into a situation where the client sent the expected input but did it in an unexpected way: where we expected one check-login request, it sent 200. We discovered it only after we connected the client (developed by another team) to our service.
At another company we were responsible for sending files for printing at a print shop. We received a lot of invoices in an XML file, transformed it into a print file and sent it to the print shop. One day we got a call from the print shop saying that the whole print job had been stopped because it required a table of character codes they did not have. We were more than surprised, as we had not made any such changes on our side. After a lot of investigation we found a really special character in the XML file: one customer invoice out of 500,000 (I don’t remember exactly how many, but there were many) had it in the customer name. So we called the company that produced the XML file. The answer we got: “it’s impossible, that character is prohibited in the customer UI”. After some investigation on their side, it turned out that their customer service staff could enter data directly into the database, where no character validation existed.
[James’ Reply: Good examples!]
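The shape of that print-shop bug is easy to sketch (whitelist and function names hypothetical): the character validation lives only on the UI path, so a second write path bypasses it, and the bad record only surfaces in the downstream consumer.

```python
ALLOWED = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ ")

database = []

def save_via_ui(name: str) -> None:
    """The validated entry point: the character is 'prohibited to type' here."""
    if not set(name) <= ALLOWED:
        raise ValueError("prohibited character")
    database.append(name)

def save_via_support_tool(name: str) -> None:
    """Customer service writing directly to the database: no validation at all."""
    database.append(name)

def render_print_file():
    """Downstream consumer with a narrower character table than expected."""
    for name in database:
        if not set(name) <= ALLOWED:
            raise RuntimeError("unprintable character in %r" % name)
    return database

save_via_ui("Jane Doe")          # fine
save_via_support_tool("Ægir Ó")  # slips straight past the UI validation

try:
    render_print_file()
except RuntimeError as err:
    print("print job stopped:", err)  # the whole batch halts on one record
```

No unit test of the UI validator, the support tool, or the renderer alone is wrong; the failure emerges from the paths between them.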
[ I understand, but by your logic we can’t use testers to test, because only real users will provide realistic input.]
I did not say that. Of course we can! The more we know about the final users, the more we can help the developers handle different inputs and outputs. My experience says that developers (for reasons unknown to me) are more interested in the code than in what the final user does 🙂
Michael Bolton says
The answer we got: “it’s impossible, that character is prohibited in the customer UI”. After some investigation on their side, it turned out that their customer service staff could enter data directly into the database, where no character validation existed.
Funny you should mention it. Scroll upwards and look for the name “Mesut”, and observe the last character in the last name.
Anton Petrov says
Thank you James for feeding my brain. And of course thanks to commentators.
I think two components are integrated with each other if the functionality of one depends on the functionality of the other. In other words, if one component can’t fully or partially work without the other, then they are integrated.
[James’ Reply: That feels like the essence of integration to me, too: interdependence. Not just that two things “work together.” Working together could mean two things working independently while sitting next to each other on the same table.]
So before the integration testing we have an assumption that the other component works as we expect (or doesn’t work). Now it’s time to prove or refute it. Of course there could be different expectations, covering reliability, data formats, error handling, etc.
From my POV it’s quite obvious at this point that we can only test such assumptions by allowing the two components to interact with each other. Unit tests do not work here, since they rely on those same assumptions, which could be wrong.
[James’ Reply: I’m not sure what assumptions you are talking about. Can you give me an example?]
Maaret Pyhäjärvi says
For a while, I’ve thought “integration testing” mostly means that anything the dev in question did not create themselves is involved in the scenario. I’ve favoured naming the specific things we test over calling any of the testing we do “integration testing”. I find that there are a lot of different mechanisms to consider when testing integrations, and often the relevant aspects I see are the nature of the integration (intentional vs. accidental dependencies), the direction of information exchange, and the scheduling of information exchange (batch vs. online).
I find that the very specifics of what information is supposed to be transferred have a lot to do with how the testing gets done.
I too have a set of examples:
* when testing a mobile virus-protection product that hooked OS low-level calls to notice malware, anything that happened on the device outside the use of the visible parts of the software was an integration we’d test. There was an interesting mix of how different applications behave and how that behaviour could be intercepted. All of this was integration testing by nature, but we’d rather discuss the specifics of the risks
* when testing a web application, I needed to have discussions with developers about “integration testing” just about anything outside what they had created: the browser/OS/network (environment) was particularly challenging to get listed; people would generally prefer to focus on testing components by devs/teams 1 and 2 that they had identified as needing to work together
* when creating an insurance application, much of the communication between systems relied on batch runs, scheduled or triggered on demand. Yet many of the reports the system showed relied on the data being more up to date than it was.
So, defining integration testing: any testing that considers two systems with perceived boundaries working together over those boundaries. In an interconnected world, you’d limit this heavily based on the likelihood of things breaking.
On your example of Google–Bing integration through copy-paste, I would consider that integration testing worth spending time on if I could find out that some hidden data gets copy-pasted. Even then, I wouldn’t care to label it integration testing; it’s just a specific idea to consider on a list of various targets and scopes we could test.
I read your article and a few responses. Perhaps I should now go back and read more responses.
Hey, you talked her over, that’s clear, but with your last remark you turned the conversation in the wrong direction.
I think the integration testing you were discussing is verification that both sides understand the contract the same way, and that one side implements its functionality correctly for the same variety of input as the other application produces as output.
[James’ Reply: Yes, that is the trap I was making for her. My goal was to get her to realize that what she chose as an integration example was not very interesting. That’s the Socratic process: you work with examples that expose shallow thinking.]
With your last statement you replaced the dumb XML file, which cannot adjust itself on the way from one component’s output to another’s input, with smart you, who copy-pastes something and makes sure that the input to one search engine is correctly cut from the other search engine’s output.
[James’ Reply: Yes, that’s what I did. I wanted her to critique that as a way of getting to a deeper sense of what integration means.]
Dimitar Dimitrov says
My personal definition of integration testing is that your focus should be on inquiring into the interactions between components, especially exposing pathological edge cases where components behave sensibly in isolation but don’t work well together (the mountain bike with tricycle pedals analogy from the post above).
[James’ Reply: I don’t quite understand that analogy, because the tricycle pedals work for a tricycle and the mountain bike part works for a mountain bike. I agree they don’t work together, but I don’t agree that it is reasonable to say that they work “fine” in isolation. They don’t work fine if they were designed to work with the other mis-matched components. It would not be correct to say that the problems between them could not be detected until they were put together. In fact, we know very clearly before they were put together that they could not work together in the same bicycle.
The spirit of integration risk is that there is some special risk involved with putting things “together” that would not reasonably be analyzable when evaluating units that were designed to be integrated UNTIL after the integration event actually happened. I am asking: is there such a risk, OR can all the problems that will happen in integration be found PRIOR to integration?]
This definition is rather lax, as it does not determine what we call a component. This is on purpose – a component is a unit with a well-defined interface and behaviour.
[James’ Reply: Why do we need to say “well-defined”? What does that mean, anyway? I’m not sure. Couldn’t we have components that are not well defined?
I think I would define a component as “anything that comprises or may comprise a part of some system.”]
It could be as small as a function or as large as an appliance (our QA team has a whole zoo of terms depending on the size of the component, despite the fact that they never get close to the code). Often what is considered an “assembly” in one integration test may appear as a “component” in another. It all depends on which interfaces we take as a priori and which interactions we want to test.
In my personal experience (by the way, my title is “developer”), most of the time when I do integration testing I am looking to verify end-to-end business scenarios, measure non-functional parameters (e.g. how does response time change as a function of load?), or test with data derived from the boundary cases of the individual components.
[James’ Reply: But if your product consisted of one single component, you would still do that, and that would be ordinary testing. What makes integration testing special?]
Integration testing is important because systems are complex. Naive model checking falls apart quickly once we introduce mutable state and external inputs. There are techniques like TLA+, state-model checkers or constraint solvers that can model some parts of the business logic and help us prove that our algorithm is correct, but they don’t translate to production code, so the best we can do right now is to hook the parts together and see what they actually do.
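A concrete sketch of why composed state defeats unit-level reasoning (lock names and acquisition order hypothetical): each of the two components below always completes when run alone, yet a breadth-first search of the composed state space finds a deadlock that exists only in the combination.

```python
from collections import deque

# Each component acquires two shared locks in a fixed order, then releases.
ORDERS = {"A": ("l1", "l2"), "B": ("l2", "l1")}

def _pack(prog, comp, new_p, held):
    np = dict(prog)
    np[comp] = new_p
    return (np["A"], np["B"], frozenset(held.items()))

def successors(state):
    """Enabled transitions from (progress_A, progress_B, held_locks)."""
    prog = {"A": state[0], "B": state[1]}
    held = dict(state[2])
    out = []
    for comp, order in ORDERS.items():
        p = prog[comp]
        if p < len(order) and order[p] not in held:     # acquire next lock
            new_held = dict(held)
            new_held[order[p]] = comp
            out.append(_pack(prog, comp, p + 1, new_held))
        elif p == len(order):                           # finished: release all
            new_held = {l: o for l, o in held.items() if o != comp}
            out.append(_pack(prog, comp, p + 1, new_held))
    return out

def find_deadlock(start):
    """BFS over the composed state space for a stuck, unfinished state."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        nxt = successors(s)
        if not nxt and (s[0] < 3 or s[1] < 3):          # stuck before the end
            return s
        for n in nxt:
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return None

deadlock = find_deadlock((0, 0, frozenset()))
# A holds l1 and waits for l2; B holds l2 and waits for l1:
assert deadlock == (1, 1, frozenset({("l1", "A"), ("l2", "B")}))
```

Run either component alone and every lock it wants is always free, so no unit-level exploration of A or B ever visits this stuck state; it only appears in the product of the two state machines.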
Chris (kinofrost) says
I’ll have a go!
Integration testing is testing performed to mitigate the risk that conceptually separate parts of a system will not work when combined, either because of the nature of the dependencies between those parts, or because of new behaviours that emerge from their combination (where combination means that the parts share inputs, outputs or functions where before they did not) and that were not considered (or were impossible or difficult to test) when the parts were separate.
[James’ Reply: I like this. I like how you took it a level deeper.]
Lalitkumar Bhamare says
“So, if I test the two independent programs, haven’t I done all the testing that needs to be done? How is integration testing anything more or different or special?”
I would explain my idea around this question with the example of a coffee machine (which dispenses various types of coffee: espresso, milk coffee, plain milk, etc.).
So, the coffee machine as a whole is made up of some basic parts, like the one that stores coffee beans on top, the one that crushes them into fine powder, the heating part, the one that lifts milk from the jar, the one that ‘tells’ the machine whether espresso or milk coffee is to be dispensed, and so on.
Now, let’s assume that all these components have been tested independently and the results matched the purposes they were built for. Does that tell us whether they would produce the same results when made to work /together/ with the other components? Would the storage part be able to release the desired amount of beans for crushing when it is mounted on the machine? Would the heating part work as effectively in the machine body as it did when operated independently?
[James’ Reply: Why wouldn’t it? (I know why it might not, but I’m trying to get you to say it.)]
For me, the idea of integration is not just limited to some sort of /exchange/ between two entities or systems; it is also about how those entities perform the desired operation /together/. For example: milk coffee would be the output of the milk-lifting component working on the instruction of the ‘milk coffee’ button, together with the heating part and the bean-crushing part. The milk coffee dispensed in the cup would be the end result that tells us about the quality and functioning of the machine as a whole.
[James’ Reply: What does “together” mean? That’s a vague concept. If I hold hands with my wife, we are joined together, but so was my son joined to her before he was born. Obviously there is a qualitative difference between those two “togetherness” scenarios. So, the word “together” doesn’t say much.]
How does it differ from testing the components independently?
I think the parameters that /impact/ the functioning of components in independent versus integrated environments make the big difference. A heating part tested independently might tell us that it is able to heat a liquid to the desired temperature. What it may not tell us is whether it would heat the “coffee powder with milk while the mixture runs through the pipe” as effectively as it did when tested independently.
[James’ Reply: Why wouldn’t it? (I know why it might not, but I want to hear you say it. The point of this exercise is to get deeper.)]
In my opinion, the things that make integration testing something more, different or special are the control mechanisms, measuring parameters, impacting parameters and end results that might differ between testing independent components and testing the system as a whole.
[James’ Reply: Why would they differ?]
That was the conceptual part. How different would “integration testing” be in terms of performing the actual testing?
I think the complex nature of the system, the assumptions we make (maybe based on results from tests performed on independent components), and the analysis of the results/observations make it challenging. How do we know that what we think we see is what it really is? How do we know that what we think caused the problem really was caused by /that/ thing, and not by two or more things together? Would the results of unit tests suffice to make that decision? Maybe, or maybe not…
[James’ Reply: I want you to tell me why they might not be.]
I look forward to knowing your idea of it, James. On a side note, I am doing a study on the ancient schools of Hinduism, the Nyaya school being one of them. From what I have studied so far, their “theory of inference” is something that I feel can help testers think about problems like this. More… in my blog when I write it 🙂
[James’ Reply: It’s about time an Indian tester did that!]
Lalitkumar Bhamare says
[James’ Reply: What does “together” mean? That’s a vague concept. If I hold hands with my wife, we are joined together, but so was my son joined to her before he was born. Obviously there is a qualitative difference between those two “togetherness” scenarios. So, the word “together” doesn’t say much.]
Lalit – by “together” I meant the presence of two or more things in one place, or two or more things working toward a combined goal; it can also be a combined effort toward something without the things having to directly interact with each other as such.
[James’ Reply: This still does not say much. What does one place mean? What does one goal mean?]
I think the parameters that would /impact/ the functioning of components in independent and integrated environments would make a big difference. A heating part tested independently might tell us that it is able to heat the liquid to the desired temperature. What it may not tell us is whether it would heat the “coffee powder with milk while the mixture runs from the pipe” as effectively as it did when tested independently.
[James: Why wouldn’t it? (I know why it might not, but I want to hear you say it. The point of this exercise is to get deeper.)]
Lalit – It would not, because assuring the behaviour of one component standalone, without actually putting it in the ‘presence’ of, or in ‘conjunction’ or ‘connection’ with, the other component, is not possible. Until we actually do it, it would simply remain an assumption based on an experiment performed under different circumstances, in a different environment.
[James’ Reply: It is possible to simulate those conditions, isn’t it? And in software, can’t we build modules to have formally defined interfaces through which all interaction flows?]
An example – I am confident about the ideas that I teach in my testing class. Would I be able to teach with the same confidence when ‘you’ are present in class? Maybe not; I would be extra careful. What kind of impact can someone else’s presence in my class have on me? What would be the end result? I can only guess or assume, but what would actually happen can only be evaluated when such a ‘set-up’ is actually made.
[James’ Reply: Isn’t that partly because you are not a well-defined mechanism, as software is?]
Just like we humans, who have emotions and are likely to behave differently in different “set-ups”, software components too, by virtue of their properties that might get impacted, influenced, or altered in different “set-ups”, may not behave the way they would independently.
Going as deep as I possibly could, I can only say that it’s an impact or influence caused by the presence/existence/appearance of other component(s) that might or might not create the desired results. I am not sure if I am going deeper in the correct direction, but I would like to dig into this further if you help me understand what you really mean by “going deeper” here. Deeper in what aspect of integration testing?
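As an editorial aside on James’ question above about formally defined interfaces: in software we can route all interaction through an explicit interface and substitute a simulation for the real component. A minimal sketch, borrowing the coffee-machine example from this thread (all names here are hypothetical, not from any real system):

```python
# Hypothetical sketch: a formally defined interface lets us simulate
# the "presence" of another component without integrating the real one.
from abc import ABC, abstractmethod

class Heater(ABC):
    """All interaction with the heating part flows through this interface."""
    @abstractmethod
    def heat(self, liquid: str, target_temp_c: float) -> float:
        """Return the temperature actually reached, in Celsius."""

class SimulatedHeater(Heater):
    """A stand-in that simulates the heating part for isolated testing."""
    def heat(self, liquid: str, target_temp_c: float) -> float:
        # The simulation optimistically hits the target; a real heater
        # working on a flowing milk/coffee mixture might not.
        return target_temp_c

def brew(heater: Heater, target_temp_c: float = 92.0) -> str:
    reached = heater.heat("milk coffee mixture", target_temp_c)
    return "ok" if abs(reached - target_temp_c) < 2.0 else "too cold"

print(brew(SimulatedHeater()))  # → ok
```

The sketch illustrates why both positions in the exchange can hold: the simulation lets us test in isolation, but it only embodies our assumptions about how the real heater behaves.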
Connor Roberts says
1) Initially I had a problem with your statement that “one-way integration is a form of weak testing” but you’re right, as the limitation of only being “able” to do one-way integration testing much of the time does not negate the statement.
[James’ Reply: I did not make such a statement. I said that one-way communication was a weak form of integration. I was not speaking about testing at all.]
To elaborate, many times as Testers our relationship to third-party providers is obfuscated by layers of bureaucracy. All the good intentions and action on our part cannot always force a two-way integration test, thus one-way testing can be responsible testing (Note: I do feel there’s a difference between “responsible testing” and “sufficient testing”).
[James’ Reply: I don’t know what one-way testing is (nor two-way testing, either). Can you elaborate on that?]
We as Testers should be doing what we can within our power to make that communication happen so that we can do better testing work. Ideally our team would be in constant contact with Product Management (PM) in such a way that PM is ever-aware of our development and testing needs; however, at the end of the day, if we cannot get the information we feel that we need (not need, but “feel that we need”) then all we can do is inform PM about those risk gaps as part of our testing story, and move on.
[James’ Reply: Are you referring to one-way communication between people?]
We explore for value to the customer, within the confines of our reality, and then report on that. If we don’t like those “confines” then we have the choice to jump ship and take our craft elsewhere. Sure, in an ideal world our Testers who work on Product A (a web-app that absorbs a data feed from a 3rd party) can speak directly with the 3rd party’s Testers/SMEs to do more thorough integration testing, but many times the most important job of testing is understanding your environment enough to know that you may not need to do that, trusting PM enough to realize you can’t see the bigger picture, and most importantly, knowing when to stop.
2) To answer this question (seems obvious, but I’ll bite): “How is integration testing anything more or different or special?”, I liken this to the act of conversation between humans. Two humans, each in their own vacuum, simply strengthen their existing biases and misunderstandings. However, when a discussion occurs between the two, this creates a 1+1=3 effect. Person 1 and Person 2 are now engaging in a way that gives rise to previously unanticipated ideas and possibilities.
[James’ Reply: Can you provide an example of that?]
Integration testing does the same: it exposes the unknowns so that we can better close the risk gap. Baking soda and vinegar – if I test them individually in their own flasks, I learn a little bit about each, and consequently submit a poor testing story, while it will be the client that is in for a surprise when the two ship together in Prod.
[James’ Reply: But isn’t it possible to determine the chemical properties of vinegar and baking soda without having to put them together?]
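A software analogue of the flask example may help here (a hypothetical editorial sketch, not from the conversation): two units that each pass their own checks, yet misbehave when combined. Note that, in line with James’ reply, the mismatch is in principle discoverable by reading both units’ specifications, without ever running them together.

```python
def sensor_reading() -> float:
    """Unit A: reports body temperature in degrees Fahrenheit."""
    return 98.6

def is_fever(temp_celsius: float) -> bool:
    """Unit B: expects degrees Celsius."""
    return temp_celsius > 38.0

# Each unit passes its own checks in isolation...
assert sensor_reading() == 98.6
assert is_fever(39.0) is True

# ...but the integrated path misfires: a healthy Fahrenheit reading,
# fed into a Celsius check, is reported as a fever.
print(is_fever(sensor_reading()))  # → True, though 98.6 °F is normal
```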
Connor Roberts says
[James: I don’t know what one-way testing is (nor two-way testing, either). Can you elaborate on that?]
There’s probably a better term for it, as I was trying to use your same terminology inappropriately.
[James’ Reply: I was referring to one-way communication, which means A talks to B but B cannot talk to A.]
‘Isolated Testing’ perhaps, where you can only test to the extent that you are able, as many times we do not have access to the third party’s inner workings, only the output file that we are told we need to digest. Much of testing in this way is a bit one-way/isolated. We must be content to only test our behavior with their data, as we cannot manipulate the other end to produce differing data sets. Sure, we can set up emulators, but at the end of the day they are just that, not the real thing. We have to be OK with that.
[James’ Reply: Is that black box testing?]
[James: Can you provide an example of that?]
The example was in the vinegar/baking soda. The reaction of the two gives rise to previously unimagined outcomes. As for testing two pieces of hardware/software together: they will interact in ways that are beyond our prediction. This is why a lot of teams/Testers claim they “ran out of time to test”, but that’s just an immature way of not being conscious about telling the testing story.
[James: But isn’t it possible to determine the chemical properties of vinegar and baking soda without having to put them together?]
No amount of explicit knowledge can give rise to tacit knowledge, nor give me a visceral understanding of what it is like to witness/observe the reaction of the two elements interacting together.
[James’ Reply: This isn’t about tacit or explicit knowledge. This is about knowledge. How can you say that we can’t know how two units that were designed to work together will work when together? What justifies you in saying that?]
With my current understanding, I would even be willing to say that integration testing is more thorough testing than testing the product in isolation. I find that I learn a lot more through rapid experimentation than I do from observation of the separate parts. This same reasoning drives my wife crazy when we get a new product, and I immediately begin galumphing. It’s my nature to ignore the instructions until I run into a problem. With software testing I do the same (to an extent, stakeholder concerns override that of course) until I run into a roadblock and we begin discussion, or my self-imposed time-box runs out, whichever comes first.
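The ‘isolated testing’ idea from this exchange can be sketched in code (hypothetical names, assuming a JSON feed purely for illustration): we cannot make the third party emit varied data on demand, so we fabricate the data sets ourselves and test only our side’s behavior.

```python
# Hypothetical sketch of isolated testing against a third-party feed.
import json

def ingest(feed_text: str) -> list:
    """Our side of the integration: digest the provider's output file,
    keeping only records that carry an id."""
    records = json.loads(feed_text)
    return [r for r in records if "id" in r]

# Fabricated feeds standing in for data sets we cannot force the real
# provider to produce:
normal_feed = json.dumps([{"id": 1, "price": 9.99}])
odd_feed = json.dumps([{"price": 9.99}, {"id": 2}])  # one record lacks its id

print(len(ingest(normal_feed)))  # → 1
print(len(ingest(odd_feed)))     # → 1, the malformed record is dropped
```

As the commenters note, such emulation tests our behavior with their data, not the real two-way interaction.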
Connor Roberts says
Definitions (context) I am assuming for the purpose of this discussion:
1. Interoperability Testing = testing two independent products that ‘can’ work alone, but must work together for some shared purpose.
2. Integration Testing = testing two features/products/systems that may be independent in form and design, but are symbiotic in function. They must rely on each other in order to operate, according to product claims.
[James’ Reply: I don’t see a good reason, yet, to distinguish sharply between interoperability testing and integration testing. I think we should call both things integration testing. It seems to me the methods in both cases are identical and the situations are nearly identical. Also, if we look closely at what it means for something to rely on something else, lots of things you might consider as interoperating from one perspective are integrated from another perspective. And if the only difference is perspective, perhaps there is no real difference. Example: Is a service that happily sits and waits to be used a separate device that I am interoperating with? Or is it a system I am integrating with? I don’t want to have arguments about that unless it somehow helps me test better.]
[James: This isn’t about tacit or explicit knowledge. This is about knowledge. How can you say that we can’t know how two units that were designed to work together will work when together? What justifies you in saying that?]
So, we may be talking about interoperability not integration testing now, but to answer your question – My experience justifies that.
[James’ Reply: In this case invoking the concept of experience doesn’t help. We are trying to go deeper, here, not merely to take a stand on faith. You are essentially asking me to trust you. But trust is not the issue. I want to know WHAT and WHY and HOW.]
Many times we “design” two products to work together, but then we see them interact for the first time, during initial shallow testing, and we discover all kinds of risks and unknowns that we did not anticipate. I do not believe that any amount of knowledge about either individual product/feature/system could have given us that insight. Only when we see them operate together are we better scientists, and thus better informers for our stakeholders.
[James’ Reply: I understand that you believe that and you believe that experience teaches that. Just remember, experience teaches many things that are not true, such as that man cannot fly. But of course that was dis-proven by people who created their own experiences.]
Could I even go so far as to say that this overrides the need to even use context as a determining factor when deciding to do this type of testing or not? If you are testing two items that MUST interact (based on client wishes) then you MUST observe their interaction in order to satisfy the burden of testing. Yes? No?
[James’ Reply: Testing is something we do in response to the perception of risk. In many cases where we integrate things, such as when you install a plug-in into Firefox, you probably won’t subject it to a formal testing process, will you? Does McDonald’s test each sandwich they put together before they sell it?]
We can only learn and expose a limited amount by testing each item in isolation and we reach the need to test the two together, given that is what the client wants. Short of the client explicitly saying “I’ve heard you say you need to test those together, but do not test them together”, then the onus of integration testing is always on us by default for development that involves more than one independent piece.
[James’ Reply: I want to go beyond protestations of faith. At least let’s speak of faith in deeper things before we give up.]
Connor Roberts says
[James: I don’t see a good reason, yet, to distinguish sharply between interoperability testing and integration testing…I don’t want to have arguments about that unless it somehow helps me test better.]
It’s important to delineate for the purpose of creating a common language. They are very different, just like testing and checking are different, thus I refuse to call them the same thing.
[James’ Reply: Sorry, Connor, but this makes no sense. Your first sentence implicitly claims that we have a goal of constructing a common language and that making a distinction between integration and interoperability serves that goal. The first claim is news to me. I am not trying to construct a common language, but rather a useful and practical language. The second claim is a non sequitur. We could as easily say that NOT distinguishing them is a better way to achieve a common language.
Of course you don’t have to speak the way I speak. But I don’t yet see a good reason to complicate my language. Simplicity is important.
Testing and checking are actually different, and in a profound way. Confusing testing with checking is not a pedantic triviality, but represents a fundamental failure of responsible craftsmanship.]
In my opinion – Integration implies reliance on something else to function, whereas interoperability does not (i.e. the separate entities can operate on their own, but also together).
[James’ Reply: I am suggesting that is an ambiguous standard and an unnecessary distinction. I’m looking for you to make a counter-argument that is something more than an assertion of how you feel.]
This would apply to testing two or more items, where “items” = feature, service, systems, products, etc; it does not matter what category the item under test falls into, the definitions stay the same. If I say to a tester or developer, “Have we done interoperability testing yet?”, I do not want to mislead or imply that the two items under test are symbiotic in nature, when they are not.
[James’ Reply: Why does that matter? So what if they are symbiotic, and so what if they aren’t? Maybe they are symbiotic in some ways and not in others. Maybe two people will disagree on whether they depend on each other. It seems to me the important thing is to investigate the nature of whatever dependencies there are.]
Just like when someone says to you, “I kicked off the automated test suite, then left for the day. It will be finished in the morning.” You (and now the community, thanks to a lot of work you and others did) would say, “You mean checks, not tests, right, since they are unattended?” I feel this falls into the same boat.
[James’ Reply: The reason that is the case is because when people use the word test to mean something strictly non-human, they cheapen the whole concept of testing and put the world of testing and the enterprise of skilled testers at risk. This is not true with your suggested distinction. How is your distinction serving a practical purpose?]
I am open to hearing why you think the delineation is unnecessary, but as of now I am a bit confused, as the benefits of separating the terms seem obvious to me.
[James’ Reply: In many years of working with language, I have found that every distinction we want to make carries a cost. The higher the cost, the more likely language will be misused. This is part of the dynamic behind Zipf’s law. And on the value side, you have not yet mentioned ANY problem that would occur if we called the two things you want to distinguish by the same name. Can you actually cite one?]
[James: …Just remember, experience teaches many things that are not true, such as that man cannot fly. But of course that was dis-proven by people who created their own experiences.]
So perhaps I should have elaborated that there’s more here than experience. Is it not common sense to test how Product A works with B when the customer has specifically requested “I need these to work together”?
[James’ Reply: We don’t need to invoke common sense. Testing is about exploring risk. If a customer needs things to work together, and you perceive the potential for risk with that, then test it. If that testing is focused ON the risk due to interactions/dependencies between the sub-systems as such then I would call that integration testing. Otherwise I would just call it system testing, which already implies integration testing to some degree.]
I refuse to only do intentionally shallow testing when I know the client wants more. Testing Product A and B in isolation from one another is intentionally shallow and increases the risk gap unnecessarily.
[James’ Reply: I don’t see how me disputing your use of language is in any way suggesting that we do shallow testing! Geez, Connor, what are you thinking?
I think we are talking about integration testing. I can do that as deeply as you like.]
[James:…How can you say that we can’t know how two units that were designed to work together will work when together?…]
We can know ‘a little’, I never said we can know nothing, but you are implying that items (systems, products, features, services, etc.) will inherently work together simply because they were designed together. That is a lot of faith in the flawed human who is involved in every step of the process. When you say that, this is what I actually hear…
[James’ Reply: I’m not implying that. I’m just wondering if you have any reasons for your belief. I can give you specific reasons why integration testing is interesting or necessary, but so far I haven’t heard that from you.]
“Flawed humans made flawed assumptions about how some flawed requirements should be interpreted. Product Management then came up with flawed acceptance criteria and had flawed developers create a flawed product, on which we did flawed testing…”
Let’s be realistic here. I have to put some faith in Product Management to properly interpret client conversations and business risks that are outside of my purview, but as a tester I am required to NOT put a great amount of faith in the resulting design and intent leading up to a release. It’s my job to be a professional skeptic throughout the entire process. It is this skepticism that fuels my internal requirement for interoperability and integration testing (the latter if the items under test are symbiotic in nature). The only time I will not do this type of testing is if I am explicitly told by Product Management not to do so, and they are aware of that risk gap.
[James’ Reply: It sounds like you think it is a best practice. I don’t do best practices. Context-Driven people solve problems. If I perceive the potential for integration-related risk then I will do that sort of testing. Otherwise, I will not directly pursue it except as a hedging strategy.]
[James:…Does McDonalds test each sandwich they put together before they sell it?]
Test, no, but Check, yes. I’d be willing to say that they even do “interoperability checking” as they build it in the assembly line. If anything goes wrong (ripped bun, too much ketchup, etc) then they deem it unfit for the customer and they remake it. I worked at McDonald’s when I was a teenager, and by noon there’d be about 10 breakfast sandwich rejects stacked up in the employee break room that were deemed “unfit” for customers. We sure enjoyed them for lunch though, as they’d just go in the trash otherwise.
[James’ Reply: Good checking always implies testing. If you say there is no testing, you are telling me that the checking is perfunctory and false. I suspect you mean to say that there is shallow testing… but that supports my point. My point is that you don’t ALWAYS perform integration testing, and you agree that there is a whole lot of integration testing that McDonald’s does not do.]
When it comes to testing products, software, etc., humans have a threshold of complexity past which we cannot simply “trust” that items will work well together. A McDonald’s burger is barely bigger than a Hello World app. The software we test is vastly complex, and once it reaches a certain level of complexity, our hand is forced as testers to make our process more complex and diverse. That is, if we want to do good work and responsible testing.
(I am reminded of Ashby’s Law when I think about the need for integration and interoperability testing).
Connor Roberts says
We may not be communicating, as I am baffled by some of the replies. A bit bewildered, but will forge on as best as I can over text.
[James:…I am not trying to construct a common language…]
If a developer says to another developer “I am compiling”, we all know what that means. It has a specific meaning that is not very open to interpretation, so the possibility of miscommunication is very low. Testing does not have that as strongly as I believe the development world does… (continued below)
[James’ Reply: Advocating for common language is absurd. If you wanted that, all you would have to do is standardize on whatever my language is. So, why don’t you do that, then? It must be because common language is not your top priority.
What people are really saying when they crow for a common language is that they want everyone to speak their language. Good luck with that.
Instead, I work on making my language the best it can reasonably be, and that’s why many people who communicate with me adopt my language. My vocabulary is better. It is clean, simple, and well-thought-out. I’m trying to help you think your language through, and if you made a good argument I would consider changing the way I speak, but so far you seem comfortable making appeals to your own intuition.
Remember commonality carries a risk. When creating a common standard of any kind, in any field, you necessarily turn away from innovation, variety, exploration, etc. But those things are good! We should desire variety (that is where Ashby’s Law figures in). Meanwhile, commonality of language, where truly needed on projects, is not necessarily difficult to negotiate when the need arises. It results in what linguists call a “pidgin.”]
[James:…I’m just wondering if you have any reasons for your belief…so far I haven’t heard that from you.]
If I tell a Product Owner that we’re going to do integration testing between Feature A and B before we release, they may think that A and B are reliant on each other, and thus convey to the client that both must be released together, as that word integration carries a sense of implied reliance on the other (e.g. Cyborg, cannot live with only biological or only mechanical aspects, must have both).
[James’ Reply: You are saying, based on your idiosyncratic definition of integration testing, that a program manager could falsely believe that two products must be shipped together? Connor, that is ridiculous. I’ve been in the industry a very long time. I can tell you this has never happened to me. Nor have I heard of it happening. I strongly suspect it has never happened to anyone.
Integration testing is about testing things that have been put together, somehow. The concept, although interesting and deep in places, is not inherently confusing, and certainly doesn’t have ANYTHING to do with decisions about packaging the product.]
However, it turns out that I really meant interoperability testing, and I come back a week later to tell the PO, “OK, we need a bit more time to test Feature B, but the team recommends we release A now”. The PO is confused, and I am responsible for that misleading due to the fact that I said “integration” originally instead of “interoperability”.
[James’ Reply: That is a bizarre example. A better strategy is to be clean in your language for your OWN benefit, not because you think other people are carefully listening. I promise you– they aren’t!]
I believe it is part of my responsibility as a Tester to be very clean in my words. I think you do too, but perhaps this is less important to you.
[James’ Reply: Not only do I think it’s a priority, but I am actually complaining that your language is not clean enough. Your usage has unnecessary complexity. I’m asking you to clean it up. It is your responsibility. But doing so is not primarily for other people. It’s primarily to provide a framework for your own thinking, and an ability to productively defend your point.
So far, your defense has been to offer an absurd scene of a program manager who got confused because first you convinced him that integration testing automatically implies a “high” level of dependency as opposed to interoperability testing which implies a “low” level of dependency– which is an unnecessary distinction in the first place– and then you concoct a misuse of those terms– which wouldn’t happen if you didn’t make that distinction in the first place– and then you claim that it will lead to poor management decisions– as if management makes decisions about packaging products based on the name of the test activity we use!]
OK, that is fine. I am not asking you to make it important to you. I am defining why I feel the need to delineate. It just ties into my need to not mislead others.
[James’ Reply: I also have a need not to mislead others. That’s why I consider false specificity to be a sin.]
Perhaps you can use these two specific words interchangeably in a conversation without misleading people. If so, props. But people used testing/checking interchangeably 10 years ago too, and nobody cared then either.
[James’ Reply: I don’t use the words interchangeably. I told you what I do: I call them both integration testing. I don’t use the term interoperability testing at all unless I am working with someone who is using that term and I feel like humoring them. Frankly, I don’t even use the term integration testing much, since I consider it a subset of risk-based test design.
As for testing/checking, maybe you still don’t know what the difference is, but to say that nobody cared is like saying nobody cared about the difference between viruses and bacteria before there was a name for it. They would have cared if they had the knowledge of why it mattered.]
[James: It sounds like you think it is a best practice. I don’t do best practices….]
No, there is one best practice, but it’s outside of anything man-made. In testing, I do not hold to best practices. However, I find there are some common-sense guidelines that drive my thinking (heuristics that generally work most of the time, though there are rare cases where I must shunt them).
[James’ Reply: Invoking common sense never wins an argument. By definition common sense is what we both have, but I am disputing with you about this, so either it’s not common sense or one of us is a damaged mind.
Instead you should talk about specifics. That would have the possibility of communicating.]
If a situation calls for testing X, I will use the tools that apply to X. If a situation calls for testing Y, I will use tools that help me better understand, exploit, investigate and ultimately report on Y. I am simply stating that I find integration and interoperability testing more often necessary than not, across both X and Y.
[James’ Reply: That’s not a helpful statement. I am able to speak more specifically, and I think you should aspire to that as well. Anyway, if you want me to listen to you, you will have to do so.]
Not just in my experience alone, but most of the testers I have spoken with believe integration testing is one notch below a “must” when testing two items that claim to work together. Many times it’s an unspoken assumption that we’re going to do that kind of testing. We just sit down, without even using the term, and focus on creating an integration testing strategy.
[James’ Reply: I don’t believe you can know that. I think you are probably imagining that you all agree. Speaking as someone whose day job depends on analyzing and improving how people understand things together in the testing space, I can tell you that it’s really not so easy to know what people think.]
At risk of introducing superfluous analogies… it’s like determining if I should take my canteen/water-bottle with me on a day hike in the desert. Ask around long enough and you will probably find one person who would recommend not taking it, but I’ll trust the wisdom of the crowd in this instance. Again, not a “best practice” per se, but you’d be hard pressed to find situations where it’s not necessary. I know you call these guideline heuristics, so we might be in violent agreement here.
[James’ Reply: That’s not how you decide to take a canteen. But if it is, you are completely unqualified to hike in the desert by yourself.
I remember a list of supplies needed for having a birth at home. Two things on the list I didn’t understand: bendy straws and a dozen baby blankets. I got them but I didn’t think I would need them. Having gone through the experience, I would say I needed ONE bendy straw and FOUR dozen baby blankets. But more to the point, I now know why they are needed and how the situation may change in ways that affect that need. I am more qualified now than I once was.
Heuristics are not blunt instruments. You speak of them as if it is a matter of probabilities and fitting in the crowd. That is a terrible way to think. But if you are talking to me now because you want to fit in with my “crowd,” then I suggest you will have to change the way you speak about heuristics.]
[James: I don’t see how me disputing your use of language is in any way suggesting that we do shallow testing! Geez, Connor, what are you thinking?…]
Earlier, when I gave the example of the baking soda and the vinegar, you seemed to keep implying that we can “know” how they will interact by studying the chemical properties of each of them separately. I was disputing your use of the word “know”, as I feel that, while perhaps you are technically correct, it’d be a shallow understanding until we actually combined the two and learned from their interaction/reaction.
[James’ Reply: Again with your expressions of faith. Stop that and tell me WHY. Tell me HOW. Don’t just give a summary phrase and stop. THAT is shallow!
I am not advocating the avoidance of experimentation. I am trying to help you be a deeper thinker. You have made a claim and I would like to see if you understand the power of the claim you have made. I suspect you may not understand it, apart from a child-like devotion to a practice.
Other people in the comments have stepped up to this challenge. You can, too.]
Just like mold and food. We did not learn all about food and then know it would mold simply by studying its ‘normal/resting’ state; we had to observe that happen. Ancient man probably put some apples under a bush, or buried some in the dirt, then got an unpleasant surprise when he dug them up 6 months later to eat. We learned from that initial observation, just as we do with integration testing, and cannot confidently say that all risks are mitigated between Product A and B without having done integration/interoperability testing. Surely we’re not in disagreement on that.
[James’ Reply: Testing is not a risk mitigation process, it is a risk identification and analysis process. And testing has a cost. There is an opportunity cost, as well, to anything you do. If you feel– and you can defend the idea– that there is sufficient integration risk, then you will probably do integration testing. You can also learn about that risk without explicitly planning or performing any integration testing.
I can explain that thought process. What I keep asking you to do is explain that thought process to me. I am challenging you to do that on a level deeper than you have so far.]
Connor Roberts says
[James:…What people are really saying when they crow for a common language is that they want everyone to speak their language.]
I am completely intellectually humble on this topic. I have changed many of my definitions and how I think about words as I speak with others and learn more. I do not care if it is my definition or someone else’s, I simply want it to be something we all agree on.
[James’ Reply: You have completely ignored my point, so I will state it again. If it is really what you “simply want” that we agree on our language, it is in your power to have that right now: just agree with me. But I don’t believe you. I think you want agreement but you ALSO want quality, and that quality is MORE important to you.
Quality of language is more important to me, too, which is why I’m not going to focus on agreement. I can’t control what you agree about. I will focus on quality, and you will either agree on that or you won’t. Naturally, if you convince me to use a word in a different way, that is also okay.]
I feel right now I am learning more about integration and interoperability. I mean, there’s a reason we have two different words, so I am confused why anyone would push back on me trying to study their differences and dive deeper to better my understanding.
[James’ Reply: There is a reason? I don’t care. What I would care about is if there is a good reason. You haven’t stated one, yet, in my view.]
I am not requiring you to do so with me, as you seem to find the delineation less important. I am currently interested in them, and I am fine if that’s not OK with you – this is for my own learning right now, so that I can speak to them better in the future. If someone comes along tomorrow and gives me a whole new set of compelling criteria to redefine those words, then I will take that into account and pivot accordingly.
[James’ Reply: So, you are saying agreement is not very important to you, then, which is what I thought. If you DID want to achieve agreement, you would have to explain the delineation better.]
[James:…Meanwhile, commonality of language, where truly needed on projects, is not necessarily difficult to negotiate when the need arises. It results in what linguists call a “pidgin.”]
So you’re simply saying these two terms fall into that category, and are not as important as terms like testing/checking, since there are no moral implications there about humans being involved or not involved. OK, I can agree with that. That doesn’t mean I won’t continue to study them to benefit how I speak (which affects me and others).
[James’ Reply: Do as you wish. My reply to you was predicated on the belief that you were trying to explain and defend the way you use words. But it’s okay with me if you want to distinguish between interoperability and integration– unless we have to work together on something.]
[James:…A better strategy is to be clean in your language for your OWN benefit, not because you think other people are carefully listening. I promise you– they aren’t!…]
Everything I do must be for the service of others. People do listen. If they are not listening, then you’re doing something wrong and need to modify your conversational skills.
[James’ Reply: Sorry, no, that’s not how it works. You need to study how communication works and doesn’t work. Please see Proofs and Refutations, by Lakatos, for an interesting analysis of exactly why even mathematicians talking about mathematics to other mathematicians have trouble communicating.
Of course, your communication skill makes a difference, but much more important are the paradigmatic commitments by which people interpret your words. You have no control over that.]
Sidenote: This is probably where you and I diverge. For example, I’d never say to anyone, “What are you thinking?” because that’s a closed question that tends to trigger a mental U-turn in the conversation; people regress, and you never get through to them unless they are in that small subset of people who can stay Vulcan/objective regardless of what gets thrown at them. I am in that crowd, so I can take it, but I have found that it is still possible to avoid shallow agreements and dive deep without all the shock statements, as those come across as disrespectful to most humans. I only say this because I know you want others to challenge you also.
[James’ Reply: What you are calling “shock statements” are important. They matter for the same reason a fire alarm matters. I say “what are you thinking?” to inform you that my interpretation of what you said has indicated a serious problem that will prevent me from taking you seriously.
Since you are not a child and I don’t live with you, the top priority for me is not getting along with you or protecting the stability of your emotional state. Honesty and openness is more important. I feel it would be dishonest to disrespect you privately. I need to show you that openly. Then you can change or not change, but you will have all you need from me to do what is right for you.]
Back on point, this method of putting others first even extends into considering how THEY may be affected by my speech. Doing this does not put the honing of my own testing craft at risk. I can still become a better tester/mentor and have this service mindset simultaneously. Testing is service to others, which is what interested me in going into the field to begin with; it is ultimately for the benefit of others, not myself. Any benefits that happen to flow back to me (learning about testing, better product knowledge, increased craft skill, etc.) are simply a side effect of that service to others. I of course supplement this with external learning resources and involvement in the testing community, but the fact that I get paid to serve is outstanding.
[James’ Reply: I agree that testing is service. But I can’t fulfill that service without taking care of my own competence and clarity. That is the beginning of true service.]
[James:…You have made a claim and I would like to see if you understand the power of the claim you have made.]
The claim I made, in a nutshell is this (to make sure we’re on the same page): When a client says, “I need Product A to work with Product B”, then integration testing seems intuitively necessary probably the majority of the time. That is of course a heuristic, as there are cases where you may be able to ensure testing is sufficient without integration testing (a form of risk testing according to you, fine, we can call it that, I am trying to be more specific here). Thus, NOT a best practice, but a strongly suggested guideline, given the client request.
[James’ Reply: You say it’s a heuristic, but you don’t seem to be able or maybe willing to explain it. If you can’t explain a heuristic, it’s no better than a best practice. It becomes an article of faith.
The use of heuristics carries an ethical burden. It’s not just a word.]
[James:… I suspect you may not understand it, apart from a child-like devotion to a practice. Other people in the comments have stepped up to this challenge. You can, too.]
While the claim above seems obvious, and self-sustaining to me, I will happily satisfy your need for a more granular example (that’s my best guess at what you’re wanting). I’d rather spend time discussing testing craft/theory but part of my service to others is meeting them in the conversation on their terms, so here you go…
[James’ Reply: If you want to meet me on my terms, then you will have to understand those terms. I was not asking for an example necessarily, but for an argument– for reasoning– that goes beyond a protestation of faith.]
Among other products, my company makes websites for dealerships. We have multiple release trains, all separated by function, one of which is the Inventory release train. The Inventory RT is composed of multiple scrum teams, each of which focuses on a different set of features within Inventory. Absorption of new 3rd-party data feeds is one of those duties. So, our websites pull data from other providers that our product then needs to absorb and display correctly. That data is just raw text in many cases, or a DB we tie into, so the team has to take data from that 3rd party, parse/massage it, then display it in various places on multiple pages in our website solution. Now, when adding a new provider, there has yet to be a case where we did not do what I’d consider integration/interoperability testing (both products need to work together for a new common purpose that has been requested by the client). We had to test that our product could take their data and use it according to the perceived wishes of the client. Sometimes we even had to call the 3rd-party provider and have them tweak the data feed to get the right information. I would consider verifying that the massage process works to be integration testing. Once the data is on the site, perhaps that falls into a different kind of testing, as it is an abstraction of the 3rd party’s original data?
[James’ Reply: In this example, you have said you do integration testing. You have not said why. I’m looking for something that argues against doing exclusively unit testing.]
So, am I open to the reality that there may be cases where a client says “I need A and B to work together” but we end up not needing to do integration testing? Sure, because I believe my claim is a heuristic, a rule of thumb, that does not ALWAYS apply. Even though so far I have not seen such a case in our product, I am aware enough to realize that there’s a much bigger picture, and since I do not adhere to best practices, I am open to the chance of there being cases where it is not needed.
I hope that clears it up a bit and provides the support you were wanting. I could list a dozen other times just in the last year that we integrated with third parties, but they’d be very similar to this experience.
[James’ Reply: Can’t you see that you have provided no reasoning or content? Let me summarize what you said:
“We developed a system and decided to do something we called integration testing. I decline to describe what that is or how it’s different from unit testing; and I won’t always do that kind of testing, whatever it is, but I decline to describe the circumstances in which I wouldn’t.”
Here is an example sentence you used: “We had to test that our product could take their data and use it according to the perceived wishes from the client.” You had to? Why? Where did that risk come from? And why does unit testing not already take care of that risk?
I am looking for depth, not just long winded reiterations of a summary opinion.]
Connor Roberts says
I’ll ignore the rest and focus on this now, as I think it’s the heart of this conversation…
[James Reply:…I am looking for depth, not just long winded reiterations of a summary opinion.]
OK. I believe a unit test stops being a unit test when it has any external dependencies outside of the function it is testing. By nature, an integration test is much more complex and must rely on external dependencies (DB setups, configs, etc.), with the goal being to exercise the product in its normal environment, or as close to it as possible, to make the test more valuable.
[James’ Reply: Depending on what you mean by “product.” It sounds like you are describing a system test, or a field test. A field test is about testing the whole product in a realistic environment. A system test is what we call testing a complete product (whether or not it’s in a realistic environment). An integration test means testing something that is integrated in some way, which could be a sub-system that is being operated in a mocked up and unrealistic environment. A system test and a field test are also integration tests, but not all integration testing is system/field testing.
If by product you mean a particular unit within a bigger system, then an integration test is an attempt at a more realistic environment for that unit. But that is not necessarily more complex. An integration test may be simpler, because when units are put together they may not expose much of themselves to each other (e.g. a unit that can operate in many states or perform many services may be asked to perform only one of them for that system.)]
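[Editor’s note: the unit/integration distinction being debated here might be sketched concretely as follows. This is a minimal, hypothetical example (the parser, field names, and schema are all illustrative, not taken from either product discussed in the thread):]

```python
import sqlite3

def parse_price(raw: str) -> int:
    # Hypothetical unit: turn a feed field like "$1,200" into cents.
    return int(round(float(raw.strip().lstrip("$").replace(",", "")) * 100))

def test_parse_price_unit():
    # Unit test: no dependencies outside the function under test.
    assert parse_price("$1,200") == 120000

def test_feed_to_db_integration():
    # Integration test: the parser plus a real (if in-memory) database,
    # exercising the pieces together rather than in isolation.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE inventory (vin TEXT, price_cents INTEGER)")
    feed_rows = [("VIN123", "$1,200"), ("VIN456", "$950.50")]
    db.executemany(
        "INSERT INTO inventory VALUES (?, ?)",
        [(vin, parse_price(raw)) for vin, raw in feed_rows],
    )
    stored = dict(db.execute("SELECT vin, price_cents FROM inventory"))
    assert stored == {"VIN123": 120000, "VIN456": 95050}

test_parse_price_unit()
test_feed_to_db_integration()
```

On James’ point above: note that the integration test here is not obviously “more complex” than the unit test; it simply exercises the unit together with another component, asking only one service of each.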
Unit tests traditionally carry more clout with programmers (and savvy testers), while integration tests speak to everyone else, especially the non-technical stakeholders. It might be the unit tests that find bugs earlier in the process, but it is the integration tests that are more convincing/compelling to those outside of the development team. Provided the trust relationship between stakeholders and the team is sound, these tests make them feel that the product is solid and working in accordance with perceived customer desires.
[James’ Reply: This is a marketing argument, then?]
Complex integration tests (that are complex to handle multiple risks, not simply for the sake of being complex) put our stakeholders more at ease than simple unit tests that operate in isolation from one another. Since it is our job to inform our stakeholders on risk, I’d much rather have a suite of unit and integration tests in my arsenal than simply unit tests that only test functions in their separated Goldilocks states.
[James’ Reply: Is there also a technical reason why we need integration testing?]
In short, my testing comes back to consistency a lot of the time. While unit tests make us feel good about internal consistency, integration tests make us feel good about external consistency. The goal of our stakeholders is ultimately to “feel good” about the product, but most call it “mitigating risks to increase profit margins.” Integration tests help us do that in a way that unit tests alone cannot.
[James’ Reply: Is there a rational basis for that?]
Connor Roberts says
[James:…If by product you mean…”]
I should have said “item under test” instead.
[James’ Reply: My reply would have been the same, it doesn’t clarify things to say item under test. My point was that if you are speaking of a testing a unit that is different than speaking of testing an integration of units.]
When we talk about doing integration testing, it is typically about how one feature interacts with another, or about a specific low-level service within a larger feature that we’re testing, both on its own and via integration testing with another service (from another internal or external product). This is regardless of the environment it is in, as I see value in doing it in each along the way: Dev (internal), Staging (prod-like), and Prod.
Thank you though for educating me on the difference between field, system, etc. I am not as consciously competent on that vocabulary as I should be.
[James: This is a marketing argument, then?]
Not at all. The majority of the folks I am trying to appease with my testing story are internal stakeholders (mainly Product Management), since most testers in our org do not interact with the customer directly except in our UX lab where we do real User Testing. Bothers me from time to time, since I like hearing it from the horse’s mouth, but we have to trust our Product Management in this realm. So, this is not a marketing argument at all, it’s a good testing argument.
[James’ Reply: I wasn’t referring to marketing as a group of people who do marketing. I was referring to your testing. You seem to have made an argument based on political calculation, rather than on the need to justify your testing on rational grounds. Because talking about “clout” is political thing, not a technical thing.]
My testing needs to be compelling, for my sake and my stakeholders’ benefit. Integration tests are a part of that much of the time, and they speak better to feature stability than unit tests alone, as they cover how the item under test works with the requested external components.
[James: Is there also a technical reason why we need integration testing?]
I believe it helps challenge our biases about how good a product (software offering) we think we have. It’s easy for teams to feel good about what they have built, but if integration testing can bring us back to reality a bit by exposing flaws, shedding light on better use-case scenarios, providing insight into how we can enhance our flow testing, and in general shining light on weaknesses in the overall architecture, then we’re better off for it in the long run, as long as we adapt those findings back into our maintenance/architecture plan of work for upcoming sprints.
[James’ Reply: I’m looking for specifics… for the mechanisms… so that we can distinguish it from unit testing. Nothing you say here is necessarily different from what we could say about unit testing.]
[James: Is there a rational basis for that?]
You said this in reply to me stating that integration tests make our stakeholders “feel good,” more so than unit tests alone. It’s hard to measure how someone “feels,” but my rational basis for it is that once you have done integration testing on a product for a while, and shown the value of it to Product Management through multiple sprints (i.e. these are the bugs found during unit testing and these are the bugs found during our integration testing), then they start to ask for it as a matter of habit. I’ve had Directors/VPs show up for a sprint review demo, people who may not have been in regular contact with the team otherwise, and ask (unprompted), “How did integration testing go?”, because we’ve been telling that as part of our testing story for similar features in the past. The risks we’ve exposed in that testing have appealed to them at some point, enough for them to feel the question is valuable. (Now, some management folks might just be using a buzzword, but we find that less frequently here than genuine requests.) Of course, some teams answer this better than others, and sometimes it doesn’t apply, as integration testing might not have been needed, in which case the team will explain the context. In most cases, though, management is asking the right questions because we’ve trained them to do so. Not like a trained monkey, but like an informed stakeholder. This goes back to the establishment of a common language (which I still feel is important): the team and stakeholders share a very similar understanding of what certain terms/phrases mean.
[James’ Reply: Nothing here is telling me anything about integration testing or why I might want to do it. See my next post for my attempt to explain.]
Connor Roberts says
[James: Nothing here is telling me anything about integration testing or why I might want to do it. See my next post for my attempt to explain.]
I feel I was very explicit in an earlier reply about the difference between unit and integration tests, as well as about why integration tests provide more value (to both technical and non-technical folks), since integration tests better represent what will happen in reality.
I know that might not be specific enough for you but I look forward to your next post.
[James’ Reply: I just reviewed all your responses. I see about three sentences, total, that address my question, out of all the things you’ve written. Some other commenters have been much more specific.
A lot of what you’ve written seems to treat integration testing as a “common sense” process, by which I mean that you seem to think it is too obvious to bother to define, describe, or explain.
I’d like to help you practice being technically specific about the dynamics of testing methods, but I think we should try that over voice, instead of writing.]
Connor Roberts says
[James:…A lot of what you’ve written seems to treat integration testing as a “common sense” process, by which I mean that you seem to think it is too obvious to bother to define, describe, or explain.]
Yes, I feel integration testing is to testers as walking is to humans. We just do it, because the need is there. (That sounds extremely narcissistic, but it is not intended that way.)
[James:…I’d like to help you practice being technically specific about the dynamics of testing methods, but I think we should try that over voice, instead of writing.]
I’d enjoy that. Being able to grow that skill would be nice. Thank you.
Clinton Billedeaux says
I really had never given it more thought than it’s just the reliance on some other process or system to complete some data manipulation that isn’t directly handled by the SUT.
So curious about the follow up.
Aleksis Tulonen says
Thought about this, which I haven’t done that deeply before.
So far integration (testing) has had these kind of characteristics for me:
– Change in one system causes observable change in another system (by system I mean a product or solution that has a purpose as a whole that varies from those that are part of the same overall system)
[James’ Reply: Or a non-observable change. And does it matter that the part and the whole have different purposes?]
– There has been some kind of mutual agreement related to what kind of outcomes change in one system causes on the other (not being a situation of communicating with aliens, where we don’t have any idea how systems are communicating with each other)
[James’ Reply: Isn’t it still integration even if we are communicating with aliens? What if you are using a component that was produced by strangers, long ago, and there is no documentation. Wouldn’t it still be integration to incorporate it into your product?]
– In case of a problem, I can’t usually determine easily what is the issue as it requires investigation that is related to another system (one being integrated to ours)
[James’ Reply: That is an aspect of the platform testing problem, in my system, but not necessarily integration.]
– Related to previous point, it has usually involved collaboration with people who are not organization-wise located on same team as I’m working at
[James’ Reply: That is an aspect of the platform testing problem, in my system, but not necessarily integration.]
Challenges & questions that come up when writing the previous:
– Where do you draw the line on what is an integration and what is not?
[James’ Reply: Yes. What is your answer to that? I think you gave an answer, above, but I am questioning it.]
– If integration is providing possibility for systems to communicate between each other, and provide services for one another, where do you draw the line for systems? Where does the system end? Where does the integration start from?
[James’ Reply: Good question. What is your answer?]
– Going back to communicating with aliens, can there be an integration if both ends have no clue about each other? Like, very typically two systems are being integrated by strictly agreeing on how to communicate with each other.
[James’ Reply: You could integrate a system on earth with a pulsar in space, couldn’t you? The system on earth might be a clock, and the pulsar might regulate the clock. In this case there is no set of instructions, there is no agreement, and yet certainly it is possible, isn’t it?]
I think there might be a red thread somewhere in there, but I’m just going to throw this out here, as it better demonstrates the confusion inside my head at the moment.
[James’ Reply: I like your questions. In my next post, I will give my answers to them.]
Aleksis Tulonen says
[Or a non-observable change. And does it matter that the part and the whole have different purposes?]
Non-observable change. I couldn’t come up with an example of what that might be. Help me with that? Otherwise, it doesn’t matter that the purpose is different. There could be two systems that have the same purpose and are integrated with each other.
[James’ Reply: This is what you said: “Change in one system causes observable change in another system (by system I mean a product or solution that has a purpose as a whole that varies from those that are part of the same overall system)”
If there is no such thing as a non-observable change, then to say “observable” is merely redundant. And if the purpose being the same or different doesn’t matter, then your entire definition of system disappears.
However, for the record, many changes, millions of them, are occurring in the systems we test that we do not have the option of observing. Yes, in principle, most of them could be made observable through the use of debuggers or chip emulators. But perhaps not in a practical sense during ordinary testing.]
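[Editor’s note: one concrete, hypothetical example of a change that is real but not practically observable during ordinary testing is internal caching: the component’s outputs are identical before and after the change, so only an instrumented, debugger’s-eye inspection reveals it. A minimal sketch (all names illustrative):]

```python
class Memoized:
    """Hypothetical component: its behavior is identical before and
    after the internal change; only the hidden cache differs."""
    def __init__(self, fn):
        self._fn = fn
        self._cache = {}          # internal state, invisible to callers

    def __call__(self, x):
        if x not in self._cache:  # this branch mutates hidden state
            self._cache[x] = self._fn(x)
        return self._cache[x]

square = Memoized(lambda x: x * x)
first = square(4)    # internally: the cache gains an entry
second = square(4)   # externally: the output is indistinguishable
assert first == second == 16

# Only by reaching inside (as a debugger would) can we observe
# the change that ordinary black-box testing cannot see.
assert square._cache == {4: 16}
```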
[James: Isn’t it still integration even if we are communicating with aliens? What if you are using a component that was produced by strangers, long ago, and there is no documentation. Wouldn’t it still be integration to incorporate it into your product?]
Thought about this again, and I guess it doesn’t matter if the two ends don’t know each other. It’s more about being able to build a connection between two systems that will empower either or both of them. Of course, in order to do this you have to gain an understanding of both ends, to be able to transfer information from one system to the other.
[James: Yes. What is your answer to that? I think you gave an answer, above, but I am questioning it.]
I view integration now as a connection (two-way or one-way) between two systems that enables them to benefit from each other by providing information or otherwise utilizing the data provided by the other system. It makes them a larger system that is able to meet the needs of the users better (in the ideal case) than they could have without being integrated.
[James: Good question. What is your answer?]
I think systems originally end at the limits of what they are capable of doing by themselves. When they get integrated with other systems, I think they become part of a bigger system of systems, which extends their capabilities. In that sense, integration starts where the subsystem’s capabilities end.
[James: You could integrate a system on earth with a pulsar in space, couldn’t you? The system on earth might be a clock, and the pulsar might regulate the clock. In this case there is no set of instructions, there is no agreement, and yet certainly it is possible, isn’t it?]
That’s a great example. In the light of that example, I’m moving toward agreeing that it doesn’t matter whether there’s an agreement or not. What matters is what is built and what we are able to experiment with.
Aleksis Tulonen says
[James: This is what you said: “Change in one system causes observable change in another system (by system I mean a product or solution that has a purpose as a whole that varies from those that are part of the same overall system)”
If there is no such thing as a non-observable change than to say “observable” is merely redundant. And if the purpose being same or different doesn’t matter then your entire definition of system disappears.]
Typically the purpose of a system is one way for me to distinguish one system from another. But I can’t guarantee that there couldn’t be a situation where two systems with the same purpose are being integrated.
[James’ Reply: Then you are not speaking of the definition of a system. So, what IS your definition of a system? My definition is “a set of things in meaningful interaction.”]
It doesn’t make sense, though, if they were almost identical systems, as you wouldn’t gain anything from integrating them with each other. Still a possibility, though.
[James’ Reply: Why would it be no gain to add identical things together? What about a team of horses? Multi-engine aircraft? Quadcopter?]
Simon Morley says
I’ll answer with an experience report then try and synthesize to answer some of your questions.
In 2005 I was employed specifically in a role as a system integration tester – partly to do the “job,” which was only partly defined, and partly to define it and establish something that others could take over. That job might not exist today, but a number of the ideas are relevant…
So, I was building a system comprising software from different teams. Part of my task was to give a status and assessment of the software to the project team (who would decide whether it was OK to ship or whether urgent action was needed from one or many of the teams). How to achieve that status and assessment was partly in my care: what I tested, which inputs/results from the teams to trust (yes, some teams were more reliable/accurate than others), what tooling support I needed to prepare/create, etc.
The project team and I would discuss needs (of the customer, or later of the users who would take the software), risks, and status around the teams in the delivery pipeline. There was also a lot of discussion with the teams themselves about what they thought their status was and how they saw the potential risks of their software. This was an exercise in exploring assumptions, and responses could range from very honest uncertainty to a more care-free, silo-minded “no risk for us.”
Often these discussions revealed that we were in “no man’s land,” i.e. new territory with no specification or no obvious oracle, so I’d then enlist help from relevant experts to come to some consensus: sometimes before the testing, sometimes during, and sometimes after.
In the early days there was a clear assumption that the process would take care of things – unit, component, integration and system tests would catch faults at the right stage if we followed the process that advocated them. However, the reality dawned (on some sooner than others) that the process and the procedure descriptions didn’t necessarily match, and that they couldn’t.
[James’ Reply: That’s a nice description. Is there any reason in principle that working on the units and testing the units could not reveal such problems? Why do you say “couldn’t?”]
So, to characterize/summarize: (1) I was exploring the gaps between what teams thought they were delivering and what they actually delivered, trying to anticipate ways to “add value” rather than just maintain a specific set of “integration tests” (although there was an ever-growing set of integration test approaches and ideas (heuristics), and repeating scripts/ideas from the teams was one approach); (2) I was surfacing the assumptions coming from the teams, sometimes stated, sometimes not, and testing those ideas; (3) I was capturing which assumptions had a related oracle.
The types of problems typically found ranged from very basic cases (due to non-aligned updates in different components) to memory leaks and race conditions (usually due to system-wide assumptions, or again to non-aligned updates). I.e., there were no typical “integration testing” problems; rather, there were typical “assumptions” that I was rooting out.
Of course, a lot of what I did was a function of the organisation and development processes used then, but the principles apply in many other types of development.
Today an integration test session might look at risks and assumptions connected to updating a third party library – what’s its history, previous types of problems connected to it, how is my/our SW dependent on it, open bug reports on it, etc.
Note, I haven’t directly answered the question of “slightly” or “very” integrated. I have made a first stab at “How is integration testing anything more or different or special?”
[James’ Reply: Tell me if this is a reasonable summary of your points… integration testing involves reviewing the architecture in which components fit, reviewing the results of lower level testing, unit update status and how they might be uncoordinated, discussing the status of assumptions and surfacing here-to-for hidden assumptions, and dependency review & analysis.]
Simon Morley says
I said “couldn’t” – because they could only ever match after the fact – in the words of Sommerville (Large-scale Complex Systems Engineering); “The process definitions set out the intentions of the system designers as to how the system should be used but, in reality, the people in the system interpret and adapt these in a range of different ways depending on their education, experience and culture. Individual and group behaviours also depend on organizational rules and regulations as well as ‘organizational culture’ – ‘the way we do things around here’. ” – he’s describing characteristics of a socio-technical system.
[James’ Reply: Sounds like you are talking about tacit knowledge. And by definition, tacit knowledge is not codified in explicit terms. Nice!]
On your summary – broadly yes. To clarify, I wasn’t necessarily focusing on “lower level” tests, but on any and all results before the integration step I was to start. (So my assumption was not that they had to be lower level.) The “how” in the unit update status encapsulates a skill set / bag of approaches.
When thinking of integration levels, mapped on a slightly-to-very scale, I tend to rephrase, changing the point of view so that I can better define the term and understand the real situation.
[James’ Reply: Good idea.]
Integration of things has, for me, two major dimensions: complexity and freedom. Complexity, I believe, is fairly easy to understand, as it is a measure of the complexity of the integration, derived from the number of integrated elements as well as from the number of dependencies influencing the first-order integration.
[James’ Reply: I definitely don’t understand that. I would never express complexity as the “number of” things. Are you saying that two products that consist of “17 things” are equally complex? And how do you decide what counts as one thing? Certainly when you add something to a system you potentially increase its complexity, but of course you could also decrease complexity by adding something (for instance, if you add an input filter).]
Freedom is, for me, a measure of how tightly two things are integrated. By this I have in mind the tight vs. loose coupling of components.
[James’ Reply: What exactly do you mean by that and on what scale is that measured and by what instrument do you measure it?]
At the same time, it is good to consider the networks of paths through which components transmit to and influence one another.
[James’ Reply: Okay… How do you go about considering them?]
To get back to the original question: I believe the difference you seek to highlight relates to a mix of the two, since a “very integrated” level can be achieved both with loosely coupled complex integrations and with tightly coupled simple scenarios.
I do not know if I have made myself clear, but I am open to adding details where needed. Am I too far from your response?
[James’ Reply: I want you to open some of the black boxes you presented here. I think you may have some important insight, but it’s not clear to me, yet, exactly what you are talking about.]
Based on my discussion with James, some reading, and a pass through a few of the above comments, I want to share some more points.
Why do we use integration in software development?
– The concept of integration allows us to divide, design, and program the different tasks of an application or system in different components or subsystems.
– This division makes software development easier.
[James’ Reply: Basically you are saying that we build things from parts.]
Where do we use integration?
1. If a standalone application is complex, we divide it into different components and integrate them to deliver a single executable.
2. In a system created by integrating different subsystems (the subsystems may be located in a distributed environment). This might involve third-party applications as well.
Because we divide and integrate the components, dependencies are created between these components/subsystems, and interfaces (between two components) carry the data/information flow between them.
So the main aim of an integration test is to see that the integrated components and their interfaces fulfill, in the desired way, the dependencies of the components that depend on them.
[James’ Reply: Yes, dependencies are a major aspect of integration risk, and hence a major target of integration testing.]
So some of the main challenges in conducting an integration test are:
– To know what dependencies a component/subsystem is fulfilling, and how.
– To create the test information/data that will be exchanged between them.
[James’ Reply: That’s a big part of it, yes.]
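Anita's two challenges, knowing what dependencies a component fulfills and creating the data exchanged between them, can be sketched in code. The following is a hypothetical illustration (the names `PricingService`, `OrderProcessor`, and the SKUs are invented for this sketch, not taken from any real system): one component depends on another through a narrow interface, and the integration check feeds real data across that interface instead of a stub.

```python
# Hypothetical sketch: two components joined by a narrow interface.

class PricingService:
    """Component B: fulfills the 'price lookup' dependency."""
    PRICES = {"apple": 100, "pear": 150}  # prices in cents

    def price_of(self, sku: str) -> int:
        return self.PRICES[sku]  # raises KeyError for unknown SKUs


class OrderProcessor:
    """Component A: depends on anything with a price_of(sku) method."""

    def __init__(self, pricing):
        self.pricing = pricing  # the dependency is injected here

    def total(self, skus):
        return sum(self.pricing.price_of(s) for s in skus)


# A unit test of A uses a stub, so A's assumptions about B go unchecked:
class StubPricing:
    def price_of(self, sku):
        return 100


assert OrderProcessor(StubPricing()).total(["apple", "pear"]) == 200

# Integration check: the real components exchange real data.
real = OrderProcessor(PricingService())
assert real.total(["apple", "pear"]) == 250

# The integration-only risk: data A sends that B cannot fulfill.
try:
    real.total(["durian"])  # a SKU the stub happily priced
except KeyError:
    print("unfulfilled dependency surfaced only in integration")
```

The stubbed unit test passes no matter what data Module A actually sends; only when the real components are joined does the unfulfilled dependency show up, which is Anita's point about creating realistic exchanged data.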
Chris Tranter says
Ok, let’s have a go at this from a high level. In the process of developing a unit of code, or a complete system, I have created boundaries around this activity and made assumptions concerning the behaviours of the external elements that I rely on (or that rely on me). When I remove these boundaries and test using the real code units or systems, integration testing should mitigate the risks of incorrect assumptions and completely unknown impacts. As ever, way more to think about here than it seems at first glance.
[James’ Reply: That’s a big part of it, yes.]
Sean McErlean says
Surely the concept of how tightly / loosely integrated is analogous to module coupling?
[James’ Reply: Analogous??? It seems to be identical! Thank you for noticing that. The last time I read about coupling and cohesion was in a textbook in 1989 or so. It’s high time I refreshed my mind on that material.]
The mere fact of the two systems running in separate processes rules out content coupling (probably), but all of the other concepts apply. For example, common coupling may occur between two processes that are using a shared memory region. More integrated systems have more interdependency, more coordination, and more data flow. Starting from there, the level of integration is probably not a scalar variable. Two pieces of software might be very tightly integrated in terms of their data but loosely integrated in terms of control flow.
I think possibly at this level, if you had some theoretical set of complete unit tests U, you would cover all those sorts of dependency considerations. But that (1) is impossible and (2) assumes behaviour is well defined enough to generate those tests. So integration testing is both trying to reduce the problem space to something tractable and trying to expose faulty assumptions around the software.
However, I have a nagging feeling that this doesn’t cover it. There is also the idea that the system is more than the sum of its parts. Again, by analogy to lower-level components, you might have a function that perfectly calculates the product of any two numbers in a defined range, but there is no guarantee you are going to use that function in the right way. And I suspect we are also prone to the fallacy of composition when writing software. So integration testing is also a way of checking that the overall system behaves in the way it is assumed to.
[James’ Reply: Fantastic exposition, Sean! If we add to the concept of dependencies (coupling relationships) the concepts of emergent properties, assumptions and otherwise tacit knowledge, diachronicity of development and operation, and masking/filtering (which is why adding a component to a system might actually make it simpler to test than the sum of testing its parts), I think we would be approaching a robust notion of integration risk and integration testing.]
Sean McErlean says
I was trying to think of concrete examples of emergent problems that wouldn’t be caught by infinite unit tests. I came up with things like:
1. Timing. If A reads B’s output (cache write, file, etc.) but does so at the wrong time, then 100% functionally correct components might produce erroneous results.
2. Operational. Each unit reports correctly, but the combination produces unintelligible or unmanageable reports. Units can run individually but can’t run on the same machine because they overwrite each other or have conflicting dependencies, etc.
3. Threading. A correct unit that is not thread safe is used by a threaded unit.
4. Cascades or other emergent behaviour. Unit A asks for a replay of data. This puts increasing load on Unit B, which means Unit C has to ask for a replay, which puts further load on the system, which means more units have to ask for replays, and you get a cascading request storm. That’s a model that can likely be prepared elsewhere.
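Item 1 on this list (a correct writer and a correct reader composed at the wrong time) can be made concrete. A hypothetical Python sketch with a deterministic interleaving, using an invented file format and a generator's `yield` as the pause point: the writer writes its output in two steps, the reader runs between them, and two individually correct components produce a wrong combined result.

```python
import os
import tempfile

# Hypothetical sketch of the timing problem: B writes a record,
# A reads it. Each is correct alone; the interleaving is the bug.

path = os.path.join(tempfile.mkdtemp(), "balance.txt")


def writer_b(amount: int):
    """Unit-correct: after it COMPLETES, the file holds the amount."""
    with open(path, "w") as f:
        f.write(str(amount // 1000))  # step 1: the thousands digits
        f.flush()
        yield "mid-write"             # pause point to force the race
        f.write(str(amount % 1000).zfill(3))  # step 2: the rest


def reader_a() -> int:
    """Unit-correct: parses whatever the file currently holds."""
    with open(path) as f:
        return int(f.read())


# Correct order: the write completes, then the read -> right answer.
for _ in writer_b(42_000):
    pass
assert reader_a() == 42000

# Racy interleaving: the read lands between the writer's two steps.
step = writer_b(42_000)
next(step)               # writer has done only step 1
partial = reader_a()     # reader sees "42", not "42000"
assert partial == 42     # a wrong result from two correct units
print("race produced", partial, "instead of 42000")
```

Unit tests of `writer_b` (run to completion) and of `reader_a` (given a complete file) both pass; only the composed timing exposes the fault, which is why a version flag or atomic rename, as Sean suggests later, is an integration-level fix.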
Then I considered that these are all maybe ultimately errors of specification that could be reduced to the unit level if you had the right oracle; maybe the cache write needs a version flag that prevents an erroneous read. So I wonder: can any integration-test problem be reduced to a unit-testing problem by a more precise restatement of requirements? All fixes for integration problems are ultimately changes to one or more units (though maybe the environment muddies the water).
That is an interesting theoretical problem, but we come back to this: testing is about knowing and busting assumptions. You aren’t going to get a perfectly specified program, and we have trouble enough reasoning at the individual component level. So maybe the assumption part (or maybe a complete lack of any thought) is the single biggest risk factor.
[James’ Reply: Should I know you? Are you writing or speaking on testing, publicly?]
Sean McErlean says
No. Long time listener first time caller 🙂
[James’ Reply: Call more often.]
Maxim Mikhailov says
When diving into definitions, it is usually good to start with the historical meaning of the terms. This one comes from the Latin “integer” (whole, entire). Integration is a union of several parts into a whole entity. There can be different kinds of integration: physical or mental, stable or temporary. Integration testing stands for the testing (exploration) of this integration. You should do that kind of testing because the union of two parts is not always just the sum of the individual qualities of the parts; it may exhibit new qualities (the quality/quantity dialectic, Hegel).
Thus I suppose integration testing makes sense (has meaning) every time the integration gives birth to new qualities that didn’t exist before the integration.
[James’ Reply: Sweet! Keep going…]
For instance, you have an arrow and a bow… sorry for such a weird example =) Let’s say you tested the items separately and you’re quite sure of their qualities. Now what if you combine them? You will discover a new quality! Let’s call it “far throwing.” It didn’t exist before you integrated these parts. Now it deserves to be explored (tested), just as the arrow and the bow themselves do.
[James’ Reply: This might be the case if a bow and arrow were designed independently for purposes that had nothing to do with each other. But bows and arrows are designed for each other, just as two parts of a soon-to-be-integrated system might be. Isn’t it possible, even feasible, to test the arrows alone, and test bows alone, and then to infer that they would work fine together?
In fact this is exactly what happens in real life, because when you use a bow, you use it with arrows that were not used in the testing of that bow (same design, but not exactly the same arrows). But do archers test specific arrows with their bows? No… Wait… I’m wrong about that. Good thing I googled it. See item 14 of this article: http://www.fieldandstream.com/articles/hunting/2012/10/15-steps-perfect-arrow-flight “With a well-tuned bow matched with the right arrows, there should be little difference between the point of impact of your field points and your weighed and spin-tested broadheads.”
Well, still, it’s possible for them to be unit tested separately…]
In computer science, the examples provided by Bianca (code/HW dependencies, performance, interfaces) are all dimensions that are possible sources of new qualities caused by the integration. They could be predicted based on knowledge of the individual components, the integration design specification, etc. Though during integration testing you may find qualities that were not designed.
[James’ Reply: I like that phrasing. “You MAY find…” Exactly. Part of the reason for integration testing is not necessarily that it is impossible to discover things in unit testing, but rather because some things might be way easier to find when we put things together.]
I didn’t get the idea of weak/strong integration. For me, the strength depends on the context. For example, you have a USB stick and a laptop. Are they integrated? Is that integration weak or strong? Obviously it depends at least on whether the stick is plugged in =) Hence I don’t think that as testers we need to deal only with “strong” integrations and forget about “weak” ones.
[James’ Reply: I definitely test differently when things are weakly integrated and so do you. Weak integration means little dependency, strong integration means strong dependency. An example of weak integration would be a new, well-behaved plug-in to a browser, compared to a new build of a browser whereby I integrated a new feature by changing the code in 17 places. An example of very weak integration would be two programs that are both Windows compatible and which allowed you to cut and paste between them but otherwise had no dependencies or interactions.]
The main criterion here should be the production of new qualities by the integration. If the union doesn’t result in new qualities, then it might be redundant to perform integration testing, and unit/module testing is enough. A good question is: “How do we tell whether the integration produces new qualities or not?”
[James’ Reply: Emergent properties are definitely one dimension of concern. But you have not mentioned dependencies, which are also very important. When I wrote software to make reports for a client, once, I used a library that was not IE compatible, and it turned out that organization could ONLY use IE, so my works-perfectly-great system had to be torn up and re-done so that it could integrate with their local platform. I don’t think that has anything to do with emergent behaviors.]
PS: Let me know if you are interested in my thoughts about the next level – moving to so called “system testing” =)
[James’ Reply: I will be interested in whatever you have to say if you keep bringing this kind of insight to the table.]
Shrini Kulkarni says
To me, in order to understand the idea of “integration” we need the following:
1. Boundary: it is the boundary that helps us distinguish between two systems and determine where one ends and the other starts.
[James’ Reply: There doesn’t need to be a boundary between two systems when they are integrated. Integration may involve erasing boundaries.]
2. Interactions: information exchange between two or more systems that results in a change in the state of the systems participating in the interaction.
[James’ Reply: I agree, but interactions are not only about exchanges of information. For instance, heat can be a way that one component affects another.]
3. Change in state: a change in the properties of systems due to interactions between two or more systems across their boundaries.
[James’ Reply: Okay. But boundaries aren’t necessarily present.]
Two or more systems are considered to be integrated when they can interact with one another across their boundaries, resulting in a change of state of one or more of the systems.
[James’ Reply: Can’t we just say integration is when interdependent parts are joined together? Either there are boundaries or there aren’t, but it would be integration all the same.]
The key idea here is “boundary,” as we can distinguish between the parts of a system, and between different systems, only via boundaries.
[James’ Reply: How is that a key idea? I can integrate two systems by rewriting and reworking them into one system without any boundary between them.]
Only systems that are integrated can interact in a way that results in a change of state of the systems participating in the interaction.
[James’ Reply: I think you can have interaction without integration.]
Systems that interact with one another, resulting in a change in their state, are considered to be integrated.
[James’ Reply: In that case I am integrated with my software. Are you sure that makes sense?]
I guess the latter is true. Contrary to many of the comments here, there need not be any physical putting of systems together: systems separated by physical space can interact too, and hence could be integrated.
Coming to software –
I guess every software application is integrated with the operating system on which it runs. From this – it follows that every software application (when running) is integrated with at least one other software or hardware component.
To me, integration testing involves:
1. Identification of systems through their boundaries (not the boundaries of the domains of information that the systems process, but of the systems themselves) and of the state variables of the systems; this is modelling.
2. Identifying interactions between the systems while they are in operation, bringing about some useful outcome for the users of the systems (business scenarios, for example); this is the activity of discovering the motivations or objectives of the creators/designers of the system(s).
3. Inspection of, and inference about, the state of the systems after interactions, to see whether they agree with the expectations in the solution design.
What do you think?
[James’ Reply: I think the core issue in integration is dependencies. It’s not just that one system influences another, but that it depends upon another, that makes them integrated. But boundaries are an interesting part of integration testing, too.]
Alex Henzell says
I’m not a specialist tester, but I’ll have a go…
My thoughts are that one purpose of integration testing could be to check that everything that’s relevant has been specified, for example:
* Module A Unit Test: When Event47 occurs Module A sends the output string “SIGNAL_K44” to Module B
* Module B Unit Test: When Module B receives the input string “SIGNAL_K44” it performs Action82
* Integration Test: When Event47 occurs the integrated system performs Action82
[James’ Reply: I would call these checks rather than tests. Otherwise, this is fine.]
The integration test checks that Module A’s conception of “SIGNAL_K44” is the same as that of Module B.
For example, if when designing the system we’d forgotten to specify the type of string to use, Module A could create a string with a null-terminator character while Module B expects a string without one. The unit tests would both pass, but the integration test may reveal the error.
Would you consider this to be integration testing or something else?
[James’ Reply: That check belongs to integration testing, yes.]
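Alex's null-terminator example can be written down directly. A hypothetical sketch (the module names and the `SIGNAL_K44`/`Action82` labels follow Alex's example; the code itself is invented): each unit check passes against its author's reading of the spec, and only the end-to-end check, Event47 should produce Action82, catches the disagreement.

```python
# Hypothetical sketch of the SIGNAL_K44 mismatch.

def module_a_on_event47() -> bytes:
    """Module A's spec, as its author read it: emit the signal,
    null-terminated like a C string."""
    return b"SIGNAL_K44\x00"


def module_b_receive(signal: bytes) -> str:
    """Module B's spec, as *its* author read it: a bare signal
    string selects the action."""
    actions = {b"SIGNAL_K44": "Action82"}
    return actions.get(signal, "no-op")


# Both unit checks pass, each against its own reading of the spec:
assert module_a_on_event47() == b"SIGNAL_K44\x00"
assert module_b_receive(b"SIGNAL_K44") == "Action82"

# The integration check exposes the disagreement:
result = module_b_receive(module_a_on_event47())
assert result == "no-op"  # Event47 did NOT produce Action82
print("integration check caught the mismatch:", result)
```

Neither unit is wrong in isolation; the bug lives in the unstated shared assumption about the wire format, which is exactly why this check belongs to integration testing.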
So, to your question: What’s the difference between 1 and 2:
1) The relationship of Module A to Module B regarding “SIGNAL_K44”
2) The relationship of Google to Bing regarding a business name
My initial thoughts are that the following differences exist and may be relevant to whether (and to what degree) we consider a system to be ‘Integrated’:
* Module A knows about Module B, and its output is intended specifically for Module B (and only Module B)
[James’ Reply: If we remove the anthropomorphizing language we can say that module A was designed with module B in mind. It was intended by its designer to satisfy the needs of module B.]
* Any change we make to the output of Module A would require a change to be made (or at least considered) in Module B
[James’ Reply: I recently learned there is a word for that: connascence.]
* Bing’s input requirement is less specific than Module B’s, e.g. it should accept a business name with letters in either lower or upper case, it should accept reasonable spelling errors, etc.
* Bing’s output requirement is also less specific than Module B’s, e.g. the business’s home page should be near the top of the results but could be in first place or second or third…
* The Google & Bing relationship relies on the pre-existing concept of a business name, whereas the Module A & Module B relationship relies on the system’s ad-hoc concept of “SIGNAL_K44”
[James’ Reply: Bing IS integrated with Google, but in a looser way. See how this sort of thing was discussed in 1983? ]
Shrini Kulkarni says
>>> There doesn’t need to be a boundary between two systems when they are integrated. Integration may involve erasing boundaries.
When I am dealing with only one system (fully isolated from any other system), can there be any integration? It is just one system, and that’s all.
[James’ Reply: The process of integration makes one system out of more than one. Of course we can do that by dissolving and remaking the boundaries of internal parts.]
For integration we need at least two systems, and how do we distinguish between two systems? Via a boundary.
[James’ Reply: You are thinking of loose integration only. When you eat food, your body digests that food and breaks down physical and chemical barriers. Some of that substance becomes part of you. Your body INTEGRATES some of that food into itself. This is tight integration. There remains no boundary between that grain of rice and the rest of what you are.]
When integration erases boundaries (“combined” or “joined up”), it ends the integration, until this new combined system encounters another system.
[James’ Reply: Do you really think that you don’t have to do integration testing if a developer tells you he eliminated an interface between one set of code and another set of code that he has now joined together into one code base? No, you don’t think that.]
Boundary becomes a key idea in integration, as it is through boundaries that we come to know where one system ends and the other begins. Where interactions within a system (within the boundaries of the system) are related to integration, the boundary becomes important.
[James’ Reply: I only care about that when I am dealing with an integration problem that involves two or more loosely integrated sub-systems.]
>>> Can’t we just say integration is when interdependent parts are joined together?
“Joined” and “together” are problematic, or restrictive in some sense. They bring in the physical dimension of “nearness.” I guess the idea of integration stretches beyond physical nearness; wireless/radio communication and the internet are examples of such interaction.
[James’ Reply: I don’t see the problem. Together simply means we are considering them as one entity. This entity consists of parts that have interdependencies. That’s all I need in order to have an integration risk to deal with. The nature of that risk depends not necessarily on boundaries (although boundaries may exist) but it definitely involves interdependencies.]
>> I think you can have interaction without integration
Looks like this is true. I thought interaction was a manifestation of integration. Let us say I meet someone whom I have not met before and we chat for a few minutes; I can say we interacted, not integrated. When we both meet at a coffee shop a few weeks later to discuss a common interest, is that integration, or pre-integration?
[James’ Reply: You might say that there was integration but that it is subject to the Proportional Principle, which says that a quality may exist in some degree, rather than necessarily all or nothing. We could say that integration is what makes interaction possible… but my point was that what people NORMALLY call integration is when two systems don’t merely interact but also depend on each other.]
The sun radiates light all around without bothering about who gets impacted. Is that an interaction? All life on earth is dependent on the sun. Hence all life on earth is integrated with the sun?
[James’ Reply: We are quite strongly integrated with the sun, as anyone who has ever experienced the sun suddenly disappearing from its place in the center of our solar system knows!]
>>> In that case I am integrated with my software. Are you sure that makes sense?
As bizarre as it might look: if I am dependent on my software, let us say there is a program that runs in a pacemaker implanted in my heart, then I am integrated with “that” software. With technology enabling nano chips, we will have software running in such chips to regulate things like blood sugar: an example of software (and hardware) integrated with a human.
[James’ Reply: That may be so, but that is not what we are referring to when we speak of integration testing, right? I think we are talking about testing a product that is comprised of parts that have been placed in a state of interdependency. Of course, they interact, and those interactions may or may not occur across a physical or logical boundary.]
But I get your point: in “normal” circumstances, saying a human is integrated with software does not make sense.
What I meant was that two systems that interact, resulting in a change of state, are considered to be integrated. If we consider two systems (only two, hence the boundary is important), there can be one-way interaction, like the sun impacting, through light and heat, say, one human being. When we use the term interaction, it implies dependency. Heat from the sun impacts life, but what does life on earth do that impacts the sun? (Maybe we would need to create a big enough impact, and do it closer to the sun.)
So we can have one-way interaction and two-way. Two-way interaction indicates dependency; hence it is a requirement for two systems to be integrated.
[James’ Reply: One-way interactions can exist, and one-way dependencies.]
Influence is an example of one-way interaction. A powerful motivational speech by a thinker influences many in the audience, but not the other way around (most of the time).
Jokin Aspiazu says
Let me tell a story here about integration testing.
I once met a girl whom I liked. For some time we would date; sometimes we would go for dinner, or take a ride on my motorcycle, or have breakfast together.
Every time we performed this kind of integration test, we had a feedback round so we could check whether we were both getting what we expected, without compromising our own isolated systems; we wanted to know if we could trust each other to the degree we were expecting.
Jump forward some years, and this lady has become my wife. Getting back to the first example, where two systems were communicating by writing and reading one file: now there are plenty of systems writing and reading this file, while other files are generated, some with the same name, some with higher priorities that require halting the whole system, and others that get run in new background processes so we don’t need to care about them.
And this is why, from time to time, we try to have dinner just the two of us, or go for a ride on the bike, so we are able to re-check our first integration tests and make sure we get the expected result, so that the mutual trust upon which we have built a lot of new things won’t fade away without us noticing it.
[James’ Reply: Wow! Great story.]
Heiki Roletsky says
I’m not sure if somebody has already taken the same view as me (above); I didn’t read all the comments. But first of all I thank you, James, for offering the challenge.
Here is my short view about integration (intentionally, I am not talking about testing yet). Integration is something that cannot be defined as a unit, nor as a system. That means it is some kind of connection between
— 2 or more units or
— unit(s) and system(s) or
— system(s) and system(s)
E.g., if the unit or system is my car, then the integration is my car in traffic with traffic lights. Or if the WordPress application is the unit or system, then the integration is the WordPress application in my web browser on a particular OS.
So, to put it into a testing context, I would propose that integration testing is:
— finding risks in the integration (the car doesn’t stop at a red traffic light; the traffic light doesn’t show a red or green light; the WordPress application, used for publishing texts written through Google Chrome running on Windows 7, doesn’t publish my text);
— investigating those risks by experimenting with different scenarios in that integration (driving the car in traffic with automatic traffic lights, running the traffic lights manually, and letting self-driving cars drive; inserting, editing, and deleting texts in the WordPress application, in Google Chrome, running on Windows 7);
— evaluating the outcome (explicitly stating what happens), e.g., the car stopped or did not stop at the red light; texts are or are not inserted, edited, or deleted.
I may well be missing something here, but those are my ideas about integration testing.
Marek Langhans says
Definitely looking forward to your point of view on this matter. I’ve always stated that integration testing doesn’t exist. By that I mean it’s a broad term used to mean different things. I don’t refute systems integration as such. And testing systems that are being integrated does suggest using the term integration testing, or system(s) integration testing, but does that help us?
[James’ Reply: I know how you feel. In my RST methodology I don’t mention integration testing as such. I talk about flow testing, scenario testing, and risk-based test design, all of which handle some aspect of it. But I do think integration testing exists and that the term can be used in a meaningful, practical way.]
From a tester’s perspective, since a bug found within integrated systems is a bug that is reported against a certain system, not against any integration, it always has an impact on the functional or non-functional attributes of a certain system. In the end, what is tested are those systems and their functionalities and capabilities/attributes.
[James’ Reply: Here you have to watch out not to fall into the subsumes fallacy. See page 9 of this: http://web.cecs.pdx.edu/~hamlet/subdency.pdf. Basically, even though one kind of testing might seem to cover all that another kind covers, it might not actually be the better technique to use.]
You perform functional and/or non-functional tests under different conditions than when the individual systems were tested separately, but still the same systems, and the same functions and attributes as in previous phases, are tested.
[James’ Reply: Not necessarily, but possibly. You might actually test less when testing the integrated product, and yet it could be better testing than testing on a unit level. Can you see how that could be possible?]
Even the misshapen bicycle may be comfortable, reliable, quick, and functional on all levels. The only problem may be that it’s not good-looking.
Thomas Ender says
[James: I know how you feel. In my RST methodology I don’t mention integration testing as such. I talk about flow testing, scenario testing, and risk-based test design, all of which handle some aspect of it. But I do think integration testing exists and that the term can be used in a meaningful, practical way.]
I was waiting for this point. Did you change your mind? Why do you think explicit integration testing exists and that the term can be used in the way you describe?
[James’ Reply: I will blog about that. Integration testing exists. I don’t think it’s a high priority to study as a subject unto itself but yes, it exists and it is interesting. If you read the comments you will already see that some people have provided good reasons to think about it.]
Hrushikesh Waikar says
What I’m reading here is mostly about systems and their integration. But we also do a subset of system integration testing at the unit/component integration levels.
In one way, I could say we don’t need to check the success of a build when the individual components/units are working fine. Having said that, people working in the same group, and understandably having the same reference materials, still produce conflicting outputs, causing build failures. Systems written or developed by different teams can also have communication problems, which to me is the reason to test the integration of the systems.
[James’ Reply: Yes. See Conway’s Law.]
Now I can see these can be classified in many ways, such as:
Direct communication problems, which are related to the protocol followed for the communication.
Indirect communication problems, which are more often the result of a dependency that might not have been assessed before integrating the systems, like synchronization speed between components or network capabilities.
Another set of problems may arise from the compatibility of systems due to their upgrades (for example, System A has a new version to address a set of issues, which System B was not aware of or was not assessed for).
[James’ Reply: Good examples.]
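The upgrade problem in the last example (System A ships a new version that System B was never assessed against) is often screened with a version-compatibility check before deeper integration testing. A hypothetical sketch, assuming the components follow semantic-versioning conventions (the function names are invented for this illustration):

```python
# Hypothetical sketch: flag integrations that need re-assessment
# after an upgrade, using semantic-versioning conventions (a new
# MAJOR version signals incompatible interface changes).

def parse(version: str):
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints."""
    return tuple(int(p) for p in version.split("."))


def needs_reassessment(tested_against: str, now_deployed: str) -> bool:
    """True when the deployed version of System A differs from the
    version System B was tested against in a way semver says may
    break the integration."""
    old, new = parse(tested_against), parse(now_deployed)
    if new[0] != old[0]:      # major bump: incompatible API changes
        return True
    return new[:2] < old[:2]  # or a downgrade below the tested minor


# Suppose System B was last assessed against System A 2.3.1:
assert needs_reassessment("2.3.1", "2.3.5") is False  # patch: fine
assert needs_reassessment("2.3.1", "2.4.0") is False  # additive
assert needs_reassessment("2.3.1", "3.0.0") is True   # major bump
assert needs_reassessment("2.3.1", "2.2.9") is True   # downgrade
```

Of course this only screens for *declared* incompatibility; as the comments above note, the riskier cases are the undeclared assumptions, which no version number encodes.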
I think integration means that if we build two systems independently and join them as one system, we need to check various points:
1. The integration of those systems doesn’t affect any of the functionalities of the individual systems, and they can work together.
2. The communication between the two systems goes smoothly, as with a shopping website and a payment gateway.
3. There should always be mitigation in case the communication between the systems fails.
[James’ Reply: That’s a good start.]
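The three points above can be sketched in a few lines of Python (all names invented; this is only an illustration, not the commenter’s system): the shop keeps working, talks to the gateway, and degrades gracefully when the gateway is unreachable.

```python
def working_gateway(amount):
    """Stand-in for a payment gateway that is up."""
    return {"status": "paid", "amount": amount}

def flaky_gateway(amount):
    """Stand-in for a payment gateway that is down."""
    raise ConnectionError("gateway unreachable")

def take_payment(amount, gateway):
    """Shop-side call, with a mitigation path when the integration fails (point 3)."""
    try:
        return gateway(amount)                         # point 2: smooth communication
    except ConnectionError:
        return {"status": "queued", "amount": amount}  # mitigation: queue for retry

print(take_payment(100, working_gateway))  # {'status': 'paid', 'amount': 100}
print(take_payment(100, flaky_gateway))    # {'status': 'queued', 'amount': 100}
```

An integration test here would aim the shop at a real (test) gateway, since the stand-ins encode only our assumptions about how the gateway behaves.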
Doug Buck says
Truly fascinating topic, one I shall think further about when I’m not supposed to be cooking dinner.
My initial thoughts are: do I understand enough about the subdivision of what can be integrated to even begin thinking about a deeper understanding of the special cases where the whole is greater than the sum of its parts?
Doug Buck says
So I have now thought further, and noted down my thoughts about the question of integration testing, and have come to the conclusion that I am unable to adequately explain to myself, in enough detail, what the answer is. I see the question more clearly than I did. As I see it now, it’s about what makes integration behave in a way that can only be tested in that state. I can see the edges of the answer, but I am unable to explain it. I can see in the comments where people’s thoughts are going, but I couldn’t take their formed thoughts and improve my own understanding from them. The best I can currently do is:
Integration testing is investigating the impact of deliberate or accidental changes in separate systems, once they are able to interact in a way that permits the passage of energy (in forms such as data, heat, or sound), for outcomes of this interaction that are both desired and undesired.
[James’ Reply: That is not a bad take on it, man. I have had long discussions about this with Michael Bolton, and actually many of the comments have helped clarify my own thoughts. This is not an easy subject. I will be writing an article about this, soon, though, based partly on how all this discussion has stirred my mind. Thanks for contributing.]
Doug Buck says
Thanks, I appreciate the response. I feel that I’m only at the most basic understanding of the question. The replies from the other readers have really made me think hard about what I thought I understood.
I look forward to reading what you write about it.
Florian Jekat says
I thought two days about the topic of integration testing and integration on its own. I’d like to share my thoughts with you.
The first thing that came to my mind was integration as something that brings single parts together forming a bigger thing. What is the bigger thing?
The bigger thing, the bigger picture, is the software, the product. And what is software? Software is an implementation of an abstract idea, or a solution to a problem. Using the engineering heuristic of breaking things into smaller pieces for easier understanding/handling, a big picture breaks down into smaller ones. To build the whole big picture, all the small pieces have to be captured and integrated into a new thing. This new thing is bigger than the sum of its pieces.
Back to the idea that software is an implementation of an idea. Ideas can be integrated into the big picture. Lowering the abstraction level, source code is the last step in implementing the software.
Starting from a human, societal perspective, I asked myself what is essential about integration. I see three parts which are interesting:
knowledge, because to be integrated it can be helpful to know what you can do, where you are, what the history is, …
interactions, because integration covers more parts than one. But wait: I can interact with myself as well as with others. At first I thought there are only interactions between different parts, but I can interact with myself. Am I an integration? I cover this question later.
situations, alias context, as a combination of interactions, knowledge, and environment.
Combining these ideas, I can think of a class in source code as an integration, too. It integrates variables (knowledge or state), interactions, and context. Classes are bundled together into components, etc. And the last integration is performed with the user: software is integrated into a social context with a user.
Regards from Germany.
[James’ Reply: What happens when you get concrete about these ideas? What is integration testing?]
Chandra Golla says
Integration testing is the process of testing conducted when two or more individual modules are connected through interfaces and combined as one system, checking their I/O communications against system requirements. It is also programmatic testing, as stubs and drivers are required to test the integrated system. System architecture plays an important role in integration, as it depicts the standards of the interfaces (input and output data standards, processing time, error handling, integration process restoration, etc.).
Again, as James outlined, it’s a very vast subject and can go in many directions.
[James’ Reply: This is not bad, but you must understand that when I integrate two things I have written, I may re-design or even eliminate the interface completely.]
Looking forward to reading your detailed writing.
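Chandra mentions stubs and drivers. A minimal sketch of both, with invented module names (a stub stands in for a module below the one under test; a driver stands in for the caller above it):

```python
def tax_service_stub(gross):
    """Stub: stands in for a lower-level module not yet integrated."""
    return gross * 0.2  # canned response

def net_salary(gross, tax_service):
    """Module under test: depends on a tax service it calls through an interface."""
    return gross - tax_service(gross)

def driver():
    """Driver: stands in for the higher-level caller, feeding inputs and checking outputs."""
    assert net_salary(1000, tax_service_stub) == 800
    assert net_salary(0, tax_service_stub) == 0

driver()
```

As James notes in his reply, once the real modules are integrated, the interface the stub imitated may be redesigned or disappear entirely, so these checks are provisional.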
The problems you seem to be raising arise because you (like everyone else) lack precise notions of “system”, “module”, and “dependency” in this (very general) context, and so are simply focusing on the resulting vagueness of “integration”.
[James’ Reply: You are describing a process called learning. In fact, I do have precise enough ideas about these matters; more precise than I have yet revealed. I got to them by going through the process of inquiry that you see in this post.]
In most situations humans are willing to wing it anyway because it works for some definition of “good enough”. If not, the focus will be narrowed to the context at hand and more precision can be applied as warranted.
[James’ Reply: Yes, most humans do that. And most humans are not very good at their jobs. But I am an expert in testing, so more is expected of me. And if you aspire to be an expert, more will be expected of you, too. This post is for people who have such aspirations.]
But it will not be a grand unifying theory applicable everywhere with full precision. Words are used for communication and are purposefully vague, and pushing too hard on them will result in Swiss cheese.
[James’ Reply: That’s like cautioning a driver who begins to accelerate that he will never be able to exceed the speed of light. There is a lot of room for improvement, yet!
The secret to my success is that I examine language and my own understandings so that I may proceed assertively, with confidence, and win all the arguments that happen along the way. It’s okay if you don’t want to do this. Maybe my blog is not for you.]
“In fact, I do have precise enough ideas about these matters; more precise than I have yet revealed” … “Yes, most humans do that”
You’re doing it too, right in this thread! E.g., your “definition” of system in an earlier comment is exactly what I meant by winging it.
[James’ Reply: I don’t see how that is winging it. My definition of system works pretty well. Are you confusing informality with shallowness?]
BTW the examples of aliens, pulsars, etc. demonstrate whatever secret definition you have in your head is also pretty lax.
[James’ Reply: How so? Do you know what a pulsar is? Do you know how it works and what role it played in my example? If so, what problem do you see, here?]
“That’s like cautioning a driver who begins to accelerate that he will never be able to exceed the speed of light. There is a lot of room for improvement, yet!”
I did not say there is no avenue for improvement, in fact quite the opposite in my second paragraph! Please read carefully, this is exactly the same type of mistake I was originally describing!
[James’ Reply: I think I am reading carefully. And here you are conflating “avenue” with “room” and then saying I am the one being sloppy.
What I am saying is that you say you are worried about pushing words into Swiss cheese– a confusing analogy, but one I’m going to guess means “pushing for specificity and clarity beyond what is reasonable”– and my reply is yes, that is a potential problem, but we are nowhere close to doing that. So, relax. Meanwhile, stop whining and deal with the problem. Are you trying to argue that we humans “just know” what integration means? Well, no, I see no evidence that we testers know this well enough.]
“Maybe my blog is not for you.”
Your egotistical proclamations notwithstanding, I was just trying to help you avoid an unfruitful path.
[James’ Reply: Oh, am I being egotistical? I always wonder why people bring that up, since it requires quite a lot of self-obsession to worry about things like that. Here we are, on my blog. You are responding to me. And you think I am wrong to feel that this is my party? Egotistical or not, I am accountable for everything I say, here… and you are anonymous.
I’m reminding you that you may be in a conversation that isn’t a fit for you. But I guarantee you that my blog is a fit for me!
Not only is the path I am on fruitful, it is the key to my success. It’s why people know who I am and hire me. If you really think it is not fruitful, you will have to do more than offer a couple of vague cynical paragraphs to carry your point.]
I see you have no interest in discussing the topic seriously and are taking every criticism personally, so I’ll dip out now.
[James’ Reply: Whoever MJ was, he’s gone. And he should go, because people need to know their limits…
For the record, I asked MJ a series of real questions which he decided not to answer. I don’t know why he decided that I was not serious, but it seems he was not serious enough to bother to explain himself.
This is what it means to have a debate: your assertions get questioned and if you want to be taken seriously then you can’t whine about ego and run away. You have to deal with the questions. MJ questioned me and I responded. I have responded to almost every comment made here. I respond most of the time, even if I’m annoyed, because it is my responsibility to do so.]
Augusto Evangelisti says
I’d like to look at this from a different perspective. As an ex developer I have written units that integrate with other units. When the other units were written by me, I had less integration issues than when the other units were written by a team mate and these were less than the ones I discovered when integrating with something I couldn’t talk to directly.
I believe that more distance between the creators of parts that integrate creates more issues. It is obviously not a matter of kilometres, but a matter of distance in understanding. Shared understanding/knowledge is what can reduce integration problems. Assumptions about the other units are made when integrating, and integration testing challenges such assumptions.
An integration issue is often a missed conversation.
[James’ Reply: Yes, that is one of the challenges with integration.
Cognitive distance is reduced by acquiring shared understanding, but this also reduces critical distance, which is necessary for successful testing. One of the challenges of being a tester is to manage the various kinds of distance that harm or help our work.]
Augusto Evangelisti says
Thanks for your answer, James. I am not completely bought into the necessity of critical distance, or at least into the idea that we cannot mitigate against it efficiently before we test. Could you give me an example where this becomes an insurmountable issue, or a link to some reading on the subject?
[James’ Reply: Read literally anything about critical thinking. But a particularly good book about this is Thinking Fast and Slow, by Kahneman.
See also Proofs and Refutations, by Lakatos, and Conjectures and Refutations, by Popper. See especially Popper’s writings on the “conspiracy theory of ignorance.”
A keyword to search for is “de-biasing,” too.
An example of how critical distance is necessary is Donald Trump. Look carefully at his views and how he expresses them. Do you think his utterances and decisions would be wiser or more foolish if he were to surround himself with people who don’t automatically affirm whatever he already believes?
Harmony is a heuristic, and so is critical distance. But critical distance is so important that I will sometimes argue with people whom I think are fools, just in case they say something useful, though Michael Bolton does a lot more of that than I do.]
Franz R says
I guess there are many perspectives from which you can look on “integration”.
And I like to look on it in a more abstract way.
Generally speaking, I would understand “integration” as adding something new – let’s call it B – into something existing – let’s call it A. After a successful “integration”, B will form a union with A – let’s call the union A’. The new A’ will appear to someone from outside like A did before; only if you know what to look for can you now see that A’ offers more “variety” than A did before.
[James’ Reply: That’s true for one form of integration. But we can also imagine encapsulating A using B, in which case A may not be visible from the outside.]
So what does this mean for the “integration” of B? I would say it means that B will be added into an established environment that (hopefully) works. It will of course bring something new into A, but B has to conform to the rules and standards of the environment.
A’ will now form a new environment with the union of A and B that can also have new standards and rules.
Hope I did not become too philosophical now 🙂
[James’ Reply: I’d like you to become more philosophical and general than this. You are talking about one scenario of integration, but could you speak of all of them at once? What is essential about integration?]
I’m doing some integration testing right now. We just completed a web service that calculates the salary of an employee based on some inputs from a database, and a web request. We’ve built the service using a data contract, that is, a very well defined set of data object definitions that will come from the database and web request.
We have unit tests that mock those data objects, running various checks against expected input, and we can also execute tests against a standalone version of our service using a tool we created.
As far as we are concerned, the service works according to the contracts provided, but guess what? When we deploy the service the mechanism that we use to mock the data objects generates an object that is different from that created by the database which means when we run the service a serializer exception is thrown telling us there is a problem working with the real data object.
We fix our mock, run some tests to verify we have solved the problem and redeploy.
We are then hit by another problem. The data object from the web request, which is also consumed by other systems, has changed and our service throws another serializer exception.
That’s annoying, but good job we chose to test the service in this manner before deploying to production.
To me, that is at least one type of integration test. Testing in isolation, only proves that our output might be correct, but not how we behave on our interface.
[James’ Reply: What a beautiful example.]
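The mock-drift failure described above can be sketched (field names invented; the shape of the problem is language-neutral): the service is written and unit-tested against a hand-made mock, and the mismatch with the real data object surfaces only on deployment.

```python
import json

def parse_employee_record(payload):
    """Service-side deserializer, written and unit-tested against the mock below."""
    data = json.loads(payload)
    if "salary" not in data:
        raise ValueError("serializer error: missing field 'salary'")
    return {"id": data["id"], "salary": data["salary"]}

# The mocked data object the unit tests used.
mock_payload = json.dumps({"id": 42, "salary": 50000})

# The object the real database actually emits -- the mock drifted from it.
real_payload = json.dumps({"id": 42, "base_salary": 50000})

parse_employee_record(mock_payload)       # unit tests pass against the mock
try:
    parse_employee_record(real_payload)   # blows up only when the real B arrives
except ValueError as e:
    print(e)                              # serializer error: missing field 'salary'
```

The checks against the mock were fine as far as they went; what they could not check was whether the mock itself matched the real collaborator.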
Integration means to me bringing together components that have been developed in isolation, possibly having previously used mocks/stubs to represent the ‘outsider’ component.
[James’ Reply: Is isolation important? If there were no isolation, wouldn’t it still be integration to put them together? Seems to me that isolation (involving a barrier of some kind that discouraged the transfer of information) makes integration problems more likely, though.]
I feel loath to give an example but I will. I currently test utilities trading platforms. After each trade is completed the details are sent to the parties post trade system to be reconciled. In early development & testing for one customer the post trade component was mocked because we did not yet have access to their test environment.
All tests eventually passed internally, but when we had access to a real-world system, all manner of problems ensued in getting the systems to work together, even though our component had been developed to a strict specification. These problems generally fell into the same categories of problems seen when testing a non-integrated system.
With that in mind: we know from experience that components/systems that do not require integration will have bugs. To mitigate risk, as testers we should then also assume that a system requiring integration will have bugs.
I believe it is this innate distrust of all software that has made me a diligent tester over the years.
James Huggett says
Fun exercise, here’s my attempt at understanding:
System 1 and System 2 exist independently*.
They both have interfaces that allow them to communicate with another system.
System 1 communicates with System 2.
We now have System 3, which consists of Systems 1 and 2 in communication.
[James’ Reply: This is one form of integration. Another form could be the disintegration, re-arrangement, and re-integration of both systems into a new system. In such a case, there is no longer a System 1 or 2, but all of their former functions are integrated into a brand new System 3.]
(Pedantic note: System 3 (as with S1 or S2) is embedded in a larger system of the world, S4, which has infinite complexity. S4 previously had S1 and S2 as subsystems but now has the *added complexity* of S1 and S2 in communication.)
The interfaces to S1 and S2 are only used *when communicating with the other system* (or other systems which are able to communicate with them)
Thus, the possible inputs into each other are ‘hidden’ in another system, so to speak. Inputs which are sent or received by the interface will then be consumed by the internals of each system. So, points of ‘failure’ or unpredicted behavior can be on the interface of the system, or within its internals.
[James’ Reply: Yes, and also in possible conflict that originates in S4, which S1 and S2 both depend upon in possibly conflicting ways.]
With a computer system (assuming testing of all possible inputs in all states is impossible, as it usually is), then to test S1 you *have* to connect it to a system of similar or the same complexity as S2 – and in that case, why not use S2, as it will be of matching complexity (rather than complexity biased in some direction which is less meaningful to the system designers/testers).
[James’ Reply: Ah, this is an argument I don’t think anyone else has clearly stated in the comments, so far. Thank you. It’s an economic argument: integration testing may be the least expensive way to achieve goals that could IN PRINCIPLE be fulfilled by unit testing but may be prohibitively difficult to achieve in that context.]
Note: The problem with test ‘stub frameworks’/harnesses is that they act as a stand-in for, in this example, S2, but their sampling of inputs is going to be biased/limited in some way, and may also ignore aspects of S3 which have a large impact on the meaning and impact of system behavior.
Put another way, S3 can only exist *when* S1 is connected to S2. If you want to explore or understand S3 as much as you can, connecting S1 to a less complex system, will (possibly?) not be sufficient.
The interface on S1/S2 may have a certain ‘standard’ it (supposedly) conforms to, but that assumes the standard can predict how a System’s internals handle *any* type of input (or sequence/timing etc), which would be the same as assuming the standard is omniscient: impossible.
We could perhaps say; the type and timing of input will be within certain classes due to the behavior of the interface (a type of filter perhaps), but the job of discovering *the total behavior of the system* (S3) has not been done, and seems to ignore the possibility of emergent or undiscovered behaviors.
[James’ Reply: There is also an opposite implication related to what you said, above, and that is testing in an integrated context may SIMPLIFY what we need to do, because perhaps only a small part of what a unit is capable of will be exercised by the full system. Imagine using PRINTF in a program to do a particular thing– I actually don’t care if PRINTF does ALL the things that it is supposed to do, only that it does what my program needs. In this way, integration is a form of code slicing that may focus us better on what is needed.]
So, testing S3 in this example could be called ‘integration testing’, as S3 is an integration of S1 and S2 – and it *would* need both S1 and S2, at least theoretically. The behavior of S3 is *implicit* in the design of S1 and S2, in a way – so even isolated, there is a range of potential behaviors that can occur *only* when connected to the other system. An interface that, say, accepts 1 or 0 as a single input can communicate with at least 3 other ‘systems’: one that only uses 1s, one that only uses 0s, and one that uses 1s or 0s (then there are nulls, bad voltage, etc.), even if it’s only supposed to communicate with one that uses both 1 and 0.
Lightly and tightly integrated could be a determination of how much influence one system has over another’s behavior, in other words, the extra complexity gained by joining the systems together.
*No Systems are ever completely isolated: I’d argue there is no true black box in the universe and any system is inherently ‘leaky’. One could say: if a system has an interface it is automatically then part of a wider infinite system, and not fully knowable or predictable.
James Huggett says
[James: There is also an opposite implication related to what you said, above, and that is testing in an integrated context may SIMPLIFY what we need to do, because perhaps only a small part of what a unit is capable of will be exercised by the full system. Imagine using PRINTF in a program to do a particular thing– I actually don’t care if PRINTF does ALL the things that it is supposed to do, only that it does what my program needs. In this way, integration is a form of code slicing that may focus us better on what is needed.]
A concrete example might be a client app that “only uses 4 of the possible 20 API methods”.
‘only’ is possibly a Gerald Weinberg “lullaby word” 🙂 so I’d be interested in whether the app actually only used 4 of them (for now).
In this example, though, my client app would be a poor ‘integration test’ of the server app from their perspective, because its complexity is such that it’s only using a portion of their system, whereas their server system is more than complex enough to act as an effective testing tool for my client app.
For big complex systems then, possibly there’s a problem of finding another system complex enough to test it – and maybe this is why people are so important as testers as we are very complex life systems!
[James’ Reply: That is Ashby’s Law of Requisite Variety at work. My first screw-up as a test manager was when my team tested a linker by using a C compiler to generate object files to be linked. What we didn’t realize until later was that the C compiler only produced a small subset of the full variety of object files capable of being linked. We thought we had a nice functioning linker until we started in on assembler testing and found a whole lot of linker bugs.]
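The ‘4 of the possible 20 API methods’ point, and the linker story, share a shape that can be sketched (all names invented): the integrated slice works for this client, while latent bugs in the unexercised methods stay hidden until a richer caller arrives.

```python
class Server:
    """Hypothetical server: many methods, but this client uses only one of them."""
    def get(self, key):
        return {"a": 1, "b": 2}.get(key)
    def put(self, key, value):
        raise NotImplementedError("latent bug: never exercised by this client")
    def delete(self, key):
        raise NotImplementedError("latent bug: never exercised by this client")

class Client:
    """Uses only Server.get -- the 'slice' this integration actually needs."""
    def __init__(self, server):
        self.server = server
    def lookup(self, key):
        return self.server.get(key)

client = Client(Server())
assert client.lookup("a") == 1  # the integrated slice works...
# ...while put/delete are broken and, like the linker bugs, stay hidden
# until some caller with more requisite variety exercises them.
```

For the client, this narrowing is a genuine simplification; for the server’s testers, the client is an inadequate test tool, exactly as with the C compiler and the linker.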
James Huggett says
I was thinking about Requisite Variety as I mulled this thread before seeing your comment! 🙂
I also wonder if Varela/Maturana’s concept of autopoiesis is relevant here (not that I claim to fully understand it!) and what some of the implications are… linked systems in expanding co-emergence. The computing conundrum: I can test a system with great care, then present my findings and modelling. Then the system under test changes (with fixes, etc.), so the previous findings may not be valid any more. I changed the system. Now what to do? Where does this end? If I test again, this will mean more changes (or if I program a new feature, this means more feature requests). Maybe the answer is that it does not end unless stopped by resource deprivation (money, people, CPU/storage) or very conscious stabilization.
[James’ Reply: Reviewing my copy of the Tree of Life… Autopoiesis means self-regenerating, so I don’t think that quite fits, unless you include the developer as part of the product under test. But part of that concept is ontogeny: which is the history of changes that preserve the integrity of the system in question. Ontogeny is an ongoing challenge for testers, because we must evaluate not just the product, but also guess at the changes in our knowledge of the product that come as a consequence of change.
Complete testing is not possible, in any case, not just because the product changes. So we rely on the heuristic that the detectability of a problem is generally proportional to its importance. In other words, a problem that successfully hides from us is probably not very important. (This is completely untrue for certain classes of problems, though, such as security.)]
James Huggett says
Another exploration of this subject:
System A communicates with System B through an API.
The development teams for both systems, and integration team, are happy that everything is working “as designed” and all known bugs have been fixed. Performance testing shows that System B can handle expected traffic from System A. System A and B are on the same very reliable enterprise network.
New versions of both systems are rolled out into production.
Someone forgets to change an authorization key on System B.
All System A’s calls to System B are de-authorized.
System A has an aggressive retry policy and all its instances start hammering System B.
System B is overwhelmed and its other functions (not just the API) are compromised.
System A has no way to abort its retry behavior and has to be shut down to allow System B to recover.
System A has some modules that do not handle terminating with active sessions correctly, and some Bad Things happen like data loss.
System A and B are non-functional for several hours, there has been data loss, and it has cost the company A Lot of Money. No one is very happy about this.
In this story, it’s easy to blame the sysadmin who was forgetful. The integration test/development team would perhaps understandably be frustrated and reluctant to blame their own work. However, the damaging behaviors are caused by both systems in integration, in an ‘unusual’ stress condition, with bugs in older modules coming to light that previously were not known about, or not seen as a risk. If the systems had been developed differently, the impact of the sysadmin’s mistake would have been less. The ‘meltdown’ conditions were seen as possible from an external source, but it had not been considered that the attack could come from one of their own systems inadvertently. There had been stress testing – but not of this exact scenario. They were more concerned that the system was scaled appropriately for normal usage. It was understood that System B, at least, would fail under enough stress.
(Moral: integrated systems can attack each other, even if they are both supposed to be friendly.)
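The retry behavior at the heart of the story can be sketched (numbers and names illustrative only): the aggressive policy turns one forgotten key into a self-inflicted denial of service, where a bounded policy with backoff would simply have aborted.

```python
def call_b(authorized):
    """Stand-in for one of System A's calls to System B."""
    return authorized  # False while the authorization key is wrong

def aggressive_retry(max_probes=10_000):
    """No backoff, no abort: every failure is immediately another hit on B."""
    hits = 0
    while hits < max_probes:
        hits += 1
        if call_b(authorized=False):
            break
    return hits  # B absorbs every one of these requests

def bounded_retry(max_attempts=5, base=0.5, cap=30.0):
    """Exponential backoff with a hard abort instead of an operator shutdown."""
    delays = [min(cap, base * 2 ** i) for i in range(max_attempts)]
    for attempt, delay in enumerate(delays, start=1):
        if call_b(authorized=False):
            return attempt
        # real code would time.sleep(delay) here before the next attempt
    raise RuntimeError("repeated auth failures; aborting rather than hammering B")
```

Neither team’s testing exercised this path, because each system was benign in isolation; the attack only exists in the integration.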
So… ‘so far proven’ benign behaviors (that are present when a system is de-integrated or standalone) can change in significance into malignant ones when integrated. This can happen in the reverse way: integrated systems that are *dis-integrated* can have ripple effects throughout the entire system – *not just at the interface between the two*.
So could (dis)integration testing (as distinguished from unit level testing) be the attempt to discover the important *changed significance* of behaviors of systems when they are either more closely integrated, or dis-integrated?
An example of disintegration testing would be “We removed all hooks to our analytics service because our users don’t like being spied on, can you check this hasn’t affected anything?”
I think this also speaks to why dismissing certain bugs or behaviors as “edge cases” can be foolhardy: it can indicate a certain weakness or inconsistency of design along the edges of a system or module – one that can re-emerge later (perhaps during integration/disintegration with a new system) as suddenly “not an edge case at all” or “we need to completely rewrite this module, it’s a mess”.
Information provided by (dis)integration testing potentially has an impact on the design of both systems (like any testing). This is probably information you want sooner in the design/development timeline than just at the end, when you are about to hook them up.
[James’ Reply: Cool! I hadn’t thought about this before, but dis-integration testing is also a form of integration testing.]
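The analytics-removal example invites a sketch of a dis-integration ripple (the hidden side effect here is invented for illustration): removing the hooks leaves checkout ‘working’, while something else that quietly depended on them breaks, far from the removed interface.

```python
class Analytics:
    """The service being removed. Its track() call quietly feeds the audit log."""
    def track(self, event, audit_log):
        audit_log.append(event)  # side effect another team depends on

def checkout(cart, analytics, audit_log):
    total = sum(cart)
    if analytics is not None:  # hooks removed => analytics is None
        analytics.track(f"checkout:{total}", audit_log)
    return total

log = []
assert checkout([10, 5], Analytics(), log) == 15
assert log == ["checkout:15"]   # integrated: the audit trail is fed

log = []
assert checkout([10, 5], None, log) == 15  # dis-integrated: still "works"...
assert log == []                           # ...but the audit trail is silently gone
```

A check limited to the removed interface would pass; the changed significance of behavior shows up only somewhere else in the whole system.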
Jeff Nyman says
Interesting post and “integration testing” is one I’ve been attempting to come to grips with, particularly with the focus some make on separating out “integration” tests from “integrated” tests.
[James’ Reply: I would say a test cannot be integrated, since a test is an event involving a human tester who is not a part of the product. An output check can be integrated, though. A tester designs those checks and a tester reviews their results.]
You ask the following: “Lots of things in our world are slightly integrated. Some things are very integrated. This seems intuitively obvious, but what exactly is that difference?”
I don’t know and I’m eager to explore these ideas. I’m considering a system that is integrated (in some way). Let’s say A and B.
A –> B
B <– A
Here A and B could be considered collaborators of each other. For me, this means they have a contract with each other. That contract is based on interactions, mediated by data. (Maybe. I'm off-the-cuffing it here.) If there is no contract, they are not collaborating and thus are not integrated in any way.
[James’ Reply: But can’t things collaborate without a contract? Isn’t that what symbiosis is, in biology? The ants haven’t “agreed” to protect a tree, and the tree hasn’t “agreed” to be a home for ants, but each happens to benefit from the other.
If a neighborhood is nice because one person in that neighborhood decides on his own to pick up all the trash, and the reason he is willing to do that is that people only drop “good” trash and not “bad” trash such as medical waste– then the people in the neighborhood are integrated and collaborating in a way that they may not realize, and this integration can fall apart unexpectedly, causing surprise to all.
So, I see your example as one kind of integration… Intentional integration, let’s say.]
So when I consider integration, questions become operational to me. I can imagine A asking: "Does B even try to answer my question?", "Do I ask the question correctly of B?", "Does B give me a response I can use?" B, likewise, can ask: "Am I able to receive a question I should be able to answer?", "Am I able to return a valid response to a correctly formed question?"
None of these questions would be operational in this context if there were no integration.
[James’ Reply: But can’t you do unit testing that answers those questions? For instance, when I plug my phone charger into the wall I don’t worry that it won’t work in THAT particular outlet. I don’t need to perform integration testing there do I?]
(Whether A and B are "slightly" or "very" integrated isn't answered by this at all, I realize.) The reason I'm even thinking this way is because it seems to match how I would frame testing in an "integration context". For example:
A asks a question that B can answer.
A asks a question that B can't answer.
A misses checking for a response that B might give.
A checks for a response that B can't give.
As you can probably tell, my thoughts are still percolating.
[James’ Reply: Good start. What about mutual dependency on a third component?]
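Jeff’s operational questions can be sketched as checks (the shapes and names here are invented stand-ins): the first two cases pass, and the third kind of miss surfaces only when A meets a response shape it never checks for.

```python
def b_answer(question):
    """Stand-in for B: answers only questions it knows, with a known error shape."""
    known = {"salary?": 50000}
    if question in known:
        return {"value": known[question]}
    return {"error": "unknown question"}

def a_consume(response):
    """Stand-in for A: the response shapes A actually checks for."""
    if "value" in response:
        return response["value"]
    if "error" in response:
        return None  # A handles B's known error shape
    raise ValueError("response shape A never checks for")

# 1. A asks a question B can answer.
assert a_consume(b_answer("salary?")) == 50000
# 2. A asks a question B can't answer -- B's error shape, which A handles.
assert a_consume(b_answer("age?")) is None
# 3. A misses checking for a response B might (someday) give.
try:
    a_consume({"warning": "deprecated"})
except ValueError:
    pass  # this miss surfaces as a failure only in integration
```

James’s follow-up question still applies: none of this covers a mutual dependency of A and B on some third component.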
Jeff Nyman says
[James: But can’t things collaborate without a contract? Isn’t that what symbiosis is, in biology? The ants haven’t “agreed” to protect a tree, and the tree hasn’t “agreed” to be a home for ants, but each happens to benefit from the other.]
Good point. But is symbiosis an “implied contract”? Well, no, that wording doesn’t work so well. How about an “emergent contract”? That might be the other side of the intentional integration idea.
[James: My point is that a system may function as a result of an unintentional and possibly ephemeral arrangement. As a tester, I would want to learn about those things, not just about the intended bit.]
You bring up an interesting point. Symbiosis is the interaction between two different things, like living organisms. Some people like to use the term “interaction testing”. So this also makes me wonder about the distinction between interaction and integration when applied to testing. And I bring that up simply because I think a lot of discussions about integration testing end up bringing up “collaboration tests” and “contract tests”. And it’s often not clear to me where the boundaries lie. I find developers driving a lot of these discussions and distinctions, and I want testers to have better input into that.
[James’ Reply: Recall the difference between claims testing and function testing. In claims testing we test explicitly against what we are told the product can do, whereas in function testing we test what it CAN do, regardless of whether it’s supposed to do that. I don’t want to be limited to testing intentional aspects of the software, or else I will miss certain kinds of bugs.]
[James’ Reply: But can’t you do unit testing that answers those questions? For instance, when I plug my phone charger into the wall I don’t worry that it won’t work in THAT particular outlet. I don’t need to perform integration testing there do I?]
Hmm. I don’t think you could do that testing via so-called “unit testing” — unless you broaden out your definition of what a “unit” means. Which is yet another area that I think suffers from a lack of operational language. What actually is a “unit”? If a unit test is confined to B (such as methods in B), I’m not checking anything on A at all. Even if I’m mocking A because some part of B requires some response, I still haven’t checked the integration because I haven’t integrated anything.
[James’ Reply: Yes, but my point is that, though you may have done integration testing with a stand-in of B (another outlet), you haven’t done it with the real B (this outlet).
What I would reply in this situation is that we have a reasonable confidence that this B is like that other B… But to your point about whether this could be done with unit testing, well, sure it can. We can test the power supply separately and design our product to accept input from any compatible power supply. You probably have multiple phone chargers lying around your house, right?
A unit in essence is any system considered outside the context of its natural environment. If you test a video card within a harness of some kind and not the motherboard for which it is designed, that would be unit testing. Same if you are thinking of a sub-routine. By convention, unit testing normally refers to objects in the OO programming sense.]
This is a really interesting discussion. This happens to be coming to me just as I was reading up on how J.B. Rainsberger was trying to clear up his “integrated tests are a scam” issue (http://blog.thecodewhisperer.com/2015/12/08/clearing-up-the-integrated-tests-scam/) based on a confusion of, in his words, the distinction between “integration tests” and “integrated tests”.
[James’ Reply: Since Rainsberger is mis-using the word test, I had trouble at first understanding what he was trying to say. He is speaking of artifacts, rather than the human process of testing. Therefore I think he is talking about checks, not tests. I don’t know where he thinks the testing is actually happening in his practice because his language is obscuring it. The reason I distinguish between checking and testing is that every single time I use the word “test” I want people to focus on the person performing the test and what that person is thinking and doing.
I’m going to guess that by integrated test he means a programmer testing using a machine-based output check that exists on a sub-system level or higher. He probably means the equivalent of “a complicated end-to-end check performed by a programmer who wrote code to do it.” Seems to me that such checking can be quite useful, but it looks like he is concerned about missing opportunities for careful testing using lower level checks, as well as maintenance cost. Except maintenance cost is only an issue if you decide to preserve your checks. If our system is testable enough, we might construct throwaway checks instead in the course of informal testing. Then again, I am speaking as an independent tester, and maybe he’s recommending against the developer doing that stuff.]
What I’m finding is that I don’t have effective thinking in place for this yet, which means I lack the effective vocabulary to articulate anything as well. I’m striving to figure out good ways to think about this.
Michael M. Butler says
“Good checking always implies testing.” Umm, I find this phrasing a tiny bit odd. Good /use/ of checking always implies testing. Both in developing and in looking at results. Yes? No?
[James’ Reply: Checking implies use, doesn’t it? How can you do checking and not be using checking? (That is not a rhetorical question… let me know if you see a way.) Since checking is generated by testing and serves testing, to have a GOOD check you must have wrapped it in testing that made it good and makes it good.
A check might not be wrapped in testing. I don’t see how that could be called good checking.]
“I am not trying to construct a common language, but rather a useful and practical language.” Different context, I know, but the check/test distinction is part of what I hope will become much more common language (and thought), at least. 🙂
[James’ Reply: I want everyone to use my language. If so, it will be the common language. I’m not holding my breath, though.]
Jeff Nyman says
So, as I think more on this, a key component of the integration testing problem is to think up specific dependencies and then how those dependencies may fail to work with each other. So if someone asked me what the “integration risk” was, I would say it’s based on specifics of those dependencies.
[James’ Reply: I think there are two definitions of integration testing, one of them is testing that is focused on exploring integration risk. Dependencies are the major source of integration risk, I would think.]
But what specifics? Well, all of this presumes some communication between the dependencies (i.e., components). If these components are totally isolated islands unto themselves that never send or receive, then it’s questionable what they do anyway. They can certainly be tested as components in isolation, but if they never communicate with anything, they are always isolated.
[James’ Reply: Is there a difference between communication and interaction? I think so. Don’t be too mesmerized by communication as such and forget about other kinds of influences. Also, don’t forget indirection. Two things may communicate directly, or indirectly via a third component, or they may not communicate at all, yet influence each other via something like performance variations (component A slows down component B indirectly, causing a time-out in component C that brings proceedings to a halt.)]
So let’s assume some communication is possible, which means even if the components work in isolation (i.e., doing calculations or whatever they do), they may fail when put together with other components. Again, assuming communication, this means these components may receive, but never send. These components may send, but never receive. Or these components may send and receive.
So one of the questions that seems to come up is: why can’t I just understand the system as a whole by understanding each component? And that leads to: why can’t I just test each component and then have confidence that the system as a whole works? And that leads to: why not just do all this at the “unit level” instead of doing so-called “integration testing”?
Okay, so, let’s see here. Given two objects, A and B, A and B must share information with each other. To do so, they must communicate. And they do so by sending messages to each other or to an intermediary that communicates for and with them.
These components A and B can communicate appropriately. What does that mean? It means they send the correct messages, accept the correct messages, do not accept (or gracefully handle) incorrect messages.
These components can communicate inappropriately. This means they could send the wrong message, send the right message but at the wrong time, accept the wrong message, accept the right message but at the wrong time, don’t send any message at all (when they should), do send a message (when they should not).
There can be performance factors that affect the communication. The intervening medium (a network, for example) may not allow A and B to communicate effectively (messages get truncated or mangled; messages get rerouted and take more time than planned). The network is part of the platform, but how A and B handle what comes to them over that platform is on them.
Ah! But wait! Let’s say B sends a message to A but network latency causes the message to arrive late, which means A should reject the message. Do I need to integrate A and B to test that A does this correctly? No, not necessarily, because I could send in a message to A that is late and see how A handles it. That would be a “unit” test.
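That "late message" unit check might be sketched like this (the timeout value, message shape, and function name are all hypothetical; nothing here comes from a real system in the post):

```python
import time

# Hypothetical receiver-side rule: A rejects messages older than a timeout.
# Whether 2.0 seconds is the RIGHT value is exactly the tuning question
# James raises below -- this check can't answer that by itself.
TIMEOUT_SECONDS = 2.0

def handle_message(message, now=None):
    """Return 'accepted' or 'rejected' based on the message's age."""
    now = now if now is not None else time.time()
    age = now - message["sent_at"]
    return "rejected" if age > TIMEOUT_SECONDS else "accepted"

# The "unit" test: fabricate a late message instead of integrating with B.
late = {"sent_at": 0.0}
assert handle_message(late, now=10.0) == "rejected"

fresh = {"sent_at": 9.5}
assert handle_message(fresh, now=10.0) == "accepted"
```

This confirms A's timeout logic in isolation, but says nothing about how often real-world latency would actually trip it.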
[James’ Reply: I see you on a useful track, here. But my question is, how do you anticipate all these eventualities? Yes, you can test whether A times-out properly, but can you predict how often a time-out will occur? Maybe you need to adjust the time-out period? What SHOULD it be set to? We may need to tune the system based on real-world delays. Maybe the least expensive source of information on real-world delays is real-world testing.]
An intervening component may alter the communication between A and B, including negating it entirely. For example, maybe a cache between the components provides a response to B that A would have had to send; but in this case, A was never communicated with. Well, A wasn’t communicated with directly by B. But perhaps component C — the cache — had to communicate with A to determine if what it had in cache was up to date.
Could I test this at the unit level? Well, I could trip a flag in B that says “message was received from C and not from A” and make sure B processes the message just as it would if the message was sent directly from A. That would tell me that B could respond to that message. But it wouldn’t tell me that C actually sent the message or that C realized it should send the message based on what it got from A and that all of this worked as part of an act of distributed communication.
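A minimal sketch of that flag-trip check, and of its limits (the `process` function and its fields are invented for illustration):

```python
# Hypothetical: B should process a message identically whether it arrived
# from A directly or from the cache C.
def process(message, source):
    # B's handling should not depend on which component delivered it.
    return {"handled": message["payload"], "via": source}

direct = process({"payload": 42}, source="A")
cached = process({"payload": 42}, source="C")

# The unit-level check: same handling regardless of source...
assert direct["handled"] == cached["handled"]
# ...but nothing here verifies that C actually sent the message, or that C
# knew to consult A about whether its cached copy was up to date.
```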
Now I could do individual tests at the unit level of each component. For example, I could test that C sends a message given the right conditions by stubbing the conditions and sending to a mock. I could test that if Mock-A were to send a particular message, C would send that message to Mock-B.
There is also potential integration to consider. Not the best wording perhaps, but what I mean is that A and B may have numerous ways to communicate information. But perhaps in a given context, only one bit of that is being used. So let’s say B asks A about [X]. A has that information on hand. But if B asks A about [Y], that information must come from a database. That database may be in our control (part of integration) or not in our control (part of platform). But the database, in either case, is getting A the information so that A can send it to B.
Could I unit test that? Well, I could certainly stub the response from the database so that A would just send the info to B as if it made a database call. But, then, by that logic I could also just mock A and have B respond to a message that Mock-A sent. So I have Stub-D + Mock-A talking to Real-B. I could have other tests that use Stub-D and Mock-B with a Real-A.
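The Stub-D + Real-A arrangement might look like this with `unittest.mock` (the component classes, topics, and return values are hypothetical stand-ins for the [X]/[Y] example above):

```python
from unittest import mock

# Hypothetical: A answers [X] from local state but must fetch [Y] from a
# database D before it can send anything to B.
class A:
    def __init__(self, db):
        self.db = db

    def answer(self, topic):
        if topic == "Y":
            return self.db.lookup(topic)  # must come from the database
        return "local-X"                  # [X] is on hand

# Stub-D: a canned database response, so Real-A can be exercised alone.
stub_d = mock.Mock()
stub_d.lookup.return_value = "db-Y"

real_a = A(stub_d)
assert real_a.answer("Y") == "db-Y"
assert real_a.answer("X") == "local-X"

# Stub-D was consulted exactly once, for [Y] only.
stub_d.lookup.assert_called_once_with("Y")
```

Each such combination (Stub-D + Mock-A + Real-B, Stub-D + Mock-B + Real-A, …) checks one component's side of the conversation, which is precisely why none of them exercises the actual communication path.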
What it keeps coming to is that I could test each component in isolation (i.e., various unit tests) but I’m not testing the actual communication that may utilize different platforms (libraries, virtual machines, APIs, networks). Some of those may be in our control and some not. There may be combinations. I might utilize the Google Geocoder API and then have that sync up with our own Relative Location API.
This is probably a horrible analogy, but into my head popped the idea of checking the effectiveness and efficiency of the post office in getting a letter to me. I could write my own letter and put it in an envelope and stick it in my own mailbox and say: “There! Post office delivery works.” I could even write my address really sloppily on the envelope but assume the mail sorter and/or driver had good eyesight and got the mail to my correct mailbox. “There! Post office delivery works.” I could put no address on the envelope and NOT stick it in my mailbox. And then say: “Perfect. If the post office doesn’t have an address, they can’t deliver.” But I haven’t really checked much here. I’ve mocked the interacting components.
I haven’t put into place the efficiency of the delivery because I’ve abstracted away the human being, the mail truck, the road conditions, the traffic, etc. I haven’t really tested the effectiveness because I haven’t put the system as a whole under load. I’ve abstracted that away. I certainly haven’t tested any security (i.e., some package requires my signature).
Thoughts still percolating …
[James’ Reply: It seems to be percolating very nicely, Jeff. This is helping me with my next post.
One implication from what you’ve written: it seems that lots of things might be testable in isolation, but many of those things might be considerably MORE testable in IMPORTANT ways if they were not isolated.]
Amit vyas says
It is like the Sun and the human body. We need sun rays that trigger hormones in our body, which in turn create another set of hormones/vitamins that in turn help in the processing/management of the human body.
The Sun here is the source, the sun rays are the XMLs, the triggering agent for hormone creation in the human body is the listener placed in the skin cells, and the newly created hormone is the output of the process that triggers another set of operations in the body.
These so-called XMLs act differently for different creatures (systems): for a human body the processes are one way, for an animal body they might be different, and the further operations that take place differ in the same manner.
Amit vyas says
Now when God created this integration he must have thought of the following :
– what is the requirement of the human body
– how much is required
– where can this requirement be fulfilled from
– can the fulfilling agent also fulfill similar requirements of other systems(animals, plants and other creatures)
– how much intake capacity do the listeners hold
– what happens when this capacity gets breached
– what happens in lack of it
– can this also be procured from sources other than the Sun
When I think I am running an integration test, I want to explore the ways in which the components and interfaces that comprise the system interact with each other, and also with the user, who wants some kind of output to appear, whatever that is.
I want to explore things like states: what happens when one application is in the middle of an ETL process with another system and then a user creates data? Will it process it? Will it miss it and process it the next time it runs? Will it error and crash? Will it error and corrupt the data?
[James’ Reply: Okay. How specifically do you determine what the interactions are?]
I like to use a whiteboard and draw up what I think is happening between the applications involved in the integration test, I usually do this with a developer so they can provide input as well.
antoine bourse says
What a BIG subject. Lots of people perform integration testing without knowing it; people do things called “module testing” (a module being a set of files related to the same functionality/goal in the same layer). But what is the difference between checking that some functions of the same module fit together and checking that some functions of different modules fit together? If you copy all the source files into the same directory, the modules disappear.
So now, what do we do?
We have a design (module detailed design, SW architecture, SYS architecture), this design specifies:
– resource sharing
– other things that I surely forgot to mention 🙂
Design doesn’t specify what to do but HOW to do it, and it must be checked by test or verification.
The point which is not so clear is the functionality: you know the expected functionality of the module, you know the expected functionality of the full stack or software, but do you know the exact expected functionality of a set of integrated elements?
Integration testing is the means to check that the way we chose to perform a functionality (with respect to timing, safety, security…), realized by more than one function, works in all cases.
(short answer for a big subject; too many things in my head and not enough courage to write it all at the moment 😀 )
Brian Christian says
I would say we have two types of integration.
– Strongly integrated
– Softly integrated
In integration we can start by saying it’s a coupling of two things or objects. This coupling in software is the way information is passed from one object to another and/or how it can be used. And how they are coupled reveals whether it’s a strongly coupled object or a loosely coupled one.
When I look at integration testing I first determine: “Can I get in between the UI layer or under it? Is there any way to expose it, and what does it look like if it is exposed? If exposed, what can I do with it?”
In a strongly integrated system, I can only test from endpoint to endpoint. I can’t access the communication layer with the exception of perhaps observing calls by capturing the calls as I stand in between. This is much more difficult to test as my access is limited and learning the circumstances that reveal interesting behavior is difficult. This also adds a layer of concern when code changes are made to one object as they can and often do directly affect the other which sets my teeth on edge trying to probe and observe for that interesting behavior.
Whereas a more loosely coupled integration may offer the opportunity to mock or pretend to behave as the other object, something that I find useful when trying to test integration. Not to say this can’t be done in a more tightly coupled integration, but your ways to pry into the application are not as abundant.
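The difference between those two situations can be sketched as a seam, or the lack of one (all class and method names here are invented for illustration):

```python
from unittest import mock

# Hypothetical "real" collaborator, standing in for a live dependency.
class RealGateway:
    def submit(self, amount):
        return "charged over the real network"

# Loosely coupled: the collaborator is injected, so a test can substitute
# a pretend object that behaves as the other side of the integration.
class PaymentService:
    def __init__(self, gateway):
        self.gateway = gateway  # the seam: any object with .submit() works

    def charge(self, amount):
        return self.gateway.submit(amount)

fake_gateway = mock.Mock()
fake_gateway.submit.return_value = "approved"
svc = PaymentService(fake_gateway)
assert svc.charge(100) == "approved"

# Tightly coupled: the collaborator is constructed internally, so there is
# no seam to substitute at -- testing is endpoint-to-endpoint only.
class TightService:
    def charge(self, amount):
        gateway = RealGateway()  # hard-wired; cannot be redirected from a test
        return gateway.submit(amount)
```

The loose version lets a tester pretend to be the other object, as described above; the tight version only lets you observe from outside.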
I think understanding how something is coupled, is the beginning of understanding integration testing.
Giuseppe Torchia says
Key factors are both the nature of the systems and the relation between them.
How they communicate between themselves doesn’t matter.
” One creates an XML, the other reads it. Neither knows about the other.”
We need to decide what “the other reads it” means.
Who is the owner of knowledge?