
Bug hunting - Where to look

In my previous post I wrote about knowing your prey. In this post, I’ll talk about good places to go hunting. All these tips are dependent on knowing the application you are testing. The better you know the ground, the more you will know where to look and the easier it will be to spot when something’s not right.

Functionality bugs

This is where most testers spend most of their time looking. Testers are pretty good at finding bugs here because they can be uncovered using your normal boundary value analysis type tests and other simple destructive tests, like putting rubbish values into input fields.
They will enjoy a brief period of success when they find a few high priority bugs. Sadly, many testers then continue to invest their time running less and less important tests of this type, and find less and less important defects. A more experienced tester will recognise the rapidly diminishing rate of return on the investment of his time, and have a look in some other areas instead.
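To make the boundary value analysis idea concrete, here's a minimal sketch. The `validate_age` function and its 0–130 range are entirely hypothetical, invented for illustration; the point is the pattern of testing on, just inside, and just outside each boundary, plus a few rubbish values:

```python
# Hypothetical example: an input field that should accept ages 0-130 inclusive.
# validate_age is a made-up function standing in for the system under test.

def validate_age(value):
    """Accept integer ages from 0 to 130 inclusive; reject everything else."""
    if not isinstance(value, int) or isinstance(value, bool):
        return False
    return 0 <= value <= 130

# Boundary value analysis: probe on, just inside, and just outside each boundary.
cases = {
    -1: False,    # just below the lower boundary
    0: True,      # lower boundary
    1: True,      # just inside
    129: True,    # just inside
    130: True,    # upper boundary
    131: False,   # just above the upper boundary
}
for value, expected in cases.items():
    assert validate_age(value) == expected

# Simple destructive tests: rubbish values in the input field.
for rubbish in ["abc", None, 3.5, "", True]:
    assert validate_age(rubbish) is False
```

Tests like these are quick wins, which is exactly why the returns diminish so fast once the obvious boundaries have been covered.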

A trap so-called “automation testers” or “developers in test” often fall into is limiting their testing to this area, because these tests are easy to express in test code and easy to make pass.

Integration bugs

Integration testing is an often discussed and frequently practised type of testing, but again the temptation is to gain false confidence by testing only one aspect of it. Can the two systems talk to each other and does data that is sent from System A successfully arrive in System B? Yes? Good! A lot of integration testing stops at this point, but there’s much more fun to be had.

What happens if the message doesn’t arrive successfully in System B? A number of things could happen. Data in the two systems could get out of sync, all subsequent messages could get locked up behind the one that didn’t get through, or maybe no one would notice that connectivity had been lost until days later.
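One cheap way to explore these failure modes before going anywhere near real infrastructure is to simulate them. This sketch (all names hypothetical) models a strictly ordered outbound queue from “System A” and shows the head-of-line blocking problem: one undeliverable message strands everything behind it:

```python
# Hypothetical sketch: a strictly-ordered outbound message queue.
# One message that repeatedly fails to deliver blocks everything behind it.
from collections import deque

def drain(queue, deliver, max_attempts=3):
    """Deliver messages in order; stop if one fails max_attempts times."""
    delivered = []
    while queue:
        message = queue[0]
        for _attempt in range(max_attempts):
            if deliver(message):
                delivered.append(queue.popleft())
                break
        else:
            # Message could not be delivered: everything behind it is stuck.
            return delivered, list(queue)
    return delivered, []

# Simulate System B rejecting one poison message.
outbox = deque(["m1", "poison", "m2", "m3"])
delivered, stuck = drain(outbox, deliver=lambda m: m != "poison")
assert delivered == ["m1"]
assert stuck == ["poison", "m2", "m3"]
```

A test like this forces the question the happy-path integration test never asks: what is the recovery story when delivery fails?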

Systems operate in real life, and sadly real life involves outages that happen for any number of reasons. A good tester will want to know how his system behaves in this part of the life cycle.

Data quality bugs

Any experienced tester will tell you that creating useful quantities of test data is hard work. As a result, one of two things frequently happens. Sometimes the test data is created specifically for the tests. This is great for proving the various happy path tests, but this artificially created data often doesn’t reflect the myriad different types of data the system will have to deal with in real life. Alternatively, the only data available is a copy, or more likely a subset, of the live data, and it can often be hard to get the specific data sets you need to test all your edge cases. In both cases, it’s easy to find there are tests you might want to run, but can’t.

Another pitfall is to assume that all the data in the production environment is of good quality and is complete. All too often it isn’t. How will your system cope when it encounters bad data?
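A useful exercise is to feed an import routine the kind of mess that real production extracts contain. This sketch is hypothetical (the `import_customers` function and its CSV layout are invented for illustration), but it shows the shape of the question: does bad data crash the load, silently corrupt it, or get quarantined?

```python
# Hypothetical sketch: probing how an import routine copes with bad data
# of the kind that turns up in real production extracts.
import csv
import io

def import_customers(csv_text):
    """Parse customer rows, quarantining bad rows instead of crashing."""
    good, bad = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        name = (row.get("name") or "").strip()
        raw_year = (row.get("joined") or "").strip()
        if name and raw_year.isdigit():
            good.append({"name": name, "joined": int(raw_year)})
        else:
            bad.append(row)  # missing name, free text in a numeric field...
    return good, bad

# Realistic mess: a missing field, stray whitespace, junk where a year belongs.
sample = (
    "name,joined\n"
    "Alice,1999\n"
    ",2001\n"           # missing name
    "Bob,unknown\n"     # free text in a numeric field
    "  Carol  ,2010\n"  # untrimmed whitespace
)
good, bad = import_customers(sample)
assert [c["name"] for c in good] == ["Alice", "Carol"]
assert len(bad) == 2
```

Whatever the intended behaviour is, a test like this documents it, and it usually flushes out the rows nobody had decided how to handle.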

Environmental issues

Have you ever worked anywhere with stable and plentiful test environments that accurately represent the live environment? Me neither. Just because something works in your test environment doesn’t necessarily mean it will work in your production environment, where the configuration and architectural topology are likely to be very different. For example, connectivity between the different systems in your test environment might be dead easy because everything sits together in one virtualised environment, but your live environment is likely to have all sorts of firewalls and de-militarised zones in the way.
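You can’t make a test environment identical to production, but you can at least make the differences visible. A minimal sketch, assuming your configuration can be read into plain dictionaries (the keys and values below are invented for illustration):

```python
# Hypothetical sketch: flag configuration keys that exist in one environment
# but not the other, so the drift is at least known rather than discovered live.

def config_drift(test_cfg, live_cfg):
    """Return the keys present in only one of the two environments."""
    only_test = sorted(set(test_cfg) - set(live_cfg))
    only_live = sorted(set(live_cfg) - set(test_cfg))
    return only_test, only_live

test_cfg = {"db_host": "test-db", "queue_url": "local", "use_proxy": False}
live_cfg = {"db_host": "prod-db", "queue_url": "mq.internal", "firewall_rules": "strict"}

only_test, only_live = config_drift(test_cfg, live_cfg)
assert only_test == ["use_proxy"]
assert only_live == ["firewall_rules"]
```

It won’t catch firewalls or network topology, but it turns one class of “works on test, dies on live” surprise into a report you can read before release.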

I’m sure you’ll think of many more places to look. Please leave your ideas in the comments below!