Category: Software testing and QA Management


Faith-based testing

Imagine you have two testers in front of you, both giving you a report on the testing they have just done. Whose report do you trust more?

"The application is completely tested and all the bugs have been found and fixed!"

or:

"All the tests that were planned have been run, and all the bugs that were found have been fixed."


Testers are scientists

I love testing for the same reason that I love science. To be a scientist is to be curious about the world around you and learn about it through experimentation. To be a tester is to be curious about the software around you and learn about it through experiments that we call tests. We can benefit greatly from applying the scientific method to our testing.

In introducing me simultaneously to scepticism and to wonder they [my parents] taught me the two uneasily co-habiting modes of thought that are central to the scientific method.
– Carl Sagan

Software behaves differently to the physical cosmos. The laws of physics are constant, but the laws of software are infinitely fluid and different on every system you will test. It’s as if we testers travel to a new universe with every new application. We must approach each new universe with a sense of wonder and curiosity about the laws that govern it. The requirements tell us how it should behave, and the developers who made it tell us how they think it behaves. But we must be sceptical about all that we are told, and learn for ourselves how each new testing universe really behaves.

Turning hypotheses into theories

The word “theory” is often misused in a dismissive way: “The theory of evolution is only a theory”. This misuse suggests that a theory is just someone’s idea that might not be true. This is just wrong. A theory is a model that has been confirmed by many observations, and can be used to predict the results of future observations. For example, the theory of gravity is based on a great many experiments and measurements, starting with Newton’s, and is really useful in predicting, with great accuracy, things like how narrowly a passing asteroid will miss Earth.

The wonderful Carl Sagan. There are few humans who couldn’t learn from this amazing man.

This is different to a hypothesis, which is a proposed explanation without any expectation or evidence of its truth. This is often what people mean when they say they have a theory about something. They actually have an idea about something, which demands to be tested to see whether it’s true or not. The development of a hypothesis into a theory, and the subsequent revision of that theory as we learn more, is the foundation of how to do science. Science isn’t perfect, but it is by far the best tool we have to learn about and describe the world, or in our case, our software. We must approach testing using the same technique scientists use to learn about the world.

1. Ask a question

First, be curious. What will happen if I do this? How does this part of the system work? Are there any security flaws in this system? Is this system able to handle the demand from all our customers?

2. Do some research

Learn as much as possible about what you are researching or testing. What other research and testing has been done before? What was learned and what theories were strengthened or destroyed by this work? What methodological problems were experienced and how were they overcome?

It’s also good to get as familiar with the territory as possible. There’s no point trying to do a complex experiment in quantum mechanics unless you’ve got a strong understanding of the physics first, and likewise there’s no point trying to run tests against a complex application unless you have a strong understanding of the product first. A tester must be an expert on the product before he can be an expert in testing it.

3. Come up with a hypothesis

Hypotheses can be positive or negative. Proving a positive hypothesis increases your confidence in the quality of that part of the system under test; disproving it results in a bug or bugs being raised. Proving a negative hypothesis results in a bug being raised; disproving it increases your confidence in the quality of that part of the software under test.

In testing, we set out to unequivocally prove or disprove hypotheses.
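To make this concrete, here is a minimal sketch of how a pair of hypotheses might be expressed as tests. It is pytest-style Python against an entirely hypothetical app fixture; the method names are illustrative, not a real API.

    # Positive hypothesis: "A saved draft survives a page reload."
    # Proving it builds confidence; disproving it means raising a bug.
    def test_saved_draft_survives_reload(app):
        app.write_draft("hello")  # `app` is a hypothetical fixture
        app.save()
        app.reload()
        assert app.draft_text() == "hello"

    # Negative hypothesis: "An expired session token still grants access."
    # Proving it means raising a bug; disproving it builds confidence.
    def test_expired_token_is_rejected(app):
        token = app.issue_token(ttl_seconds=0)
        assert app.access_granted(token) is False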

Hypotheses can also be used to describe why certain bugs happen. This is very helpful in focussing investigative work. For example:

The error message is received because the firewall is rejecting the in-bound message

There may be several such competing hypotheses, each suggesting a different cause for the problem. Each hypothesis can then be tested individually, disproving the false ones until only the true cause remains.
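As a sketch of how one such hypothesis might be isolated and tested on its own: if the firewall really is rejecting the in-bound message, the service’s port should be unreachable from the sending side. The host and port below are hypothetical stand-ins.

    import socket

    def port_is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
        """True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Hypothesis: "The firewall is rejecting the in-bound message."
    # If the port is reachable from the sending side, the hypothesis is
    # weakened and we move on to testing the next candidate cause.
    print(port_is_reachable("app.example.test", 8443))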

4. Design an experiment to test the hypothesis

One of the great things about the scientific method is that the results from experiments are published for peer review with enough information for the reviewing peers to repeat the experiments for themselves and see if their results are the same. This is also a perfect lesson for testers designing test scripts.

Scientific experiments are designed to isolate and measure the effect of just the one variable that is under test. Everything else in the experiment stays the same, including the method of measurement.
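The same discipline translates directly into test design. A minimal sketch, assuming a hypothetical validate() function under test: every case starts from the same fixed baseline and changes only the age field, so a failure can be attributed to nothing but the variable under test.

    import pytest

    def validate(payload: dict) -> bool:
        """Stand-in for the system under test (hypothetical): accepts
        registrations with a plausible age."""
        return 0 < payload["age"] < 130

    BASELINE = {"name": "Ada", "age": 36, "country": "GB"}

    # Only the age varies; every other field stays at the baseline.
    @pytest.mark.parametrize("age,expected", [
        (-1, False), (0, False), (1, True), (129, True), (130, False),
    ])
    def test_age_is_the_only_variable(age, expected):
        payload = {**BASELINE, "age": age}
        assert validate(payload) == expected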

5. Conduct the experiment

Sadly, conducting testing experiments is often the least interesting part of the job. It can be seen as a mundane process of following instructions and clicking buttons over and over again.

When conducting the experiment, we must be engaged, dispassionate and unemotional. We must record what we see, whether or not it supports an eager project manager’s aspirations, and whether or not it supports our own expectations. This is where test automation can be really helpful.

We tend not to be very critical when confronting evidence that supports our conclusion.
– Carl Sagan

Imagine the scene: a tester has spent all day testing something; he’s feeling good about it, all his tests have passed and he’s looking forward to signing it off. But then something weird happens that doesn’t seem to be reproducible. Does he gloss over it, put it down to a “glitch” and sign off, or does he suddenly doubt his previous tests and redouble his efforts to reproduce the unexpected result? This is when the psychology of a true tester is most apparent.

6. Analyse the results

The human brain can switch off and not notice anomalies, or, equally bad, can start to make up excuses for the anomalies we do see. Although the human brain is brilliant at spotting patterns, it can sometimes see patterns that aren’t there.

Respect the facts, even when they are disquieting and they seem to contradict conventional wisdom.
– Carl Sagan

7. Construct a theory

The more tests you have run in a particular area, the more you learn about it, and you will soon be able to come up with a theory. The theory might be that a particular component always fails under certain specific conditions, or that the system is secure against the three most common forms of security breach, or anything else that is relevant. For any theory you propose, you must have the evidence to support it.

8. Repeat the process to strengthen or revise your theory

Once you’ve got your theory, every further test you run will either strengthen that theory and give you more confidence in it, or will disprove your theory.

Many positive test results can support a theory. It only takes one to destroy it.
– Stephen Hawking

Stephen Hawking would be a great tester. He’s got even bigger things on his mind though.

Depending on how you look at it, this can be either dispiriting or uplifting. (Assuming, of course, the test that destroyed the theory is valid, repeatable, reliable, etc.) A true scientist will be excited by his theory being blown. He will think, “Wow! Everything we thought about this is wrong! Something else is happening that we don’t understand! We need to think harder and differently and go back to the drawing board to try to understand this! The world is not as it seems! How exciting!”

We must let reality speak for itself, supporting a theory when its predictions are confirmed and challenging a theory when its predictions prove to be false.

What this means for testers

The ability to think and reason is what separates us from the machines. This is the stuff that human beings are best at. Let’s let the machines do the boring, dirty and dangerous tasks of collecting the evidence for us, so we can spend as much of our time as possible thinking critically.

 


Putting our trust in the machines

We’ve been sold a dream about test automation, one that has become deeply unfashionable and almost taboo to question. Whilst test automation undoubtedly has its place, testers should be candid about its limitations. Only then can we apply test automation where we will get the most from it.

Automated tests don’t tend to find very many bugs

Automated testing is best suited to the least interesting kind of tests. The tests are typically highly repetitive and test one thing at a time on a completely stubbed-out system, with artificial test data and nothing else running on the environment. There is value in running these tests but, as we know from my previous post on where to find bugs, this covers only a small fraction of the different places where bugs tend to cluster.

In my experience, the huge majority of automated tests that fail do so because of environmental, data or connectivity problems, or because the test itself is not reliable.

Only seldom do automated tests fail because they have found a bug.
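One way to stop such failures masquerading as product bugs is to check the environment before the test runs. A minimal pytest sketch, with a hypothetical host and port standing in for the system under test: if the environment is unreachable, the test is skipped with a clear reason rather than reported as a red failure.

    import socket

    import pytest

    @pytest.fixture(autouse=True)
    def require_reachable_environment():
        """Skip, rather than fail, when the environment itself is the problem."""
        try:
            socket.create_connection(("app.example.test", 8443), timeout=2).close()
        except OSError:
            pytest.skip("environment unreachable: infrastructure problem, not a product bug")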

Automated tests are designed to pass, not to fail

A failed automated test is a bad thing. It means an embarrassing red background on a build monitor and time spent finding excuses for why the test failed. It means maintaining the test, or in other words changing it so it passes. Consequently, automated tests are designed to run reliably. This is not necessarily the same thing as a test that is designed to exercise the system and find a bug if there is one.
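The difference shows up in the assertions. A small illustrative sketch, where charge() is a hypothetical stand-in for the system under test: the first test is designed to run reliably and asserts almost nothing; the second is designed to fail if the behaviour is actually wrong.

    def charge(amount_pence: int) -> dict:
        """Hypothetical stand-in for the system under test."""
        return {"status": "ok", "charged": amount_pence}

    def test_designed_to_pass():
        # Goes green reliably, but almost any behaviour would satisfy it.
        assert charge(100) is not None

    def test_designed_to_find_bugs():
        # Pins the behaviour down: a regression in status or amount fails it.
        result = charge(100)
        assert result["status"] == "ok"
        assert result["charged"] == 100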

Automated tests are just more code

Test code is just as liable to contain bugs as the code it’s trying to test, if not more so. Developers don’t make good testers, and testers don’t make good developers. Some people like to build things. Other people like to smash things up. It will take a tester longer to write the code, and the code will often be verbose, inefficient and prone to failure.

Automated tests turn experienced test analysts into junior developers, and sell this as progression

Using and maintaining automated tests is really boring

The dream we’ve been sold is that once our automated tests are written, they run and run in perpetuity. This is simply not true. A failing automated test ought to trigger an immediate response: work out what failed and why, get the defect fixed, get the test updated if necessary, and re-run it. Running automated tests, analysing the results and maintaining the tests takes a lot of time, and it’s not interesting work for skilled testers. It doesn’t exercise any of their creative and destructive skills. So we hire excellent testers who are well equipped to aggressively test our products, and we turn them into servants of the build machine, desperately trying to feed it tests that it will not fail. This is a waste of talent and money.

It takes ages to write automated tests

Ants on a lollipop. Image credit – http://bit.ly/6ojRy

I find it dismaying to see huge amounts of time being spent persuading a small number of automated tests to run when that time could be spent testing some software. All testers know that the earlier in the software development process bugs are found, the easier, cheaper and quicker they are to fix. And yet so often a tester’s first action on being presented with a new feature set is to start writing test code. I want to see testers all over that new feature like ants on a dropped lollipop. Instead, I see testers meticulously designing tools to lick the lollipop for them.

Everything changes all the time

The nature of agile development is that the rate of change of the application is very high. Huge proportions of the code base can be replaced, added to, dramatically changed or even removed with each iteration. We no longer operate in a world where we build a component, test it, then never touch it again for months and happily let automated regression tests fuss over it every now and again. This means a very high maintenance overhead for automated tests, because large numbers of them will need maintaining all the time. I’m sure we’ve all seen testers who struggle to balance adequately testing the stories in their current sprint with automating the stories from the previous sprint, updating and fixing the automated tests from two sprints ago, and running all these automated tests and analysing the results. These poor testers get further and further behind with their automation whilst spending more and more time on it and less and less time testing. The end game is that very few of the automated tests run usefully, and the tester has no time to do any other manual testing. This sucks.

Repetition should not breed confidence

The most junior tester who’s learned about equivalence partitioning will tell you that it’s a waste of time to test every value in the same partition, because they are, by definition, equivalent. And yet we see automated tests do exactly that.
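Here is a sketch of what that junior tester would write instead, assuming a hypothetical is_adult() rule under test: one representative value per partition plus the boundary values, rather than the same partition exercised thousands of times.

    import pytest

    def is_adult(age: int) -> bool:
        """Hypothetical rule under test: adults are 18 or over."""
        return age >= 18

    @pytest.mark.parametrize("age,expected", [
        (17, False),  # boundary: just below
        (18, True),   # boundary: exactly on
        (5, False),   # one representative of the "child" partition
        (40, True),   # one representative of the "adult" partition
    ])
    def test_is_adult(age, expected):
        assert is_adult(age) == expected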

The paper-thin veneer of confidence that only the same test passing 10,000 times brings

Imagine you had an automated test that ran four times a day for six months and passed every time. What does this really tell you? Does it tell you the software is perfect, or does it tell you the test is lightweight, or does it tell you that it’s not necessary to run that test so often and you should consider writing a different test instead?

Tools are passing fashions

Tool choice breeds resentment like almost nothing else in the world of IT. Some people have an almost religious fervour about the tools they love and the tools they have decided to hate, often for arbitrary reasons. One tester may spend six months writing tests using one tool, only for those tests to be ignored by his successor, who prefers a different tool. What a waste! Again, all this time could have been spent finding bugs rather than writing disposable code.

The best testing tool ever made is the human brain

So you’re saying we shouldn’t bother with automation?

No, I’m saying we should do automation properly. When done well, by the right people, with the right attitude towards both testing and development, it can be brilliant. But sadly much of it is a long way from being brilliant. I think we as testers should ask ourselves very seriously whether it’s the best way to be spending so much of our time. Here are my recommendations:

  • Get people doing what they’re good at. Have experienced testers do test analysis and design, and get junior developers to work with them and write the test code and run and maintain the tests. Once an automated test suite gets above a certain size, it can easily be a full time job maintaining, managing and using it. Managers need to acknowledge the need for this headcount.
  • Be very picky about what is worth automating and what isn’t, and be honest about the true cost and value of the resultant automated tests.
  • Understand what automated unit or component tests have been written by the developers and guide them in making these tests effective. Don’t try to duplicate them with automated functional tests.
  • Be highly aware of the time you spend that is not actually spent testing. Keep a log of this and show it to your manager.


The lost art of writing tests - Part I

One of the most important skills a test analyst can have is his ability to write good test scripts. The main point is you must write your tests assuming that someone else will have to execute them.

Imagine you have just started a new job and have inherited a suite of a thousand tests from your predecessor. After some initial training on the application, you are asked to do a regression test by executing all the scripts you have inherited. But when you try to execute the tests, you are unable to do so, because:

  • The tests assume that you know things that you don’t.
  • The tests are written in sloppy shorthand English and are difficult to understand.
  • The tests mention things that don’t seem to exist in the application, leading you to believe the tests are out of date.
  • Many of the tests appear to be the same.
  • All the tests are marked as High priority.

With test cases in this state, is it reasonable to expect you to effectively test the system? Is it likely you will be able to execute the tests in the same amount of time it used to take your predecessor? No.

If the guidelines for writing test cases can be summarised in one simple point, it is this:

Write your tests assuming that someone else will have to execute them.

In this post I make no distinction between manual test scripts and automated test scripts. Exactly the same rules apply. Tests are tests, irrespective of whether it’s you executing them manually, someone else executing them manually, or a computer executing them for you.

In this first post in a series of two, I will talk about the ISEB (or ISTQB) characteristics of good test cases. In the second post, I will expand and improve on these with my own set of characteristics.

ISEB’s four E’s

The ISEB Foundation in Software Testing[1] syllabus lists the characteristics of a good test case as follows:

  • Evolvable
  • Economic
  • Effective
  • Exemplary

Let’s examine each of these.

Evolvable

Software changes over time as new features are added and bugs are fixed, so our tests also need to be updated to reflect the changes in the software.

Because maintaining test cases is time-consuming (and therefore expensive) and uninteresting, they should be written in such a way that they need little, if any, maintenance. And if they do need maintenance, it should be easy to do.

Ideally, once you’ve written your test you should be able to execute it again and again, without ever having to edit it. If you do have to edit the test each time you execute it, the test is not “evolvable”, and you will waste lots of time editing it.

Tests that are “evolvable” are sometimes described as “future proof”, “low maintenance” or “repeatable”.

You can make your tests evolvable by considering the following:

  • Use templates rather than copying steps. A group of tests will often have the same pre-requisites. It is boring, a waste of time and error-prone to copy the same pre-requisites into each test, so write them once, and create a template. Reference that template wherever you need it. You then only have to maintain the one template, not several copies of the same pre-requisites. Likewise, if you know a series of steps will be repeated in many tests, create them as a template.
  • Avoid being unnecessarily specific about things that may change (the sketch after this list shows the automated-test equivalent). For example:
    • “Step 3 – Click the blue button labelled ‘Submit’ in the bottom right-hand corner of the screen” – this is not evolvable, because a design change may make the button green or re-label it “Continue”. If this happens, your test will need maintenance to stay valid.
    • “Step 3 – Submit the form” – this is evolvable, because no matter what colour the button changes to or how it is labelled, the test will still be valid. The test doesn’t mind what or where the button is, as long as the form can be submitted, which is what is important.
  • Avoid referring explicitly to elements of the page which are parameter-driven or dynamic. For example:
    • “If the user is inactive for more than 30 minutes, he is timed out and must re-authenticate.” – This is not evolvable, because the time-out limit is likely set by a parameter. If the parameter is changed to 15 minutes, the test becomes invalid.
    • “If the user is inactive for longer than the time limit set in the time-out parameter, he is timed out and must re-authenticate.” – This is evolvable, because the test is still valid whatever the parameter is set to. (In this instance it would be a good idea to mention where the parameter can be seen, and set.)
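The same principle applies to automated tests. A minimal Selenium-flavoured sketch (the element id is hypothetical): every test calls the helper instead of locating the button itself, so a re-designed button means maintaining one function rather than hundreds of tests.

    from selenium.webdriver.common.by import By

    # Not evolvable: every test encodes the button's label and position.
    #   driver.find_element(By.XPATH, "//button[text()='Submit']").click()

    def submit_form(driver):
        """An evolvable "template" step: tests say what (submit the form),
        this helper knows how. A re-labelled or re-coloured button means
        updating only this function."""
        driver.find_element(By.ID, "order-form").submit()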

Economic

Tests should be written so they are quick, easy and cheap to execute. Any test that takes a long time to execute, or is difficult and complex to execute, is unlikely ever to be executed properly. A test that is simple and quick, can be run at any time and is repeatable is likely to be useful and executed willingly.

For example, imagine you needed to test a batch job that runs every night at midnight. Would you want to write your test so that it needed the tester to be in the office at midnight in order to see the batch job running? This would not only make the tester unhappy, it is also a test that can only be run once every 24 hours.

It would be far better to write the test in such a way that the tester could kick off the batch job at any time (possibly by setting the server time to 23:59, or by manually intervening and forcing the batch job to run), which would then allow the test to be run during normal working hours, as many times as necessary, whenever it is needed.

You can also make your tests economic by ensuring you have control over system parameters. To refer back to the earlier example where we were discussing a test of the system timeout, rather than setting the timeout to 30 minutes, you could set it to 2 minutes. This means that the person executing the test does not have to sit around being unproductive for 30 minutes waiting for his session to timeout. He would only have to sit around being unproductive for 2 minutes!
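In automated form, the same idea looks something like this sketch. The app and client fixtures and the configuration hook are hypothetical; the point is that the time-out is a parameter the test controls, which makes the test cheap enough to run whenever you like.

    import time

    def test_inactive_session_times_out(app, client):
        # Drop the time-out from 30 minutes to 2 seconds so neither the
        # tester nor the build sits idle for half an hour.
        app.set_parameter("session_timeout_seconds", 2)  # hypothetical hook
        client.log_in("testuser")                        # hypothetical fixture
        time.sleep(3)                                    # exceed the time-out
        assert client.is_logged_in() is False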

Effective

If a test passes without finding a bug, we can be either:
a. Confident there is not a bug, or
b. Concerned that our test was not effective.

So how can we be confident, and not concerned? By making our tests “effective”! If there is a bug, an effective test will find it. A good way of gaining confidence in the system is by trying hard to find bugs, but failing to find any.

A common mistake is to write only tests that examine whether the system does what it is meant to do. Tests of this nature can be derived easily, simply by translating the requirements document into more test-like language. These tests may well be valid, but without tests that also examine whether the system does what it is not supposed to do, you will not have an effective set of tests.
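As a small illustrative sketch, with a hypothetical parse_quantity() function standing in for the system under test: the positive test confirms the system does what it should, while the negative tests probe what it must not accept.

    import pytest

    def parse_quantity(text: str) -> int:
        """Hypothetical function under test: order quantities are 1 to 99."""
        value = int(text)  # raises ValueError for non-numeric input
        if not 1 <= value <= 99:
            raise ValueError("quantity out of range")
        return value

    def test_valid_quantity_is_accepted():
        assert parse_quantity("10") == 10

    # Negative tests: is everything the system should reject actually rejected?
    @pytest.mark.parametrize("bad_input", ["0", "100", "-3", "ten", ""])
    def test_invalid_quantities_are_rejected(bad_input):
        with pytest.raises(ValueError):
            parse_quantity(bad_input)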

Consider the following to make your tests effective:

  • Think about what bugs there could be, and write tests that would find them
  • What bugs have been found in similar applications?
  • Risk Based Testing techniques
  • What bad things could happen to this piece of functionality, and how does the software behave when they do happen?
  • Boundary Value Analysis / Equivalence partitioning
  • Negative tests
  • Branch / Statement coverage
  • …and all the other testing techniques.

Exemplary


It seems that in an attempt to make all these characteristics start with the letter E, the final characteristic, “Exemplary”, is somewhat forced. ISEB’s interpretation of “Exemplary” is “A good test case is exemplary, meaning that it does more than one thing for us (is capable of finding more than one fault)”.

There are two arguments against this:

This is not what exemplary means. To summarise various entries from various dictionaries[2], exemplary means serving as an illustration, specimen or model and being worthy of imitation. It does not mean “does more than one thing”.

If we ignore the label “Exemplary” and focus on the definition “is capable of finding more than one fault”, we still have a problem. If a good test is capable of finding more than one fault, then it is not going to be a specific test designed to find a particular bug. If a test fails, it should be explicit what the problem is; but if the test could cause all sorts of things to fail, how do we know which of them failed and which passed? If you take the ISEB recommendation literally, you could reasonably end up writing one enormous test that covers the entire system, justifying your lunacy by claiming that if a good test is capable of finding more than one fault, a truly wonderful test would be capable of finding all the faults, so we should have as few tests as possible, each covering as much ground as possible, the result being one huge test. This would clearly be undesirable, to say the least[3].

If however, we interpret “Exemplary” as meaning that you should be able to take each test, and hold it up as an example of how all other tests should be written then I am somewhat happier. However, the previous three E’s are intended to be the “means” to the “end” of achieving a good test, but exemplary is definitely an “end”. I would have no objections to saying the following:

“If your tests are Evolvable, Economic and Effective, then they will be Exemplary!”

This is both a correct use of words, and good advice.

We can do better than the four Es

The four Es are a good starting point, but read my next post for my improvement on this.

How to: User Acceptance Testing

If you’re not a tester and you’ve been asked to plan and run a User Acceptance Testing (UAT) phase, you need to know what to do. Perhaps you’re a project manager who knows you need to have some UAT, or perhaps you’re a product owner who’s been asked to do some. This post is for you!

Work out what it is you’re trying to prove

  • Decide whether it’s about User testing, or Acceptance testing, or whether we’re just talking about project acceptance, which may or may not involve additional testing.
  • Work out what it is you need to see in order to be confident the system supports your business processes.

Get involved early, before the code is written

  • UAT should be as much about preventing bugs from being written / misunderstandings from happening as it is about finding bugs.
  • Any UAT must be consistent with the requirements. Therefore whoever is going to be doing UAT must have been involved in the requirements gathering. I like to think of this as Requirements Acceptance Testing (RAT!)
  • Attend showcases and spend time looking at early versions of the software, accepting that it will be incomplete and buggy because it’s still being functionally tested.
  • By having early conversations and agreements about UAT, you can probably avoid having to generate test documentation that no-one will ever read.

Work with the functional testers. They are your friends

  • Understand the testers’ skills, limitations and motivations.
  • Have them explain what their tests are doing and understand the test coverage they are providing.
  • Explain what you’re looking for in your acceptance tests, and see whether they are already doing similar tests. (If testers know what UAT you are planning, they will want to do these tests themselves first so that when they hand it over they know it will work.)
  • Get the testers to spend some time with the users so they get a better idea of how the software will be used.
  • There’s no value whatsoever in UAT repeating tests that have already been run by functional testers. Now read this bullet again.
  • Do your acceptance testing in the company of a tester. He will be very interested in watching you and making sure his functional test scripts are updated to include tests that you are doing that he hasn’t covered.
  • Make sure that functional tests are updated to cover any UAT bugs you find.
  • Testers have expertise in raising defects, getting them across to developers and managing the defect life cycle. Let them do it.

Happy / unhappy paths

There are more than two paths and they all need to be considered.

  • Perfect path – The ideal, error-free route through the journey. This never happens in real life.
  • Typical path – The user will make some errors and hunt around for a bit before completing the task.
  • Unhappy path – The user gets confused, lost, abandons and re-starts journeys, makes the same mistake repeatedly, etc.
  • Evil path – The user is maliciously trying to break / hack the system. This person probably does testing for a living. If he doesn’t, he should.

Use the users

In my experience, most of the embarrassing bugs that elude testers and UATers are discovered in Production within 24 hours of the system going live. Many are found in the first few hours. So get the people who typically find these bugs to do, in the test environment before go-live, whatever it is they plan to do in Production in the first 24 hours after go-live. This is really the crux of UAT.

Don’t bother writing UAT scripts

  • Writing good test scripts is a specialist skill. Even most professional testers are not very good at it. UATers are even worse.
  • If your requirements and acceptance criteria are good, they should also serve as your tests.
  • If you think of things you want to UAT that don’t feature in your requirements, then your requirements aren’t good enough. Expecting features to be present that don’t appear in your requirements is to believe in magic.

Bug hunting - Where to look

In my previous post I wrote about knowing your prey. In this post, I’ll talk about good places to go hunting. All these tips are dependent on knowing the application you are testing. The better you know the ground, the more you will know where to look and the easier it will be to spot when something’s not right.

Functionality bugs

This is where most testers spend most of their time looking. Testers are pretty good at finding bugs here because they can be uncovered using your normal boundary value analysis type tests and other simple destructive tests like putting rubbish values into input fields.
They will enjoy a brief period of success when they find a few high priority bugs. Sadly many testers then continue to invest their time running less and less important tests of this type and find less and less important defects. A more experienced tester will recognise the rapidly diminishing rate of return on the investment of his time, and have a look in some other areas instead.

A trap that so-called “automation testers” or “developers in test” often fall into is to limit their testing to this area, because the tests are easy to express in test code and easy to make pass.

Integration bugs

Integration testing is an often discussed and frequently practised type of testing, but again the temptation is to gain false confidence by testing only one aspect of it. Can the two systems talk to each other and does data that is sent from System A successfully arrive in System B? Yes? Good! A lot of integration testing stops at this point, but there’s much more fun to be had.

What happens if the message doesn’t arrive successfully in System B? A number of things could happen. Data in the two systems could get out of sync, or all subsequent messages could get locked up behind the one that didn’t get through, or maybe no-one would notice that connectivity had been lost until days later.

Systems operate in real life, and sadly real life involves outages that happen for any number of reasons. A good tester will want to know how his system behaves in this part of the life cycle.
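A sketch of the kind of test this implies, with entirely hypothetical system_a and system_b fixtures controlling the two ends of the integration: the interesting question is not whether the message arrives, but what happens while it cannot.

    def test_messages_survive_an_outage_of_system_b(system_a, system_b):
        system_b.stop()                  # simulate the real-life outage
        system_a.send({"order_id": 42})  # send while the receiver is down
        system_b.start()                 # connectivity is restored

        # The message should be delivered late, not lost, and nothing
        # should be stuck behind it.
        assert system_b.received(order_id=42, within_seconds=30)
        assert system_a.unsent_backlog() == 0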

Data quality bugs

Any experienced tester will tell you that creating useful quantities of test data is hard work. As a result, one of two things frequently happens. Sometimes the test data is created specifically for the tests. This is great for proving the various happy path tests, but this artificially created data often doesn’t reflect the myriad different types of data that the system will have to deal with in real life. Alternatively, the only data available is a copy, or more likely a subset, of the live data. It can often be hard to get the specific data sets you need to test all your edge cases. In both cases, it’s easy to find there are tests you might want to run, but can’t.

Another pitfall is to assume that all the data in the production environment is of good quality and is complete. All too often it isn’t. How will your system cope when it encounters bad data?

Environmental issues

Have you ever worked anywhere with stable and plentiful test environments that accurately represent the live environment? Me neither. Just because something works in your test environment doesn’t necessarily mean it will work in your production environment, where the config and architectural topography are likely to be very different. For example, connectivity between the different systems in your test environment might be dead easy because it is a virtualised environment sitting in one “de-militarised zone”, but your live environment is likely to have all sorts of firewalls.

I’m sure you’ll think of many more places to look. Please leave your ideas in the comments below!

Bug hunting - Know your prey

It’s easy for testers to get distracted by processes or the latest tools but really, it’s all about finding bugs. Lovely, horrible bugs. So what do we know about them?

Bugs live in hives

Bugs are sociable creatures. Where there is one, there is very likely to be a whole family of them huddled together in the dark. A tester who finds a bug is well advised to dig a little deeper or push a little harder in the same area and see what else is lurking. He will often be rewarded with more finds.

Bugs are sociable creatures

Why does this happen?

  • The area of the application was particularly complex and fiddly to develop, increasing the likelihood of errors.
  • The component under test is dependent on other components that are themselves buggy.
  • This part of the code was written in a hurry. Possibly just before a deadline was due. Possibly on a Friday afternoon. Possibly at three in the morning. All these things would make human error more likely.
  • The developer may have misunderstood the requirement. If one part of a requirement has been misunderstood then it’s quite likely that other parts of it have been misunderstood too.

They are hard to kill

Just because a bug has been found and fixed doesn’t mean it can be forgotten about. It’s very easy for bugs to re-spawn even after they’ve been squished by a code fix. For example, if the fix for a bug was only patched into a test environment rather than being properly included in the code base and deployed, subsequent versions of the code won’t contain the fix and the bug will regress. If the tests are manual, the tester needs to make sure all his bug fixes from at least two or three versions of code back have stayed fixed, by including the tests for them in his regression test pack. This is where automated tests are great, but only if either an automated test found the bug in the first place, or a new automated test has been written specifically to test for the bug.
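Such a regression test might look like the following sketch, where the Basket class is a minimal hypothetical stand-in for the fixed code and the bug number is invented: naming the test after the bug and running it on every build means a regressed fix turns the build red immediately.

    class Basket:
        """Minimal hypothetical stand-in for the code that was fixed."""

        def __init__(self):
            self.total_pence = 0
            self._discount_applied = False

        def add_item(self, price_pence: int):
            self.total_pence += price_pence

        def apply_discount(self, code: str):
            # The fix: the discount may only ever be applied once.
            if code == "SAVE10" and not self._discount_applied:
                self.total_pence = self.total_pence * 90 // 100
                self._discount_applied = True

    def test_bug_1234_discount_is_not_applied_twice():
        basket = Basket()
        basket.add_item(price_pence=1000)
        basket.apply_discount("SAVE10")
        basket.apply_discount("SAVE10")   # the second call triggered the bug
        assert basket.total_pence == 900  # 10% off, applied exactly once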

Perhaps controversially, in my experience automated tests are not very good at finding bugs because they are designed to pass rather than designed to fail. Look out for another post on this topic later, so save your hate mail until then please.

They are seldom where you expect them to be

Many bugs that evade testers are a result of a failure of test analysis rather than a failure of test execution.

Testers seldom make mistakes and miss bugs when they are executing test scripts (although of course it does happen). This means that if a tester has identified a test, designed it and executed it, he will usually find the bug if there is one lurking in the code. If he’s looking for it, he’ll probably find it.

Missed bugs are most often a failure of test analysis rather than a failure of test execution.

Most bugs that get missed are where the tester didn’t even think to look. This is really frustrating for testers, especially when the bugs would have been really easy to find if they had even had a quick look in the affected area. It’s also frustrating for everyone else, because people start thinking the testers can’t be relied upon to find even simple bugs.

The analogy here is someone who has lost their kitten. They hunt high and low, looking in all the cupboards, checking the drains outside, checking in the attic, and eventually even taking up their floorboards and dismantling the staircase to try and find the kitten. In short they do a very thorough job of destroying their house in their hunt for the kitten which is actually with the next door neighbours enjoying a saucer of milk.

The lessons to be learned

Coverage is king. It is more important to get your test coverage right than to perfect the detail of each test.

Be prepared to change the focus of your testing as you learn more about the thing you are testing. Don’t waste your time slavishly running thousands of low-priority tests which all seem to be passing when you are suspicious about an area that has been shown to have bugs.

Make sure any defects you find are covered by tests in your regression suite, and make sure you run that suite often.

If you’re reviewing tests (either your own or other people’s), spend less time reviewing the tests that have been written, and more time reviewing the tests that haven’t been written. This is where the bugs will be.

Have your critical thinking reviewed by your peers and have heated discussions with them about where to spend your time.

In my next blog in this category, I’ll discuss the different sorts of places where bugs like to hang out.
