One of the most important skills a test analyst can have is the ability to write good test scripts. The main point is this: you must write your tests assuming that someone else will have to execute them.
Imagine you have just started a new job, and you inherited a suite of a thousand tests from your predecessor. After some initial training on the application, you are asked to do a regression test by executing all the scripts you have inherited. But when you try to execute the tests, you are unable to do so because:
- The tests assume that you know things that you don’t.
- The tests are written in sloppy shorthand English and are difficult to understand.
- The tests mention things that don’t seem to exist in the application, leading you to believe the tests are out of date.
- Many of the tests appear to be the same.
- All the tests are marked as High priority.
With test cases in this state, is it reasonable to expect you to effectively test the system? Is it likely you will be able to execute the tests in the same amount of time it used to take your predecessor? No.
If the guidelines for writing test cases can be summarised in one simple point, it is this:
Write your tests assuming that someone else will have to execute them.
In this post I make no distinction between manual and automated test scripts: exactly the same rules apply. Tests are tests, irrespective of whether you execute them manually, someone else executes them manually, or a computer executes them for you.
In this first blog in a series of two, I will talk about the ISEB/ISTQB characteristics of good test cases. In the second post, I will expand and improve on these with my own set of characteristics.
ISEB’s four E’s
The ISEB Foundation in Software Testing syllabus lists the characteristics of a good test case as follows:
- Evolvable
- Economic
- Effective
- Exemplary
Let’s examine each of these.
Evolvable
Software changes over time as new features are added and bugs are fixed, so our tests also need to be updated to reflect the changes in the software.
Because maintaining test cases is time-consuming (and therefore expensive) and uninteresting, they should be written in such a way that they need little, if any, maintenance. Any maintenance they do need should be easy to do.
Ideally, once you’ve written your test you should be able to execute it again and again, without ever having to edit it. If you do have to edit the test each time you execute it, the test is not “evolvable”, and you will waste lots of time editing it.
Tests that are “evolvable” are sometimes described as “future proof”, “low maintenance” or “repeatable”.
You can make your tests evolvable by considering the following:
- Use templates rather than copying steps.
  - A group of tests will often share the same pre-requisites. Copying the same pre-requisites into each test is boring, a waste of time and possibly inaccurate, so write them once and create a template. Reference that template wherever you need it. You now only have to maintain the one template, not several copies of the same pre-requisites.
  - If you know a series of steps will be repeated in many tests, create them as a template too.
- Avoid being unnecessarily specific about things that may change, and don’t be more detailed than you need to be. For example:
  - “Step 3 – Click the blue button labelled ‘Submit’ in the bottom right-hand corner of the screen” – this is not evolvable, because a design change might make the button green or re-label it “Continue”. If that happens, your test would need to be maintained if it is to stay valid.
  - “Step 3 – Submit the form” – this is evolvable because no matter what colour the button changes to or what it is labelled, the test will still be valid. The test doesn’t care what or where the button is, as long as the form can be submitted, which is what is important.
- Avoid referring explicitly to elements of the page that are parameter-driven or dynamic. For example:
  - “If the user is inactive for more than 30 minutes, he is timed out and must re-authenticate.” – This is not evolvable, because the time-out limit is likely set by a parameter. If the parameter is changed to 15 minutes, the test becomes invalid.
  - “If the user is inactive for longer than the limit set in the time-out parameter, he is timed out and must re-authenticate.” – This is evolvable, because the test is still valid whatever the parameter is set to. (In this instance it would be a good idea to mention where the parameter can be seen, and set.)
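To make the template idea concrete, here is a minimal sketch in Python: the shared pre-requisites are written once, as a function every test references, instead of being copied into each script. All names here are hypothetical.

```python
# Sketch: shared pre-requisites expressed once as a reusable "template".
# Hypothetical names; the real steps depend on your application.

def logged_in_session(role="standard_user"):
    """Template: pre-requisites shared by a whole group of tests.

    Editing this one function updates every test that references it,
    instead of maintaining the same copied steps in each script.
    """
    return {"user": role, "logged_in": True, "basket": []}

def test_basket_starts_empty():
    session = logged_in_session()    # reference the template...
    assert session["basket"] == []   # ...then do the test-specific checks

def test_add_item_to_basket():
    session = logged_in_session()
    session["basket"].append("item-123")
    assert session["basket"] == ["item-123"]
```

The same principle applies to manual scripts: one named template of pre-requisite steps, referenced by many tests.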
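The “Submit the form” principle carries over directly to automated tests: hide the volatile detail (button colour, label, position) behind one helper, so only the helper needs maintenance when the design changes. A sketch, with hypothetical names:

```python
# Sketch: a page-object-style helper that is the one place knowing
# how the form is submitted. FormPage and its methods are hypothetical.

class FormPage:
    """The one place that knows how the form is submitted."""

    def __init__(self):
        self.submitted = False

    def submit(self):
        # If the button turns green or is re-labelled "Continue",
        # only this method changes; tests calling submit() stay valid.
        self.submitted = True

def test_form_can_be_submitted():
    page = FormPage()
    page.submit()          # "Step 3 – Submit the form"
    assert page.submitted
```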
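The time-out example above can be sketched the same way: the check reads the limit from the same parameter the system uses instead of hard-coding 30 minutes. `SYSTEM_CONFIG` is a hypothetical stand-in for wherever your system actually stores the parameter.

```python
# Sketch: an evolvable time-out check driven by the system parameter.

SYSTEM_CONFIG = {"session_timeout_minutes": 30}  # hypothetical parameter store

def is_timed_out(idle_minutes, config=SYSTEM_CONFIG):
    """True if the user has been idle longer than the configured limit."""
    return idle_minutes > config["session_timeout_minutes"]

def test_timeout_follows_parameter():
    # Still valid whatever the parameter is set to.
    limit = SYSTEM_CONFIG["session_timeout_minutes"]
    assert not is_timed_out(limit)      # at the limit: still logged in
    assert is_timed_out(limit + 1)      # past the limit: timed out
```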
Economic
Tests should be written so they are quick, easy and cheap to execute. Any test that takes a long time, or is difficult or complex to execute, is unlikely ever to be executed properly. A test that is simple and quick, can be done at any time and is repeatable is likely to be useful and executed willingly.
For example, imagine you needed to test a batch job that runs every night at midnight. Would you want to write your test so that it needed the tester to be in the office at midnight in order to see the batch job running? This would not only make the tester unhappy, but it is also a test that can only be run once every 24 hours.
It would be far better to write the test in such a way that the tester could kick off the batch job at any time, (possibly by setting the server time to 23:59, or by manually intervening and forcing the batch job to run) which would then allow the test to be run during normal working hours, and be run as many times as necessary, and whenever it is needed.
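One way to achieve this, sketched in Python: separate the work the job does from the schedule that triggers it, so the job can be invoked on demand. `run_nightly_batch` and its processing are hypothetical.

```python
# Sketch: the midnight job's work, callable at any hour instead of
# waiting for the scheduler. Hypothetical names and processing.

def run_nightly_batch(records):
    """The work the midnight job performs, separated from its schedule."""
    return [r.upper() for r in records]  # stand-in for the real processing

def test_batch_job_any_time_of_day():
    # Runs in seconds, during office hours, as often as needed.
    assert run_nightly_batch(["a", "b"]) == ["A", "B"]
```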
You can also make your tests economic by ensuring you have control over system parameters. To refer back to the earlier example of the system time-out: rather than setting the time-out to 30 minutes, you could set it to 2 minutes. The person executing the test then does not have to sit around being unproductive for 30 minutes waiting for his session to time out. He would only have to sit around being unproductive for 2 minutes!
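The idea can be sketched as follows: because the check takes the limit as a parameter rather than hard-coding it, the tester can point it at a much smaller value during the test. Names are hypothetical.

```python
# Sketch: an economic time-out test using a reduced parameter value.

def is_timed_out(idle_minutes, timeout_minutes):
    """True if the idle time exceeds the supplied limit."""
    return idle_minutes > timeout_minutes

def test_timeout_with_reduced_parameter():
    reduced_limit = 2                         # parameter set to 2 minutes
    assert is_timed_out(3, reduced_limit)     # a 2-minute wait, not 30
    assert not is_timed_out(1, reduced_limit)
```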
Effective
If a test passes without finding a bug, we can be either:
a. Confident there is not a bug
b. Concerned that our test was not effective
So how can we be confident rather than concerned? By making our tests “effective”! If there is a bug, an effective test will find it. A good way of gaining confidence in the system is to try hard to find bugs, and fail to find any.
A common mistake is to write only tests that examine whether the system does what it is meant to do. Tests of this nature can be derived simply by translating the requirements document into more test-like language. These tests may well be valid, but without tests that also examine how the system behaves when asked to do what it is not supposed to do, you will not have an effective set of tests.
Consider the following to make your tests effective:
- Think about what bugs there could be, and write tests that would find them:
  - What bugs have been found in similar applications?
  - Risk-based testing techniques
  - What bad things could happen to this piece of functionality, and how does the software behave when they do happen?
  - Boundary value analysis / equivalence partitioning
  - Negative tests
  - Branch / statement coverage
  - …and all the other testing techniques.
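As one illustration of the techniques listed above, here is a boundary value analysis sketch for a hypothetical field that accepts whole numbers from 1 to 100: test on and either side of each boundary, including the negative cases.

```python
# Sketch: boundary value analysis as a table of (input, expected) pairs.
# accepts() is a hypothetical validation rule standing in for yours.

def accepts(value, low=1, high=100):
    """Hypothetical validation rule under test."""
    return low <= value <= high

BOUNDARY_CASES = [
    (0, False),    # just below the lower boundary (a negative test)
    (1, True),     # on the lower boundary
    (2, True),     # just above the lower boundary
    (99, True),    # just below the upper boundary
    (100, True),   # on the upper boundary
    (101, False),  # just above the upper boundary (a negative test)
]

def test_boundaries():
    for value, expected in BOUNDARY_CASES:
        assert accepts(value) == expected, value
```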
Exemplary
It seems that, in an attempt to make all these characteristics start with the letter E, the final characteristic, “Exemplary”, is somewhat forced. ISEB’s interpretation of “Exemplary” is: “A good test case is exemplary, meaning that it does more than one thing for us (is capable of finding more than one fault.)”
There are two arguments against this:
This is not what exemplary means. To summarise entries from various dictionaries, exemplary means serving as an illustration, specimen or model, and being worthy of imitation. It does not mean “does more than one thing”.
If we ignore the label “Exemplary” and focus on the definition “is capable of finding more than one fault”, we still have a problem. If a good test is capable of finding more than one fault, then it is not going to be a specific test designed to find a particular bug. When a test fails, it should be explicit what the problem is; but if the test could cause all sorts of things to fail, how do we know which of them failed and which passed? If you take the ISEB recommendation literally, you could reasonably end up writing one enormous test that covers the entire system, justifying your lunacy by claiming that if a good test is capable of finding more than one fault, a truly wonderful test would be capable of finding all the faults, so we should have as few tests as possible, each covering as much ground as possible, the result being one huge test. This would clearly be undesirable, to say the least.
If, however, we interpret “Exemplary” as meaning that you should be able to take each test and hold it up as an example of how all other tests should be written, then I am somewhat happier. However, the previous three E’s are intended to be the “means” to the “end” of achieving a good test, whereas exemplary is definitely an “end”. I would have no objection to saying the following:
“If your tests are Evolvable, Economic and Effective, then they will be Exemplary!”
This is both a correct use of words, and good advice.
We can do better than the four Es
The four Es are a good starting point, but read my next post for my improvement on this.