Imagine you have two testers in front of you, both giving you a report on the testing they have just done. Whose view do you trust most?
“The application is completely tested and all the bugs have been found and fixed!”
“All the tests that were planned have been run, and all the bugs that were found have been fixed.”
On the surface they sound like they’re saying a similar thing, but they’re not. The first statement is potentially dangerous, even though it sounds bolder and more confident than the second, slightly weaselly-sounding statement.
The first tester is telling us the following:
- He has tests that cover absolutely every possible part of the application and its integrations. Every possible use case is covered, all possible positive and negative tests are covered, every possible performance, load and stress test is done, and every possible UAT, accessibility and security scenario is covered by the tests. His planned test coverage is exhaustive, even though we all know exhaustive testing is practically impossible.
- His tests of all types were so well designed that they would have exposed any and all defects.
- Every single test of every single type and priority has been run perfectly and all defects exposed have been tracked, fixed and re-tested perfectly.
Do you believe him? I don’t. What is far more likely is that he has some tests that cover some fraction of the potential coverage; they’ve been run and they found some bugs that have since been fixed. He now has a suite of passing tests that he is misinterpreting as meaning the software is bug free. He is placing far more confidence in those passing tests than they deserve. In short, he is giving a test report based on a misinterpretation of incomplete data, his own aspiration and ambition, and faith that everything will be OK.
Fortunately we seldom hear people talking like our first tester because junior testers learn pretty quickly that saying things are bug free isn’t accurate or helpful and almost guarantees future embarrassment. But do we apply the same rigour to our automated tests? When your build monitor or test report dashboard is reporting 100% of tests passed, do you allow yourself to interpret those results as meaning the system is bug free? Whilst we can get confidence from automated test results if we know exactly what they are doing, we must be careful not to place more trust in them than they deserve. Have you ever heard someone say “I don’t know how that bug got into production. All two thousand of the automated tests passed.”?
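To make that concrete, here is a minimal, hypothetical sketch (the function and test names are invented for illustration): a happy-path test suite that passes completely while an obvious defect goes undetected, because no test ever exercises the input that triggers it.

```python
# A hypothetical pricing function with a latent bug: nothing caps the
# discount, so a discount over 100% yields a negative price.
def apply_discount(price: float, percent: float) -> float:
    return price - price * (percent / 100)

# The "suite": every test here passes, yet the defect ships.
# A green dashboard only tells you about the inputs you thought to try.
def test_apply_discount_happy_path():
    assert apply_discount(100.0, 10.0) == 90.0
    assert apply_discount(200.0, 50.0) == 100.0

test_apply_discount_happy_path()
print("All tests passed")  # 100% green -- but apply_discount(100.0, 150.0) == -50.0
```

The point is not that the tests are wrong, but that “all tests passed” is a statement about the tests that exist, not about the software as a whole.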
Compare this with our second tester. She is thinking like a scientist. She recognises that her test coverage will never be complete or perfect and she’s reporting on the empirical evidence that has been gathered. Empirical means acquired through observation or experimentation – making claims without experimental evidence is science fiction, not science. Her statement demands sensible follow up questions about which tests have been run and perhaps more importantly which tests haven’t been run.
A general rule in life, not just testing, is that the more utterly convinced someone is of something extraordinary, despite any shortcomings in evidence or even contradictory evidence, the more sceptical you should be about it. Extraordinary claims require extraordinary evidence, and this applies to testing just as much as to science, religion or politics.
A tester must be a scientist and only use the available evidence to form his conclusions. He must be ready to abandon or adjust those conclusions when new evidence emerges. Imagine a tester comes to you and says, “We thought the application was pretty solid and we were just about to sign it off when a UAT person stumbled across a really nasty defect. This sucks, and we’re going to have to go back and re-assess our tests and even our whole test approach. It also raises some questions about the quality of that bit of the application. If there’s one defect like that, there might be other things too.” You might respond to this in two ways: you can either chastise the tester for having rubbish tests and leaving it until UAT before critical bugs are found (i.e. tell him off because he hasn’t told you what you want to hear), or you can realise that he is exactly the kind of person you want in your team, and openly support him in improving the team’s testing.