We’ve been sold a dream about test automation that it has become deeply unfashionable and almost taboo to question. Whilst test automation undoubtedly has its place, testers should be candid about its limitations. Only then can we apply test automation where we will get the most from it.
Automated tests don’t tend to find very many bugs
Automated testing is best suited to the least interesting kind of tests. The tests are typically highly repetitive and test one thing at a time on a completely stubbed out system with artificial test data and nothing else running on the environment. There is value in running these tests, but as we know from my previous post on where to find bugs, this is only a small fraction of the different places where bugs tend to cluster.
In my experience, the vast majority of automated tests that fail do so because of environmental, data or connectivity problems, or because the test itself is unreliable.
Only seldom do automated tests fail because they have found a bug.
Automated tests are designed to pass, not to fail
A failed automated test is a bad thing. It means an embarrassing red background on a build monitor and time spent finding excuses for why the test failed. It means maintaining the test, or in other words changing it so it passes. Consequently, automated tests are designed to run reliably. This is not necessarily the same thing as a test that is designed to exercise the system and find a bug if there is one.
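To make this concrete, here is a minimal sketch (the `apply_discount` function and both tests are hypothetical, invented for illustration) of the difference between a test designed to run reliably and one designed to find a bug. The first version wraps everything in a broad try/except so that any failure is silently swallowed and the build stays green; the second makes the same check without the safety net.

```python
def apply_discount(price, percent):
    # Illustrative code under test: returns the discounted price.
    return price * (1 - percent / 100)

def test_discount_designed_to_pass():
    # Looks protective, but the broad except means this test can
    # never fail -- even if apply_discount raises or returns nonsense.
    try:
        assert apply_discount(100, 10) == 90
    except Exception:
        pass  # any failure is silently ignored; the build stays green

def test_discount_designed_to_fail():
    # The same check with no safety net: it fails loudly on a real bug.
    assert apply_discount(100, 10) == 90
```

In practice the pattern is rarely this blatant, but over-generous retries, loose assertions and swallowed timeouts have the same effect: a suite optimised to stay green rather than to exercise the system.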
Automated tests are just more code
Test code is just as liable to contain bugs as the code it’s trying to test, if not more so. Developers don’t make good testers, and testers don’t make good developers. Some people like to build things; other people like to smash things up. It will take a tester longer to write the code, and it will often be verbose, inefficient and prone to failure.
Automated tests turn experienced test analysts into junior developers, and sell this as progression
Using and maintaining automated tests is really boring
The dream we’ve been sold is that once our automated tests are written, they run and run in perpetuity. This is simply not true. A failing automated test ought to initiate an immediate response to work out what failed and why, to get the defect fixed, to get the test updated if necessary, and re-run. Running automated tests, analysing the results and maintaining the tests takes a lot of time, and it’s not interesting work for skilled testers. It doesn’t exercise any of their creative and destructive skills. So we hire excellent testers who are well equipped to aggressively test our products, and we turn them into servants of the build machine, desperately trying to feed it tests that it will not fail. This is a waste of talent and money.
It takes ages to write automated tests
I find it dismaying when I see huge amounts of time being spent persuading a small number of automated tests to run when this time could be spent testing some software. All testers know that the earlier in the software development process bugs are found, the easier, cheaper and quicker they are to fix. And yet we see so often that testers’ first action on being presented with a new feature set is to start writing test code. I want to see testers all over that new feature like ants on a dropped lollipop. Instead, I see testers meticulously designing tools to lick the lollipop for them.
Everything changes all the time
The nature of agile development is that the rate of change of the application is very high. Huge proportions of the code base can be replaced, added to, dramatically changed or even removed with each iteration. We no longer operate in a world where we build a component, test it, then never touch it again for months and happily let automated regression tests fuss over it every now and again. Again, this means a very high maintenance overhead for automated tests, because large numbers of them will need maintaining all the time. I’m sure we’ve all seen testers who struggle to find the balance between testing the stories in their current sprint adequately, whilst automating the stories from the previous sprint, whilst updating and fixing the automated tests from two sprints ago, whilst trying to run all these automated tests and analyse the results. These poor testers get further and further behind with their automation whilst spending more and more time on it and less and less time testing. The end game is that very few of the automated tests run usefully, and the tester has no time to do any other manual testing. This sucks.
Repetition should not breed confidence
The most junior tester who’s learned about equivalence partitioning will tell you that it’s a waste of time to test every value in the same partition, because they are, by definition, equivalent. And yet we see automated tests do exactly that.
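A minimal sketch of that point, using a hypothetical `is_adult` function invented for illustration: the values 18, 25, 40 and 65 all sit in the same equivalence partition, so running all four adds repetition, not coverage.

```python
def is_adult(age):
    # Illustrative code under test: one boundary at 18.
    return age >= 18

# Redundant: four tests, one partition -- they can only fail together.
for age in (18, 25, 40, 65):
    assert is_adult(age)

# Better: one representative per partition, plus the boundary itself.
assert not is_adult(17)   # partition: minors
assert is_adult(18)       # boundary value
assert is_adult(100)      # partition: adults, far side
```

Yet automated suites routinely grow by adding more values from the same partition, because each new parameter row is cheap to write and makes the suite look more thorough than it is.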
The same test passing 10,000 times brings only a paper-thin veneer of confidence.
Imagine you had an automated test that ran four times a day for six months and passed every time. What does this really tell you? Does it tell you the software is perfect, or does it tell you the test is lightweight, or does it tell you it’s not necessary to run that test so often and you should consider writing a different test instead?
Tools are passing fashions
Tool choice breeds resentment like almost nothing else in the world of IT. Some people have an almost religious fervour about the tools they love and the tools they have decided to hate, often for arbitrary reasons. One tester may spend six months writing tests using one tool, only for those tests to be ignored by his successor, who prefers a different tool. What a waste! Again, all this time could have been spent finding bugs rather than writing disposable code.
The best testing tool ever made is the human brain
So you’re saying we shouldn’t bother with automation?
No, I’m saying we should do automation properly. When done well by the right people with the right attitude towards both testing and development, it can be brilliant. But sadly much of it is a long way from being brilliant. I think we as testers should ask ourselves very seriously whether it’s the best way to be spending so much of our time. Here are my recommendations:
- Get people doing what they’re good at. Have experienced testers do test analysis and design, and get junior developers to work with them and write the test code and run and maintain the tests. Once an automated test suite gets above a certain size, it can easily be a full time job maintaining, managing and using it. Managers need to acknowledge the need for this headcount.
- Be very picky about what is worth automating and what isn’t, and be honest about the true cost and value of the resultant automated tests.
- Understand what automated unit or component tests have been written by the developers and guide them in making these tests effective. Don’t try to duplicate them with automated functional tests.
- Be highly aware of time you spend that is not spent actually doing testing. Keep a log of this and show it to your manager.
There is a LinkedIn discussion on this post here.