It’s always fun to run a Selenium automation demo for those who have never seen it in action. It looks as if an invisible robot is sitting in front of the screen manually running through a test suite at blazing speed.
Observers from management or HR might look at the invisible tester blasting through a test suite and be tempted by its cost-cutting potential. “Hmmm,” they might think. “Why am I paying for all this QA staff when this magical Selenium thing can run the tests more efficiently and accurately, with no personnel costs at all?”
Everyone has seen impressive videos of robots in manufacturing – machines bolted next to an assembly line where a human assembler used to stand, welding and riveting at amazing speed. Surely, the same efficiencies can be gained by applying virtual “robots” to software testing, right?
This kind of thinking can lead to trouble if, as has happened at more than a few companies, HR and company leadership don’t understand the objectives and strategies of automated software testing. Trimming QA staff on the assumption that Selenium automation is a direct substitute for human testing can be a costly mistake.
The problem with this cost-cutting logic is that test automation in software development is fundamentally different from automation in manufacturing. Over the lifecycle of any software product or application, code becomes increasingly complex, with many layers of dependencies pointing in many directions. As a result, the whole test process becomes heavily dependent on human testers who are thoroughly familiar with the product. These testers draw on their judgement, experience, and savvy to predict the probable weaknesses of a new feature set and focus their testing efforts accordingly. This process works precisely because it relies on human insight – and that insight cannot be automated.
Unfortunately, no amount of judgement and experience can predict all the ways that a build for a new feature set might fail. Things often break where you expect them to – but they also break in ways that can’t be logically anticipated. Manual testing is a poor methodology for finding these “edge case” defects simply because it takes too long to run comprehensive test suites by hand. But this is exactly when test automation becomes really useful – as a complement to human testing rather than a replacement for it.
Automated tests can thoroughly and “automagically” exercise all the nooks and crannies of a product that are assumed to be stable, so that human testers can stay focused on new features and functionality. Thus, the combined efforts of humans and “robots” produce test plans of greater depth, breadth, and consistency than either methodology could ever achieve alone.
So don’t be fooled by the false choice between human and automated testing – or be tempted to cut your QA staff loose because you have added automated testing to your repertoire. You will still need human judgement, experience, and savvy to test all of the new features of your product and to help extract the real benefits of automation.