I want to automate as many tests as I can. I’m not comfortable running a test only once. What if a programmer then changes the code and introduces a bug? What if I don’t catch that bug because I didn’t rerun the test after the change? Wouldn’t I feel horrible?
Well, yes, but I’m not paid to feel comfortable rather than horrible. I’m paid to be cost-effective. It took me a long time, but I finally realized that I was over-automating, that only some of the tests I created should be automated. Some of the tests I was automating not only failed to find bugs when they were rerun, they had no significant prospect of doing so. Automating them was not a rational decision.
The question, then, is how to make a rational decision. When I take a job as a contract tester, I typically design a series of tests for some product feature. For each of them, I need to decide whether that particular test should be automated. This paper describes how I think about the tradeoffs.
Scenarios
In order for my argument to be clear, I must avoid trying to describe all possible testing scenarios at once. You as a reader are better served if I pick one realistic and useful scenario, describe it well, and then leave you to apply the argument to your specific situation. Here’s my scenario:
1. You have a fixed level of automation support. That is, automation tools are available. You know how to use them, though you may not be an expert. Support libraries have been written. I assume you’ll work with what you’ve got, not decide to acquire new tools, add more than simple features to a tool support library, or learn more about test automation. The question is: given what you have now, is automating this test justified? The decision about what to provide you was made earlier, and you live with it.
In other scenarios, you might argue for increased automation support later in the project. This paper does not directly address when that’s a good argument, but it provides context by detailing what it means to reduce the cost or increase the value of automation.
2. There are only two possibilities: a completely automated test that can run entirely unattended, and a "one-shot" manual test that is run once and then thrown away. These are extremes on a continuum. You might have tests that automate only cumbersome setup, but leave the rest to be done manually. Or you might have a manual test that’s carefully enough documented that it can readily be run again. Once you understand the factors that push a test to one extreme or the other, you’ll know better where the optimal point on the continuum lies for a particular test.
3. Both automation and manual testing are plausible. That’s not always the case. For example, load testing often requires the creation of heavy user workloads. Even if it were possible to arrange for 300 testers to use the product simultaneously, it’s surely not cost-effective. Load tests need to be automated.
4. Testing is done through an external interface ("black box testing"). The same analysis applies to testing at the code level - and a brief example is given toward the end of the paper - but I will not describe all the details.
5. There is no mandate to automate. Management accepts the notion that some of your tests will be automated and some will be manual.
6. You first design the test and then decide whether it should be automated. In reality, it’s common for the needs of automation to influence the design. Sadly, that sometimes means tests are weakened to make them automatable. But - if you understand where the true value of automation lies - it can also mean harmless adjustments or even improvements.
7. You have a certain amount of time to finish your testing. You should do the best testing possible in that time. The argument also applies in the less common situation of deciding on the tests first, then on how much time is required.
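Before working through the details, it may help to see the shape of the tradeoff in miniature. The sketch below is my own naive break-even model, not a formula from this paper: it assumes automating pays off only when the extra upfront scripting cost is recouped over the reruns the test will actually get, discounted by how likely those reruns are to find anything new. The function name and parameters are illustrative inventions.

```python
def should_automate(automation_cost, manual_cost, expected_reruns, rerun_value=1.0):
    """Naive break-even sketch: is automating a better use of time than
    running the test manually once and throwing it away?

    automation_cost: hours to script the test so it runs unattended
    manual_cost:     hours to run the test by hand, once
    expected_reruns: how many times the automated test will be rerun
    rerun_value:     0..1 discount for reruns unlikely to find new bugs
    """
    # Manual one-shot: pay manual_cost, get the test's value once.
    # Automated: pay automation_cost up front, then reap (discounted)
    # value on every rerun at essentially no extra cost.
    automated_value_per_hour = (1 + expected_reruns * rerun_value) / automation_cost
    manual_value_per_hour = 1 / manual_cost
    return automated_value_per_hour > manual_value_per_hour

# A test that is cheap to run by hand, expensive to script, and unlikely
# to catch anything on rerun is better left manual:
print(should_automate(automation_cost=10, manual_cost=1,
                      expected_reruns=20, rerun_value=0.1))   # prints False

# The same test, if its reruns kept their full bug-finding value,
# would be worth automating:
print(should_automate(automation_cost=10, manual_cost=1,
                      expected_reruns=20, rerun_value=1.0))   # prints True
```

The interesting term is `rerun_value`: the rest of this paper is, in effect, about why that discount is usually much smaller than testers assume, and what drives the other costs.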