Link: http://www.testingreflections.com/node/view/3150
Writing test cases, as almost the only widely accepted and QC-specific idea/technique, has been an object of my wonder since I started in the testing field. Now, after almost 10 years, I at last have some understanding of it. Today I would say that I am an advocate of exploratory testing, though I actually was one even before I learned the term and the idea. And still, writing test cases makes sense in a lot of cases; it is just wrong to believe it is a silver bullet in every context.
Last year I compared test cases to a shield and testing itself to a sword. I still believe that creating test cases can serve two purposes/goals:
1) Test cases are part of the deliverable to the customer. The goal in this case is credibility. This is typical at the UAT (acceptance) level.
2) Test cases are for team-internal use only, typically at the system test level. Testing efficiency should be the goal in this case. The idea is to write test cases based on the design while the code is incomplete, so that we can test the product quickly once the code is ready.
With the move to more agile development, the second case begins to fail. I have seen this happen in my company, and I have read posts about it happening in other companies as well. It appears to end up in one of the following ways:
a) Test cases are used internally, but the goal is credibility, not efficiency. This also means the test cases are dramatically reworked during test execution.
b) Exploratory-type testing takes place; only specific regression test cases are written during the exploratory testing or afterwards.
c) Exploratory-type testing takes place; no test cases are written at all.
I will not investigate type a) further, as it is simply evidence of a weak test manager: he was unable to convince management that this is an ineffective use of resources. Sometimes test cases are also created only to have something to report test progress against, like "we have 80% of test cases written and 70% of them passing." I have already attacked this approach and will keep doing so as much as possible. It is the most typical mistake to measure quality by the number of open defects and test progress by the number of test cases; I suggest everyone read James Bach to learn why.
Cases b) and c) are both OK; the choice depends on whether we will need reusable test cases. To be honest, I believe that written regression test cases and automated test scripts have a lot in common. I would even say there are three levels:
I Pure exploratory testing
II Executing a written test case
III Executing an automated test script
The design time increases from top (I assume it is 0 for exploratory tests) to bottom, while the test execution time decreases. However, the scope of defects that can be found also decreases: automated tests, for example, will only validate what you scripted them to validate, which means you have to forecast which defects may appear. During manual testing, by contrast, you may notice indirect evidence of some defect. Moreover, the more detailed a test case is, and the more times a tester has already executed it (and so the faster he runs it now), the less likely he is to find those indirect validation problems.
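To make level III concrete, here is a minimal sketch in Java (the `PriceCalculator` class and its tax rule are hypothetical, invented only for illustration) of an automated check that asserts exactly what it was scripted to assert and nothing more:

```java
// Sketch of a "level III" automated check.
// PriceCalculator is a hypothetical class used only for illustration.
public class PriceRegressionCheck {

    // Hypothetical production code under test.
    static class PriceCalculator {
        double totalWithTax(double net, double taxRate) {
            return net * (1.0 + taxRate);
        }
    }

    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();

        // The script validates ONLY the defect we forecast: a wrong total.
        double total = calc.totalWithTax(100.0, 0.20);
        if (Math.abs(total - 120.0) > 1e-9) {
            throw new AssertionError("expected 120.0, got " + total);
        }

        // A human tester running the same scenario might also notice
        // a truncated label, a two-second delay, or a warning in the log.
        // This script never will: it checks one number and stops.
        System.out.println("PASS");
    }
}
```

The script happily reports PASS as long as that one number is right, which is exactly the narrowing of defect scope described above.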
So much for theory; now a little bit of practice. What I do in a new project is the following:
First of all, I find any UI automation for the first release of a product to be useless. This may be different for one-release projects, but I don't have experience with those. Of course, unit tests such as JUnit tests exercising specific API functions make sense and are ideally created by developers, but sometimes testers may help with that.
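As a sketch of what such a unit-level check of a specific API function might look like (written as plain Java in the JUnit spirit so it runs standalone; the `parseMajorVersion` function is hypothetical, invented only for this example):

```java
// Sketch of a unit-level check of one API function, in the JUnit spirit.
// parseMajorVersion is a hypothetical function under test.
public class VersionParserTest {

    // Hypothetical API function: "1.2" -> major version 1.
    static int parseMajorVersion(String version) {
        int dot = version.indexOf('.');
        String major = (dot < 0) ? version : version.substring(0, dot);
        return Integer.parseInt(major);
    }

    public static void main(String[] args) {
        // Typical input.
        assertEquals(1, parseMajorVersion("1.2"));
        // No dot at all.
        assertEquals(10, parseMajorVersion("10"));
        // Multi-digit major version with several components.
        assertEquals(12, parseMajorVersion("12.0.3"));
        System.out.println("all checks passed");
    }

    static void assertEquals(int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}
```

In a real project the same checks would live in a JUnit test class maintained by the developers, and a tester's contribution is usually the edge cases, not the harness.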
Next, I don't write ANY test cases during the testing cycle any more. I only update the Test Plan, which by the end of the release has a very detailed "features tested" list with some hints and notes about features not working, plus bug IDs. Just after the release I create a test case document detailing how to invoke each feature, what input the feature expects, etc. It is a little bit like documentation, but with a different goal/approach: the goal is to make regression test execution as fast as possible. For example, I attach the data to be imported wherever possible to reduce data preparation time, and I don't bother describing why I use exactly that data (no time for that); I explain in detail how to perform the most trivial use case, and the tester (unless a newbie) can add details such as error handling using his brains.
I try to use a Testing Dashboard as a replacement for a formal test report with test cases executed/passed/failed/not executed. Sometimes I just communicate progress informally as my "gut feeling", and this is actually what the PM wants to know, not the number of test cases.