See the link:
http://www.testingreflections.com/node/view/3675
Most definitions of testing say that its goal is requirements verification. The PMBOK says that on any project there are both identified requirements (needs) and unidentified requirements (expectations); both are The Requirements, and both should be tested. While testing identified requirements mostly means happy-path testing, I believe what we Testers do is mostly expectation testing. That's why I don't like, and never try, to trace requirements to test cases (I have never created a requirements traceability matrix).
How to differentiate between Expectations and Needs?
Let me start with an example from real life. I once saw a defect reported by a customer: the mouse wheel didn't work in our application. That was never specified as a requirement, and the developer said it was a requirements problem. OK, let's assume the requirements document should specify such details. Should it then also specify that the mouse should be supported at all, that a left-button click on an application button should act as a button click, that pressing a key on the keyboard ... well, how about common sense?
Expectations grow
The issue is that users' expectations grow: they work with computers more, applications become better and more user-friendly, and so on. Take a lost-network situation. Ten years ago there was no expectation that an application should detect a network cable being unplugged and react adequately; it was even acceptable for the application to hang for quite a long time in such a situation. Now that everyone sees Windows XP reporting a lost network connection almost immediately after the cable is unplugged, they expect every application to be just as tolerant. They expect applications to troubleshoot the problems the users themselves caused by misusing the application, expect a warning when they are about to erase some data, expect a certain security level that prevents them from harming important data, expect to receive a notification (if not a progress indicator) for long-running queries, expect copy-paste to work between different applications, and expect pop-up menus, shortcuts, correct tab order, and so on.
Does it mean the development/testing scope grows?
If expectations grow, it means that for the same written requirements (needs) we have much more functionality to provide to the customer compared to what had to be provided some 10 years ago. However, tools provide most of the expected features. By tools I mean the applications used to develop or run applications: the OS, the web application server, the web browser itself, Java Swing components, and so on. Since those tools mostly come pre-tested, we don't have to test those features ourselves.
What changes, I believe, is the complexity of analyzing which expectations we should fulfill. Unfortunately, it seems that developers have no time to do this analysis. One needs to have the "big picture" in mind, concentrate on usage instead of functionality, and think "what else is expected" instead of "does it work as I intended". Yes, I mean the Tester.
Developers implement Needs, testers verify Expectations
If we have an item in the requirements that says "it should be implemented", I have yet to see a case where developers simply don't do it without any reason. It is actually the project manager who has to make sure there is a development activity for each requirements item, although it may depend on methodology details:
If developers do TDD, this is straightforward: for each documented requirement at least one test is created and has to pass (a minimal sketch of such a test follows this list). Even without TDD, if there is at least continuous integration, documented features are added one by one, so we can be certain all documented requirements are implemented.
In a more waterfall-like project, where one big design document is created out of the requirements and then implemented, some assumptions made while writing the design may be dropped during implementation, and there may be miscommunication between developers (integration problems) if more than one person is involved in implementing a single feature.
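As a minimal sketch of the TDD point above, here is what "one test per documented requirement" can look like. The requirement ID REQ-042, the CartRequirementTest class, and the Cart stub are hypothetical names used only for illustration; the real class under test would come from the application.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical example: one test written up front for documented
    // requirement REQ-042 "an empty cart has a total price of zero".
    public class CartRequirementTest {

        // Minimal stub standing in for the real class under test,
        // so the sketch is self-contained.
        static class Cart {
            double totalPrice() { return 0.0; }
        }

        @Test
        public void req042_emptyCartTotalIsZero() {
            assertEquals(0.0, new Cart().totalPrice(), 0.001);
        }
    }

The test name carries the requirement ID, which gives a lightweight link between a documented need and a passing test without any traceability matrix.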
Nevertheless, from what I've seen, the majority of problems found by testers are of two types: either expectations that were not implemented, or regression (including integration) problems. OK, I only mean code defects here.
My approach “explore - receive fixes - write regression tests”
I do plan to blog about my approach in detail, and I have already blogged about a few parts of it. So here is only the outline:
1) Analyze the requirements and plan the resources and test distribution.
2) Use exploratory testing on newly implemented features.
3) Wait until exploratory testing is completed and the majority of defects are fixed (test other features meanwhile).
4) Using the notes kept during the exploratory tests, write regression test cases, preferably automated (this is typically done by the same person, after the next version of the product is released and there is a quiet period); see the sketch below.
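As a hedged illustration of step 4, here is what an automated regression test written from an exploratory-testing note might look like. The defect description, the OrderFormRegressionTest class, and the OrderForm stub are hypothetical; the point is that the test pins down the exact behavior whose fix we want to protect against regression.

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    // Hypothetical regression test derived from an exploratory-testing note:
    // "submitting the order form with a blank customer name was accepted
    // silently; after the fix it must be rejected with a validation error".
    public class OrderFormRegressionTest {

        // Minimal stub standing in for the real form under test.
        static class OrderForm {
            private String customerName = "";
            void setCustomerName(String name) { customerName = name; }
            boolean hasValidationErrors() { return customerName.trim().isEmpty(); }
        }

        @Test
        public void blankCustomerNameIsRejected() {
            OrderForm form = new OrderForm();
            form.setCustomerName("");   // the misuse found while exploring
            assertTrue("a blank name must produce a validation error",
                       form.hasValidationErrors());
        }
    }

Writing the test only after the fix keeps the exploratory phase fast, while the automated case guards the behavior in every later release.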
Epilogue and my further blogs
I recently learned that I am not an approved speaker at EuroStar 2006, for which I had collected some data and ideas. It means I now have no reason to write up a single paper covering all of them, so I will be blogging about them one by one.