51Testing Software Testing Forum

[Translation] Testing expectations (unidentified requirements). Against traceability

1#
Posted 2006-5-5 22:54:19
Link:
http://www.testingreflections.com/node/view/3675

Most testing definitions say that its goal is requirements verification. PMBOK says that on any project there are both identified requirements (needs) and unidentified requirements (expectations); both are the requirements, and both should be tested. While identified-requirements testing means happy-path testing, I believe what we testers do is mostly expectations testing. That is why I don't like, and never try, tracing requirements to test cases (I have never created a requirements traceability matrix).
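For readers unfamiliar with the artifact being rejected here: a requirements traceability matrix simply maps each identified requirement to the test cases that cover it. A minimal sketch, with invented requirement and test-case IDs:

```python
# A requirements traceability matrix (RTM) as a plain mapping from
# requirement to covering test cases. All IDs below are hypothetical.
rtm = {
    "REQ-001 login with valid credentials": ["TC-01", "TC-02"],
    "REQ-002 lock account after 3 failed logins": ["TC-03"],
    "REQ-003 export report as CSV": [],  # gap: no covering test yet
}

# The usual payoff of an RTM is a coverage check like this one,
# which flags requirements that no test case traces back to:
uncovered = [req for req, tests in rtm.items() if not tests]
print(uncovered)  # only the CSV-export requirement is uncovered
```

The author's point is that such a matrix only ever covers the identified requirements; the expectations discussed next never appear in it.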

How to differentiate between expectations and needs?
Let me start with an example from real life. I once saw a defect report from a customer saying the mouse wheel didn't work in our application. That had never been specified as a requirement, so the developer said it was a requirements problem. OK, let's assume the requirements document should specify such details. Then should it also specify that the mouse should be supported at all, that a left-button click on an application button should act as a button click, that pressing the "a" key on the keyboard... well, what about common sense?

Expectations grow
The issue is that users' expectations grow: they work with computers more, applications become better and more user-friendly, and so on. Take the lost-network situation. I believe that ten years ago there was no expectation that an application should detect the network cable being unplugged and react adequately; it was even acceptable for the application to hang for quite a long time in that situation. Now that everyone sees Windows XP report a lost network connection almost immediately after the cable is unplugged, they expect every application to reach the same tolerance level. They expect applications to troubleshoot the problems users themselves have caused by misusing the application, to warn them when they are about to erase data, to provide a security level that prevents them from harming important data, to show a notification (if not a progress bar) for long-running queries, and to support copy-paste to and from other applications; they expect pop-up menus, shortcuts, a correct tab order, and so on.
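One of the unwritten expectations listed above, warning before erasing data, can be illustrated as a check against a toy application. The Editor class below is a hypothetical stand-in for an application under test, not anything from the article:

```python
# Sketch of an "expectation" check that no written requirement would
# usually spell out: warn before discarding unsaved data.
class Editor:
    """Toy document editor used to illustrate an unwritten expectation."""
    def __init__(self):
        self.text = ""
        self.dirty = False
        self.warnings = []

    def type(self, s):
        self.text += s
        self.dirty = True

    def close(self):
        # Users expect a warning when unsaved work is about to be lost,
        # even though no requirements document says so.
        if self.dirty:
            self.warnings.append("Unsaved changes will be lost. Continue?")

def test_warn_on_unsaved_close():
    app = Editor()
    app.type("draft report")
    app.close()
    assert app.warnings, "expected a warning before discarding unsaved data"

test_warn_on_unsaved_close()
```

No traceability matrix built from the written needs would contain this check; it exists only because a tester knows what users have come to expect.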

Does it mean the development/testing scope grows?
If expectations grow, it means that for the same written requirements (needs) we have much more functionality to provide to the customer than had to be provided some ten years ago. However, tools provide most of the expected features; by tools I mean the applications used to develop or run applications: the OS, the web application server, the web browser itself, Java Swing components, and so on. Since those tools mostly come pre-tested, we don't have to test those features either.
What changes, I believe, is the complexity of analyzing which expectations we should fulfill. Unfortunately, developers seem to have no time for this analysis. One needs to keep the "big picture" in mind, concentrate on usage instead of functionality, and think "what else is expected?" instead of "does it work as I intended?". Yes, I mean the tester.

Developers implement needs, testers verify expectations
If there is an item in the requirements that says "it should be implemented", I have yet to see a case where developers simply don't do it without any reason. It is actually the project manager who has to make sure there is development activity for each requirements item, although this may depend on the methodology:
If developers do TDD, this is straightforward: for each documented requirement, at least one test is created and must pass. Even without TDD, with at least continuous integration documented features are added one by one, so we can be certain all documented requirements are implemented.
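The TDD discipline described here, at least one passing test per documented requirement, can be sketched as follows. The requirement IDs and the discount rule are invented for illustration, not taken from the article:

```python
# One test per documented requirement: if the suite passes, every written
# need has a corresponding implementation. REQ IDs are hypothetical.
def order_total(prices, vip=False):
    """REQ-101: total is the sum of item prices.
    REQ-102: VIP customers get a 10% discount."""
    total = sum(prices)
    return round(total * 0.9, 2) if vip else round(total, 2)

def test_req_101_sum():
    # REQ-101: totals are the sum of item prices.
    assert order_total([10.0, 5.0]) == 15.0

def test_req_102_vip_discount():
    # REQ-102: VIP orders are discounted by 10%.
    assert order_total([10.0, 10.0], vip=True) == 18.0

test_req_101_sum()
test_req_102_vip_discount()
```

This is exactly why tester effort is better spent elsewhere: the documented needs are already guarded by such tests, while the expectations are not.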
In a more waterfall-like project, where one big design document is created from the requirements and then implemented, some assumptions made while writing the design may be dropped during implementation, and there may be miscommunication between developers (integration problems) if more than one person is involved in implementing a single feature.
Nevertheless, from what I've seen, the majority of problems found by testers are of two types: either unimplemented expectations or regression (including integration) problems. OK, I only mean code defects here.

My approach: "explore - receive fixes - write regression tests"
I do plan to blog about my approach in detail, and I have already blogged a few parts of it. So here is only the outline:
1) Analyze the requirements and plan the resources and test distribution.
2) Do exploratory testing of newly implemented features.
3) Wait until exploratory testing is complete and the majority of defects are fixed (test other features meanwhile).
4) Using the notes kept during the exploratory tests (typically by the same person), write regression test cases, preferably automated (typically after the next version of the product is released and there is a quiet period).
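Step 4 might look like the following sketch, which turns notes kept during an exploratory session into automated regression checks. The parse_quantity function and the session notes are hypothetical examples, not the author's actual artifacts:

```python
# Turning exploratory-session notes into automated regression tests.
# The function under test is a hypothetical input parser whose defects
# were "found" during exploration and then fixed.
def parse_quantity(raw):
    """Parse a user-typed quantity string into a non-negative int."""
    value = int(raw.strip())
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value

def test_regressions_from_session_notes():
    # Note 1: "typed '  3 ' with stray spaces -- must still parse"
    assert parse_quantity("  3 ") == 3
    # Note 2: "typed '-2' -- was accepted silently; must now be rejected"
    try:
        parse_quantity("-2")
        assert False, "negative quantity must be rejected"
    except ValueError:
        pass

test_regressions_from_session_notes()
```

Each note from the session becomes one assertion, so the defects found while exploring cannot silently return in the next release.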

Epilogue and my further blogs
I recently learned that I was not approved as a speaker at EuroStar 2006, for which I had collected some data and ideas. That means I now have no reason to write up a single paper covering all of them, so I will blog about them one by one.



4# (original poster)
Posted 2006-5-12 21:22:36
to: brilliantking
I went ahead and fixed a few places in the translation that didn't read quite right to me, such as the title.

This article is a bit obscure, so let me share my understanding of it:
As we all know, when doing system testing we need to identify both the explicit requirements of the requirements specification (the "needs" in the article) and the implicit requirements (the "expectations"); both need to be tested, and both need to be reflected in the requirements traceability matrix. Explicit requirements are straightforward, but implicit requirements are much harder: can we really dig out all of them in one go? That is why the author argues there is no need to record the mapping between test cases and requirements in a traceability matrix. So how do we capture the implicit requirements? The author suggests borrowing from exploratory testing: whatever is found worth testing during exploration is written down as test cases, to be used later for regression testing.

Whether or not the author is right to oppose mapping test cases to requirements, his idea of uncovering implicit requirements through exploratory testing is well worth borrowing in our own work.

5#
Posted 2006-5-12 22:10:20

Thanks to the moderator for the corrections!

The moderator truly is well-informed; real knowledge only comes from practice.

6#
Posted 2006-5-18 16:34:20

Thanks, skinapi and brilliantking.

Thank you both.

7#
Posted 2006-6-16 16:21:24
Thanks, skinapi and brilliantking!
Salute!
