Testing VS development progress: different languages?

Posted on 2005-12-12 12:31:32
Submitted by Ainars Galvans on Fri, 09/12/2005

The issue of test progress invisibility is one I have been battling for several years. This time I will point out the absence of two specific practices that I believe have become necessary in light of multi-layer architectures and agile development methodologies (iterative life-cycles).
My goal is not to suggest the best solution, only to demonstrate that this is a problem we probably need to solve.


Issue #1: Do you know how much of the code you have tested?
A single function (a user-interface item) can trigger code written by different programmers at different times (or code that is not even complete yet). So it appears that testers and developers speak different languages (or in different dimensions) when they talk about progress.

There are plenty of publications that support measuring testing by functional areas. That works fine for an "old-times" architecture, where a single piece of code stands for a single functional component, i.e. a functional area. With a multi-layer architecture, where each layer is developed by a separate team or person, each of our tests covers code written by each developer to some degree. The issue is that we don't know that degree. It may turn out that after executing the first 95% of the tests we have exercised only 5% of the functionality implemented in some layer, and that the last 5% of the tests will cover twice as much. It may also turn out that the defects found so far block us from testing some other layers.
Should we map testing to the code? It seems we don't see the need for such a practice, even though there are plenty of tools to support it, at least in the Java world, and a few of them are free. There are publications about using the percentage of code covered, but this typically ends up in a requirement that a given percentage be achieved, which I believe is wrong for black-box testing.
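As a rough illustration of what "mapping testing to the code" could look like, here is a minimal Java sketch that aggregates line-coverage counts per architectural layer, so that test progress can be reported in the same terms the developers work in. The layer names and all of the numbers are invented for the example; in practice the raw counts would come from whatever coverage tool the project already uses.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Minimal sketch (not a real tool): aggregate line-coverage counts per
 * architectural layer so that "tests executed" can be reported in the
 * developers' terms, i.e. how much of each layer's code has actually
 * been exercised. Layer names and numbers are invented for illustration.
 */
public class LayerCoverageReport {

    /** Covered vs. total executable lines for one layer (illustrative). */
    record LayerStats(int coveredLines, int totalLines) {
        double percent() {
            return totalLines == 0 ? 0.0 : 100.0 * coveredLines / totalLines;
        }
    }

    public static void main(String[] args) {
        // Each layer is owned by a different team/person; the raw counts
        // would normally come from a coverage tool's exported report.
        Map<String, LayerStats> layers = new LinkedHashMap<>();
        layers.put("ui",          new LayerStats(1800, 2000));
        layers.put("service",     new LayerStats( 400, 1600));
        layers.put("persistence", new LayerStats( 120, 1200));

        // 95% of the planned tests may already be executed, yet a lower
        // layer can still be almost untouched; that is the invisible gap.
        for (Map.Entry<String, LayerStats> e : layers.entrySet()) {
            System.out.printf("%-12s %5.1f%% of code exercised%n",
                    e.getKey(), e.getValue().percent());
        }
    }
}
```

Even a crude per-layer breakdown like this makes the gap visible: the test suite can be 95% executed while one layer's code has barely been touched.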

Issue #2: We don't have quality insurance: testing progress = current quality
When testing is added to the project schedule as just one more item (or several items) alongside the creation of the actual deliverables, it invites misinterpretation. A number of publications say that you cannot plan or estimate the whole testing phase; you can only plan a single test cycle, and the number of cycles depends on the defects found.
Of course, testing is a complex effort, and its size depends on the quality of the work done by the other project members and on the level of quality that is acceptable. It is the cost of tomorrow's medicine, incurred because we do not care for our health today. Could you plan how much money you will spend on medicine next year? Unfortunately there is no quality insurance in IT (analogous to health insurance); there is only quality assurance.
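To make the "plan one cycle at a time" idea concrete, here is a hypothetical sketch: after each cycle we look at how many new defects it surfaced and decide whether another cycle is justified, instead of pretending to estimate the whole testing phase up front. The defect counts and the exit threshold below are assumptions made up for the example, not a recommended rule.

```java
/**
 * Hypothetical sketch of "plan one test cycle at a time": after each cycle
 * we look at how many new defects it surfaced and decide whether another
 * cycle is justified, instead of estimating the whole testing phase up
 * front. The defect counts and exit threshold are invented assumptions.
 */
public class CyclePlanner {

    /** Another cycle is worthwhile while the defect find rate stays above a floor. */
    static boolean anotherCycleNeeded(int newDefectsLastCycle, int exitThreshold) {
        return newDefectsLastCycle > exitThreshold;
    }

    public static void main(String[] args) {
        int[] defectsPerCycle = {42, 23, 9, 3}; // invented history of cycles run so far
        int exitThreshold = 5;                  // invented exit criterion

        for (int cycle = 0; cycle < defectsPerCycle.length; cycle++) {
            int found = defectsPerCycle[cycle];
            System.out.printf("Cycle %d: %d new defects, %s%n",
                    cycle + 1, found,
                    anotherCycleNeeded(found, exitThreshold)
                            ? "plan one more cycle"
                            : "exit criterion met");
        }
    }
}
```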
Still, management sees testing as a quality-improvement effort, which means that quality becomes the measure of testing progress. You think managers are not so smart? How can my (testing) job be measured in terms of how well others (developers) have done their job (the quality of the implementation)? Perhaps it is just our ill feeling toward them. James Bach, in "A Low Tech Testing Dashboard", suggests adding quality to the testing dashboard in the weekly status report. Does he think quality is a measure of testing progress, or is it just information that the manager wants?

Do we need new practices?
In real projects I observe a move toward a more agile development style, where pieces of code are added one by one and integrate with each other in complex ways. Management wants us to test the pieces as they are added, whether or not the other parts they integrate with are ready at that moment. This helps developers improve quality today instead of waiting for the whole system to be ready. In other words, we need to be able to measure the parts, and they need to improve the parts as we measure them. This way test progress becomes the same as measured-quality progress, which solves both of the issues above.
Management will only want to know the coverage of the SRS tested after some kind of code freeze, a more or less formal moment near the end of the iteration. That is when testing against the SRS, use cases or acceptance test cases is required, and when bug-fixing is driven by the single goal of passing all of these tests, that is, of validating all the requirements. That is the time when we are able to measure 100% of the code, but only need to improve quality to the degree required for delivery.
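To illustrate the part-by-part reporting described above, here is a minimal sketch in which a component counts as "done" only when all of its planned tests have passed and no blocking defects remain open against it, so that test progress and measured quality become the same number. The component names, test counts and defect counts are all made up for the example.

```java
import java.util.List;

/**
 * Minimal sketch of reporting progress part by part as pieces of code are
 * added: a component counts as done only when all of its planned tests
 * have passed and no blocking defects remain open against it. Component
 * names, test counts and defect counts are invented for the example.
 */
public class PartwiseProgress {

    record Component(String name, int testsPlanned, int testsPassed, int openBlockers) {
        boolean done() {
            return testsPassed == testsPlanned && openBlockers == 0;
        }
    }

    public static void main(String[] args) {
        List<Component> parts = List.of(
                new Component("login",   30, 30, 0),
                new Component("billing", 50, 48, 2),
                new Component("reports", 20,  0, 0)); // just added, not yet testable

        parts.forEach(c -> System.out.printf(
                "%-8s %3d/%3d tests passed, %d blockers open, %s%n",
                c.name(), c.testsPassed(), c.testsPlanned(), c.openBlockers(),
                c.done() ? "done" : "in progress"));

        long ready = parts.stream().filter(Component::done).count();
        System.out.printf("Overall: %d of %d components done%n", ready, parts.size());
    }
}
```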
