Posted by connie on 2005-12-12 12:31:32

Testing VS development progress: different languages?

Submitted by Ainars Galvans on Fri, 09/12/2005

The issue of test progress invisibility is one I have been battling for several years. This time I will point out the absence of two specific practices that I believe have become necessary in light of multi-layer architectures and agile development methodologies (iterative life-cycles).
My goal is not to suggest the best solution, only to demonstrate that we probably need one.


Issue #1: Do you know what code your tests cover?
A single function (user-interface item) could trigger code created by different programmers at different times (or even code that is not yet complete). So it appears that testers and developers speak different languages (or in different dimensions) when talking about progress.

There are plenty of publications supporting test measures by functional area. That works fine for an “old times” architecture where a single piece of code stands for a single functional component, i.e. a functional area. If we have a multi-layer architecture with each layer developed by a separate team or person, then each of our tests covers code written by each developer to some degree. The issue is that we don’t know that degree. It may turn out that after executing the first 95% of our tests we have exercised only 5% of the functionality implemented in some layer, and the last 5% of tests will cover twice as much. It could also turn out that the defects found so far block us from testing some other layers.
Should we map testing to the code? It seems that we don’t see the need for such a practice, although we have plenty of tools to support it, at least in the Java world, and a few of them are free. There are publications about using the percentage of code covered, but this typically ends up requiring a given percentage to be achieved, which I believe is wrong for black-box testing.
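To sketch the mapping described above, suppose we record (hypothetically, e.g. from a coverage tool) which code units each executed test touches, grouped by architectural layer. We can then report the “degree” of coverage per layer instead of a single functional-area number. All layer names, unit names, and test names below are invented for illustration:

```python
# Sketch: per-layer coverage from a hypothetical test -> code-unit mapping.
# Layer names, code units, and tests are invented for illustration.

# All code units that exist in each architectural layer.
units_per_layer = {
    "ui":      {"LoginForm", "SearchForm", "ReportView"},
    "service": {"AuthService", "SearchService", "ReportService", "AuditService"},
    "storage": {"UserDao", "ReportDao"},
}

# Code units that each executed test actually touched.
executed_tests = {
    "test_login":  {"LoginForm", "AuthService", "UserDao"},
    "test_search": {"SearchForm", "SearchService"},
}

def layer_coverage(units_per_layer, executed_tests):
    """Return, for each layer, the fraction of its units touched by any test."""
    touched = set().union(*executed_tests.values())
    return {
        layer: len(units & touched) / len(units)
        for layer, units in units_per_layer.items()
    }

coverage = layer_coverage(units_per_layer, executed_tests)
for layer, frac in coverage.items():
    print(f"{layer}: {frac:.0%}")  # e.g. ui: 67%, service: 50%, storage: 50%
```

With a report like this, “95% of tests executed” can coexist with a service layer that is only half exercised, which is exactly the invisible degree the text complains about.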

Issue #2: We don’t have quality insurance, so testing progress = current quality.
When testing is added to the project schedule as just one more item (or several) alongside the creation of the actual deliverables, it causes misinterpretation. A number of publications say you cannot plan or estimate the whole testing phase; you can only plan a single test cycle. The number of cycles depends on the defects found.
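A toy model (with invented numbers) makes the point concrete: each cycle may have a fixed, plannable cost, but the total number of cycles is driven by how many defects each cycle finds, which is unknown up front:

```python
# Toy model (invented numbers): one test cycle is plannable, but the number
# of cycles, and hence the whole "testing phase", depends on defects found.

def cycles_until_clean(defects_found_per_cycle):
    """Count cycles run until a cycle finds no defects."""
    cycles = 0
    for found in defects_found_per_cycle:
        cycles += 1
        if found == 0:  # a clean cycle ends the phase
            break
    return cycles

# Two projects with identical test plans but different code quality:
print(cycles_until_clean([12, 4, 0]))         # cleaner code: 3 cycles
print(cycles_until_clean([30, 18, 9, 2, 0]))  # buggier code: 5 cycles
```

Same test plan, very different schedules; the difference is entirely in the developers' defect counts, not in the testers' work.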
Of course, testing is a complex effort, and its size depends on the quality of the work done by other project members and on the level of quality that is acceptable. It is the cost of tomorrow’s medicine, incurred because we do not care for our health today. Could you plan how much money you will spend on medicine next year? Unfortunately, there is no quality insurance in IT (the way there is health insurance); there is only quality assurance.
Still, management sees testing as a quality-improvement effort, which means that quality becomes the measure of testing progress. You think managers are not so smart? But how could one measure my (testing) job in terms of how well the others (developers) have done their job (implementation quality)? Perhaps it is just our indisposition towards them. James Bach, in “A Low-Tech Testing Dashboard”, suggests adding quality to the testing dashboard in the weekly status report. Does he think quality is a measure of testing progress, or is it only information that managers want?

Do we need new practices?
In real projects I observe a move to a more agile development style, where pieces of code are added one by one and have complex integrations with each other. Management wants us to test pieces as they are added, no matter whether the other parts they integrate with are ready at that moment. This helps developers improve quality today, instead of waiting for the whole system to be ready. In other words, we need to be able to measure the parts, and they need to improve the parts as we measure. This way we make test progress the same as quality-measured progress, solving both issues.
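One way to picture measuring parts as they arrive is a per-component report that shows test progress (how far testing has advanced on that part) next to the pass rate (what testing found there), so the two numbers the essay wants to unify sit side by side. The component names and counts below are invented:

```python
# Sketch: per-component status, measured as soon as each part is delivered.
# Component names and counts are invented for illustration.

components = {
    # name: (tests_executed, tests_planned, tests_passed)
    "parser":   (40, 40, 36),
    "exporter": (10, 25, 9),
    "importer": (0, 30, 0),   # code not yet delivered to testing
}

def report(components):
    """For each component, pair test progress with observed pass rate."""
    rows = {}
    for name, (executed, planned, passed) in components.items():
        progress = executed / planned                       # how far testing got
        quality = passed / executed if executed else None   # what testing found
        rows[name] = (progress, quality)
    return rows

for name, (progress, quality) in report(components).items():
    q = f"{quality:.0%}" if quality is not None else "n/a"
    print(f"{name}: progress {progress:.0%}, pass rate {q}")
```

A part with no delivered code simply reports no quality yet, which surfaces blocked layers instead of hiding them inside one aggregate percentage.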
They will only want to know the coverage of the SRS tested after some kind of code freeze happens, a more or less formal moment almost at the end of the iteration. This is the time when testing against the SRS, use cases, or acceptance test cases is required, when bug fixing is driven by the single goal of passing all these tests, i.e. validating all requirements. This is the time when we are able to measure 100% of the code, but only need to improve quality to the degree required for delivery.