Posted by connie on 2006-1-20 09:46:27

14 Additional Best Practices for Testing Software

Original article: http://www.cio.com/archive/111505/testing_sidebar2.html

More best practices for testing software and running your testing organization.
By Meridith Levinson

The following list of best practices for testing software and running your testing organization were gleaned from interviews with companies that have rigorous testing needs and standards.

1. Involve testers early in the development cycle. Nathan Hayward, HomeBanc Mortgage's vice president and director of quality management, says his quality assurance workers meet with business analysts and business users before developers even start writing code—while requirements are being written—to determine what requirements they ought to test and to develop test cases for each requirement.

2. Establish quality checkpoints or milestones throughout the entire development cycle. David Pride, eBay's vice president of quality assurance, says these milestones are one way the company fosters a culture of quality among its development and testing groups. Before coding begins, eBay's first milestone occurs when the QA and product development groups review requirements. The second milestone occurs before development ends, when eBay's product development and project management groups review the QA team's test plan to make sure it's adequate. Just before QA begins testing, the third checkpoint occurs as the development group shows QA that their code meets all functional and business requirements, that developers have tested the code in their environment and that it's now ready for QA to test.

3. Write a tech guide. "A lot of the problems that come up when you're testing software are a result of people not knowing the right way to do certain things," says Mike Fields, State Farm Insurance's technology lead for claims. To crack down on bugs that can be prevented, IT workers inside State Farm's Claims department developed a technology guide filled with practical advice, templates, documentation and how-to information on the right way to go about certain design, development and testing activities. If anyone in Claims IT has a question about the best way to approach a specific task, they can refer to the tech guide.

4. Centralize your test groups. At The Hartford's Property & Casualty Company, employees who do functional testing (that is, those who test the functionality of systems and applications, as opposed to those who do bench-testing, usability testing or integration testing) are centralized in one group. Functional testers are deployed directly to a project and then return to the central organization when their work on a particular project is complete, according to John Lamb, The Hartford Property & Casualty's assistant vice president of technology infrastructure. Centralizing testers into one group—as opposed to staffing testers by application area—ensures that testers share best practices and lessons learned when they come off a project. If the group weren't centralized, says Lamb, each tester would have his or her own methodology, and communicating lessons learned from projects would be much more difficult.

5. Raise testers' awareness of their value. State Farm created a poster and website that highlighted the number of defects that testers and developers found early in the development process and the amount of money (over a million dollars) they were saving by finding those defects sooner rather than later. Highlighting the importance of testers' work and its impact on the company improves their morale and makes them approach their jobs with even more diligence.

6. Don't forget about negative testing. So-called negative testing ensures that the proper error messages show up on screen when a user, say, fails to fill out required fields on a form or types in data that the application can't understand.
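A negative test deliberately feeds the application bad input and asserts that it responds gracefully. The sketch below uses Python's `unittest`; the form validator and its field names are hypothetical, standing in for whatever input-handling code is under test.

```python
import unittest

def validate_form(data):
    """Hypothetical form validator: returns a list of error messages."""
    errors = []
    if not data.get("email"):
        errors.append("Email is required.")
    if not str(data.get("age", "")).isdigit():
        errors.append("Age must be a whole number.")
    return errors

class NegativeTests(unittest.TestCase):
    def test_missing_required_field(self):
        # Omitting a required field should produce the proper error message.
        self.assertIn("Email is required.", validate_form({"age": "30"}))

    def test_unparseable_input(self):
        # Input the application can't understand should be rejected cleanly,
        # not crash or be silently accepted.
        self.assertIn("Age must be a whole number.",
                      validate_form({"email": "a@b.com", "age": "thirty"}))

if __name__ == "__main__":
    unittest.main()
```

The point is that each test passes only when the application fails politely: the assertion is on the error message, not on a successful result.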

7. Tell programmers to chill out. More than one source CIO interviewed for this story talked about the friction that exists between programmers and testers, and how sensitive programmers can be when it comes time for quality assurance specialists, who are evaluated on their ability to find bugs, to put developers' work to the test. When testers find fault with their applications, programmers tend to get their knickers in a twist. You can't blame them: After all, they're worried that the problems testers find with their work will reflect poorly on them and that they'll be penalized for making mistakes. While you want hardworking programmers who take pride in their work in your IT organization, you have to make them understand that the tester's role is to find fault with their work and that testers are just doing their jobs when they do so. You also have to assure them that if they are truly diligent developers who make few mistakes and learn from the mistakes they do make, you won't hold it against them in performance reviews. (For more on the bad blood that often exists between testers and developers, see the comments of John Lamb, assistant vice president of technology infrastructure at The Hartford's Property & Casualty Company.)

8. Cross-train developers and testers in each other's roles. Cross-training is an excellent way to foster understanding between testers and developers and thus improve relations between the two groups. The Hartford Property & Casualty Company's Lamb says it also leads to better quality applications because each group approaches its task with a new and broader understanding of the larger software development lifecycle.

9. Test in a locked-down environment. Don't let developers into your testing environment, because they'll inevitably want to modify code they've written to improve it. If developers meddle with code while QA specialists are trying to test it, keeping track of what code has changed and what's been adequately tested becomes impossible for QA. This practice is also known as code control.

10. Analyze the impact of changes to code, and keep testers and developers in constant communication. Test managers must speak with development managers on a regular basis to find out what changes developers have made to code after it's been tested, so testers know to retest that code, since changes can affect the entire application, says Magdy Hanna, chairman of the International Institute for Software Testing. "Analyzing the impact of changes can greatly improve the reliability of software," he says.

11. Ensure that test cases are run against any code that developers have changed or added. This is called code coverage. Code coverage tools track the number of new or modified lines of code that have actually been tested and, in this manner, give you an idea of the effectiveness of your testing. Code coverage is also a way to ensure that you're actually testing the changes you made, since modifications often lead to bugs. Before State Farm began doing code coverage, its unit test cases covered approximately 34 percent of all changes to code. Since the insurance company started doing code coverage, its test cases cover between 70 and 90 percent of all changes.
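The metric the article describes—coverage of changed lines, rather than of the whole codebase—can be sketched as a simple set intersection. The inputs below are hypothetical: in practice the changed lines would come from a diff against version control and the executed lines from a coverage tool.

```python
def change_coverage(changed_lines, executed_lines):
    """Fraction of new/modified lines actually exercised by the tests.

    changed_lines: set of line numbers touched in the latest change set
    executed_lines: set of line numbers the test run executed
    (both hypothetical inputs, e.g. from a diff and a coverage report).
    """
    if not changed_lines:
        return 1.0  # nothing changed, so nothing is untested
    return len(changed_lines & executed_lines) / len(changed_lines)

# A change set of 10 lines, of which the test run executed 7:
changed = set(range(100, 110))
executed = set(range(100, 107)) | {42, 43}
print(round(change_coverage(changed, executed), 2))  # → 0.7
```

Tracked over time, this single number is what let State Farm say its test cases went from covering roughly 34 percent of changes to between 70 and 90 percent.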

12. Scan your source code for known problems. State Farm's Mike Fields says vendors sell tools that will scan source code for known problems and generate reports based on that analysis. For instance, the tools will detect and report that doing X always leads to a memory leak, or that assigning a variable in a particular manner is not an industry best practice. Although such tools are widely available, State Farm developed its own tool for scanning source code because the ones on the market weren't adequate for its needs, says Fields.
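At its core, this kind of scanner matches source lines against a rule set of known-bad patterns and reports each hit. The sketch below is a minimal illustration, not State Farm's tool; the two rules are invented examples of the "doing X is a known problem" category.

```python
import re

# Hypothetical rule set: regex pattern -> report message. Real scanners
# ship hundreds of such rules; these two are illustrative only.
RULES = {
    r"strcpy\s*\(": "strcpy() has no bounds check; prefer a bounded copy",
    r"==\s*None": "comparing to None with ==; use 'is None'",
}

def scan_source(text):
    """Return (line_number, message) pairs for every rule that matches."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, message in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

sample = "x = get()\nif x == None:\n    strcpy(buf, src)\n"
for lineno, msg in scan_source(sample):
    print(f"line {lineno}: {msg}")
```

Running the scanner over every check-in turns each rule into an automated reviewer, which is how known problems get caught before they ever reach QA.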

13. Identify patterns. State Farm uses a Pareto analysis tool that looks for patterns in data about defects. The tool helps the company identify root causes of defects in software, such as requirements that aren't accurate enough or poor documentation.
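Pareto analysis simply ranks defect root causes by frequency and tracks the cumulative share, on the premise that a few causes account for most defects. A minimal sketch, using an invented defect log (the cause labels and counts are illustrative, not State Farm's data):

```python
from collections import Counter

# Hypothetical defect log: each resolved defect is tagged with a root cause.
defects = (
    ["inaccurate requirements"] * 14 +
    ["poor documentation"] * 9 +
    ["environment misconfiguration"] * 4 +
    ["coding error"] * 3
)

counts = Counter(defects)
total = sum(counts.values())
cumulative = 0
for cause, n in counts.most_common():
    cumulative += n
    print(f"{cause:30s} {n:3d}  cumulative {cumulative / total:5.1%}")
```

In this invented data the top two causes account for over three-quarters of all defects, which is exactly the kind of finding that directs improvement effort at requirements and documentation rather than at individual bugs.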

14. Develop a Plan B. When it comes to testing, you can never be too careful. Since there will be times when applications fail in spite of your best efforts to test and retest, it's always a good idea to have a contingency plan in place in case a system doesn't work the way it's supposed to when it goes into production. You need to know what you're going to do if a worst-case scenario takes place. Marshall Andrew, CIO of Station Casinos, determines in his contingency plans how his company can back the system in question out of production and return to the way the company did things before it was put in place, as well as how Station Casinos will handle whatever impact the failure has on customers.

Advice from John Lamb, the assistant vice president of technology infrastructure with The Hartford's Property & Casualty Company, speaking about the bad blood that often exists between developers and testers: "Testers are rated on their ability to find bugs. Developers, meanwhile, are under pressure to get things done on time and on budget. The application developer is expected to do some basic testing. Sometimes a QA tester will say to the developer, 'If you're testing numbers zero through five, you should have also tried negative one and six.' That happens a lot. It's funny: I have a son who is just now working in a large corporation. He's the top developer on a project. One of his comments to me was that his boss was encouraging testers and rating them on their ability to find problems. My son's interpretation of that was, 'Isn't that bad for morale between testers and developers?' I told him testers have to be evaluated on their ability to find problems. That's their job and it shouldn't be bad for morale. If you're lazy in the QA world, you don't stand a chance." However, says Lamb, if developers and testers don't respect each other and don't communicate freely, "the success of the project comes into jeopardy."
