Re: when automated testing hasn't worked out
Submitted by Mike Kelly on Mon, 06/02/2006 - 04:32.
I’ll give what I consider the two “classic” examples of automation failure. I’ve been a party to both…
Example One: Large scale automated regression test projects
My first software job was as an intern on a test automation team. I was part of a team of interns responsible for creating hundreds of automated regression tests for a rather sophisticated application. It was a lot of fun. We were all CS students, so we knew to modularize our code, implement coding standards, avoid record and playback as much as possible, and so on. But I knew nothing about testing and even less about good test automation. I remember the first time someone asked me if I was familiar with Kaner's work on the maintainability of test automation, and I said, "Cem who?"
I specifically remember that, after about a year and several successes, I was asked to audit an automation implementation by a large consulting firm. This was daunting: I was still a $12-an-hour intern, and these were $150-an-hour consultants. What I found was less than encouraging. With the combined effort of three people over four months (if I remember correctly), they had created over 1,500 scripts. The first time I ran them, I ran them against a release that had supposedly just been "verified" by those same scripts, and I saw a 60% failure rate. I investigated further and couldn't find any documentation of successful testing. After trying to debug 1,500 record-and-playback scripts, I finally convinced management to let me throw them away and rewrite them in a maintainable way. It took me about three weeks to get what I think was the same coverage. (I don't really know whether it was the same, because their work was so poor I couldn't tell.)
They knew the tools, but there was no sophistication in how they used them. Their automation was a complete waste of the company's money.
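For what it's worth, here is a rough sketch of the structure I mean by "a maintainable way." It's written in Python purely for illustration; the driver interface, locators, and login page are hypothetical stand-ins, not the actual tool or application from that project. The idea is that record and playback bakes locators and click sequences into every script, while a modular layer keeps them in one place:

```python
# Purely illustrative sketch of the "maintainable rewrite" idea.
# The UiDriver interface, locators, and page names are hypothetical stand-ins
# for whatever GUI automation tool is actually in use. The point is that
# locators and user flows live in one place instead of being duplicated
# across 1,500 recorded scripts.

class UiDriver:
    """Hypothetical adapter over the automation tool's raw API."""
    def click(self, locator: str) -> None:
        raise NotImplementedError

    def type_text(self, locator: str, text: str) -> None:
        raise NotImplementedError

    def read_text(self, locator: str) -> str:
        raise NotImplementedError


class LoginPage:
    # Locators are defined once; a renamed field means one edit here,
    # not a hunt through every recorded script that touches the login screen.
    USERNAME = "id=username"
    PASSWORD = "id=password"
    SUBMIT = "id=login-button"
    BANNER = "id=welcome-banner"

    def __init__(self, driver: UiDriver) -> None:
        self.driver = driver

    def log_in(self, user: str, password: str) -> str:
        self.driver.type_text(self.USERNAME, user)
        self.driver.type_text(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)
        return self.driver.read_text(self.BANNER)


def test_valid_login(driver: UiDriver) -> None:
    # The regression test reads as a business flow, not a click stream.
    banner = LoginPage(driver).log_in("demo_user", "demo_password")
    assert banner == "Welcome, demo_user"
```

When the UI changes, you fix the page object once and every test that uses it keeps working; that is what made the three-week rewrite cheaper to live with than the 1,500 recorded scripts.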
Example Two: Performance testing
It is very easy to create performance test scripts. I've yet to work with a performance test tool where I couldn't generate a script and get it running within about two hours (and that's at the high end). The problem is that many people new to performance testing (myself included when I was new) therefore think it's relatively easy to create and run a meaningful performance test. Add to that the confusion about what people mean when they say "performance test," and you have a scenario where bad test automation can do more harm than good by providing a false sense of confidence in the performance of the application under test.
On one of my first projects where I did performance testing, I didn't understand how important it was to model what users were actually doing with the application. So I developed a test case around what turned out to be a trivial task and ran my tests using that one script (under various loads). Needless to say, the application performed great. It wasn't until much later in the project, when problems were a lot more costly to fix due to resource constraints, that we found the blatant error in my testing and were able to get someone to help with the performance testing so we could isolate problems and fix them.
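To show what I mean by modeling, here's a bare-bones sketch in plain Python. The URLs, action mix, and think times are invented for illustration, and a real performance test tool would do all of this with far more rigor; the point is only that load should come from a weighted mix of realistic user actions with delays between them, reported as percentiles, rather than from one trivial script in a loop:

```python
# Bare-bones illustration of a workload model, standard library only.
# The base URL, action weights, and think times below are hypothetical.

import random
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "http://localhost:8080"  # hypothetical application under test

# What users actually do (ideally taken from production logs or interviews),
# not just the easiest page to script.
USER_ACTIONS = [
    ("/search?q=widgets", 0.50),  # 50% of actions are searches
    ("/product/1234",     0.35),  # 35% view a product page
    ("/checkout",         0.15),  # 15% reach checkout
]


def pick_action() -> str:
    """Choose the next user action according to the workload mix."""
    r, cumulative = random.random(), 0.0
    for path, weight in USER_ACTIONS:
        cumulative += weight
        if r < cumulative:
            return path
    return USER_ACTIONS[-1][0]


def virtual_user(actions_per_session: int) -> list:
    """Simulate one user session; return the observed response times."""
    timings = []
    for _ in range(actions_per_session):
        path = pick_action()
        start = time.perf_counter()
        try:
            urllib.request.urlopen(BASE_URL + path, timeout=30).read()
        except OSError:
            pass  # a real harness would record errors separately
        timings.append(time.perf_counter() - start)
        time.sleep(random.uniform(2, 8))  # think time between user actions
    return timings


if __name__ == "__main__":
    # 25 concurrent virtual users, each performing 10 actions.
    with ThreadPoolExecutor(max_workers=25) as pool:
        sessions = list(pool.map(virtual_user, [10] * 25))
    times = sorted(t for session in sessions for t in session)
    print(f"actions timed:   {len(times)}")
    print(f"median response: {times[len(times) // 2]:.3f}s")
    print(f"95th percentile: {times[int(len(times) * 0.95)]:.3f}s")
```

Had I started from a model like that instead of one trivial task, the "great" performance numbers would never have made it into a status report.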
The two best resources I've seen that address the complexity and sophistication required of performance testers are two series by Scott Barber. They can be found on the PerfTestPlus website, but just to illustrate the point, I've included the titles from both series below:
User Experience, not Metrics Series:
Introduction
Modeling Individual User Delays
Modeling Individual User Patterns
Modeling Groups of Users
What should I time and where do I put my timers?
What is an outlier and how do I account for one?
Consolidating Test Results
Choosing Tests and Reporting Results to Meet Stakeholders' Needs
Summarizing Across Multiple Tests
Creating a Degradation Curve
Handling Authentication and Session Tracking
Scripting Conditional User Path Navigation
Working with Unrecognized Protocols
Beyond Performance Testing Series:
Introduction
A Performance Engineering Strategy
How Fast is Fast Enough?
Accounting for User Abandonment
Determine the Root Cause of Script Failures
Interpreting Scatter Charts
Identifying the Critical Bottleneck
Modifying Tests to Focus on Failure/Bottleneck Resolution
Pinpointing the Architectural Tier of the Failure/Bottleneck
Creating a Test to Exploit the Failure/Bottleneck
Collaborative Tuning
Testing and Tuning on Common Tiers
Testing and Tuning Load Balancers and Networks
Testing and Tuning Security
These articles cover what I believe to be the basics for a performance tester. In fact, these articles make up the bulk of my curriculum when I offer training courses on performance testing.