In many of my posts I have alluded to the automation stack my team is building, but I have not provided any details. The next month or so of posts will remedy that: first I'll discuss the problems we are trying to solve, and then I'll explain how we're going about solving them.
Those of you doing model-based testing will likely note that you don't have some of the problems I talk about. While I don't disagree, I do think that many of these problems simply move into the model's implementation.
Many thanks to everyone who reviewed the whitepapers on which these posts are based: Adam, Chan, Bob, Scott, Cathy, Ross, and most especially Mike.
Test Cases Today Are Not Everything They Could Be
* Execution And Verification Are Tightly Coupled
* Multiple Paths Of Execution Cause Duplicated Verification
* Most Of Each Test Case Exercises A Small Fraction Of The Code
* Test Cases Have Intimate Knowledge Of The User Interface
* Test Cases Are Maintenance Hogs
* Test Is Back-Loaded
* Testers Are Little More Than Accountants In A Factory
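To make these problems concrete before the posts dig into them, here is a sketch of the kind of conventional GUI test case being critiqued. Everything in it is hypothetical (the FakeEditor class and its methods are invented stand-ins for a real application under test), but the shape is the familiar one: execution and verification interleaved, one hard-coded path through the UI, and intimate knowledge of control names baked into the script.

```python
# A hypothetical, conventional GUI test case of the sort critiqued above.
# FakeEditor is an invented stand-in for a real application under test.

class FakeEditor:
    """Pretend application driven through its user interface."""
    def __init__(self):
        self.title = "Untitled - MyEditor"
        self.edit_area = ""

    def click_menu(self, *path):
        if path == ("File", "New"):
            self.title, self.edit_area = "Untitled - MyEditor", ""

    def type_keys(self, text):
        self.edit_area += text

def test_create_new_document():
    app = FakeEditor()

    # Execution: hard-coded to one particular path through the UI (the
    # menu), even though a toolbar button or Ctrl+N might work just as well.
    app.click_menu("File", "New")

    # Verification interleaved with execution: this check gets copy-pasted
    # into every other test that happens to create a document.
    assert app.title == "Untitled - MyEditor"

    # More execution with intimate knowledge of the UI's structure...
    app.type_keys("Hello, world")

    # ...followed by more duplicated, tightly coupled verification.
    assert app.edit_area == "Hello, world"

test_create_new_document()
```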
So What Should A Test Case Look Like?
Please Allow Us To Introduce Ourselves
* It All Starts With User Features: The Logical Functional Model
* One Method To Rule Them All: Execution Behavior Manager
* How High? For How Long? Using Which Foot? Data Manager
* Did You? Did You Really? Loosely Coupled Comprehensive Verification
* Show Me Yours: Application Internals
* How Do I Invoke Thee? Let Me Count The Ways: The Physical Object Model
All For One And One For All: Our Complete Automation Stack
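As a rough sketch of how these layers might stack up, assumed purely from the post titles above (every name and signature here is my own invention, not the team's actual code): a test case talks only to the Logical Functional Model, which hands execution off to a behavior manager that picks one of the possible execution paths, each of which drives the UI through the Physical Object Model, while verification hangs off to the side instead of being baked into each test.

```python
# A minimal, hypothetical sketch of the layering implied by the titles
# above; all class names and methods are assumptions for illustration.
import random

class PhysicalObjectModel:
    """Knows the gritty UI details: control IDs, menus, keystrokes."""
    def click_menu(self, *path): print("click menu:", " -> ".join(path))
    def press_keys(self, keys):  print("press keys:", keys)

class ExecutionBehaviorManager:
    """Chooses *how* to execute an action: menu, keyboard, toolbar, ..."""
    def __init__(self, pom):
        self.pom = pom

    def new_document(self):
        # The multiple execution paths live here, once, instead of being
        # duplicated across every test case that creates a document.
        behavior = random.choice([
            lambda: self.pom.click_menu("File", "New"),
            lambda: self.pom.press_keys("Ctrl+N"),
        ])
        behavior()

class Verifier:
    """Comprehensive verification lives outside individual test cases."""
    def verify_document_created(self):
        print("verify: a new, empty document exists")

class LogicalFunctionalModel:
    """Exposes user-level features; test cases talk only to this layer."""
    def __init__(self, behaviors, verifier):
        self.behaviors, self.verifier = behaviors, verifier

    def create_new_document(self):
        self.behaviors.new_document()
        # Verification is triggered here, loosely coupled to the test case.
        self.verifier.verify_document_created()

# A test case becomes a terse statement of user intent.
pom = PhysicalObjectModel()
lfm = LogicalFunctionalModel(ExecutionBehaviorManager(pom), Verifier())
lfm.create_new_document()
```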
Examples of applying all of this to a simple application:
* Nuts And Bolts - Introduction
* Use Your Users' Viewpoint - Logical Functional Model
* Who Ya Gonna Call? - Execution Behaviors
* A Peek Behind The Curtains - Physical Object Model
* No Guts, But Lots Of Glory - Controls Abstraction Layer
* Verily, 'Tis Truth - Loosely Coupled Comprehensive Verification