SMOKE TESTS
-----------------
Smoke tests get their name from the electronics industry. A circuit
is laid out on a breadboard and power is applied. If anything
starts smoking, there is a problem and testing stops.
In the software industry, smoke testing is a shallow and wide approach
to the application. You "touch" all areas of the application without
getting too deep, looking for answers to basic questions like, "Can I
launch the test item at all?" "Does it open to a window?" "Do the
buttons on the window do things?" No need to get down to field
validation or business flows; if you get a "No" answer to basic
questions like these, then the application is so badly broken, there's
effectively nothing there to test.
A smoke test is also known as a "Build Verification Test" or BVT.
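
The "shallow and wide" idea above can be sketched in Python. The App
class here is a hypothetical stand-in for the application under test
(its name and methods are illustrative assumptions, not part of any
real framework); a real smoke suite would drive the actual application.

```python
class App:
    """Toy stand-in for the application under test (hypothetical)."""
    def __init__(self):
        self.windows = []

    def launch(self):
        # A real app would start a process or GUI; here we just record it.
        self.windows.append("main")
        return True

    def main_window(self):
        return self.windows[0] if self.windows else None

    def click(self, button):
        # A real app would dispatch to a handler; here we just acknowledge
        # the buttons we know about.
        return button in ("ok", "cancel")


def smoke_test(app):
    """Wide but shallow: answer only the basic 'is anything there?' questions."""
    results = {}
    results["launches"] = app.launch()
    results["opens window"] = app.main_window() is not None
    results["buttons respond"] = all(app.click(b) for b in ("ok", "cancel"))
    return results


checks = smoke_test(App())
print(checks)  # any False answer means the build is too broken to test further
```

Note that none of the checks go near field validation or business
flows; a single "No" answer is grounds to stop and reject the build.
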
SANITY TESTS
------------------
"Sanity testing" and "smoke testing" are often treated as the same
thing. Where a distinction is made, it is usually in one of two
directions. Either sanity testing is a focused but limited form of
regression testing: narrow and deep, but cursory. Or it is broad and
shallow, like a smoke test, but concerned more with the possibility of
"insane behaviour", such as slowing the entire system to a crawl or
destroying the database.
Generally, a smoke test is scripted (either using a written set of
tests or an automated test), whereas a sanity test is usually
unscripted.
REGRESSION TESTING
---------------------------
A "regression" is a degradation in behaviour or performance that
arises as an unintended side-effect of a legitimate change to
software, or to its operational environment. Regressions are
alarmingly common, particularly in maintenance, and represent a
considerable risk: given the complex interconnectivity of modern
software, there is no necessarily apparent relationship between the
scale, location, and occasion of a change, and the severity, location,
or time of manifestation of a regression.
Regression testing reruns test cases that were previously passed by
the software under test, to see whether they now fail. Failure of a
previously-passed test indicates the existence of a regression. When
changes to software are made, risk analysis (here called "impact
analysis") can be used to narrow down the likely locations for
regressions, and regression testing may concentrate on them, but with
no guarantee that regressions will not occur in untested parts of the
system. The frequency with which regressions occur, the risk they
represent, and the practical impossibility of predicting their
locations and effects, are major reasons behind the growth in use of
test execution tools ("capture/replay" tools).
Smoke tests and sanity tests may both be considered limited forms of
regression test. A full regression test reruns *all* test cases that
have been executed to date (and that are still relevant to the current
version of the software under test).
See also <http://en.wikipedia.org/wiki/Regression_testing>
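
The core mechanic described above can be sketched in a few lines of
Python. The add() function and its deliberate off-by-one bug are
invented for illustration; they stand in for the software under test
after a "legitimate change" has introduced a regression.

```python
def add(a, b):
    """Software under test. Imagine a recent change introduced this
    off-by-one bug for negative first arguments."""
    return a + b + (1 if a < 0 else 0)


# Test cases the software previously passed: (inputs, expected output).
previously_passed = [
    ((2, 3), 5),
    ((0, 0), 0),
    ((-1, 1), 0),
]


def regression_test(fn, cases):
    """Rerun previously-passed cases; return those that now fail,
    as (inputs, expected, actual) triples."""
    return [(args, expected, fn(*args))
            for args, expected in cases
            if fn(*args) != expected]


regressions = regression_test(add, previously_passed)
print(regressions)  # prints [((-1, 1), 0, 1)]: one previously-passed case now fails
```

Impact analysis would correspond to choosing which subset of
previously_passed to rerun; a full regression test reruns all of them.
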
MONKEY TESTS
-------------------
A monkey test is also unscripted, but this sort of test is like a room
full of monkeys with a typewriter (or computer) placed in front of
each of them. The theory is that, given enough time, you could get
the works of Shakespeare (or some other document) out of them. This
is based on the idea that random activity can create order, or
(eventually) cover all options. "Monkey-test automation tools", using
a method called "coverage-checked random unit testing" (CRUT), permit
many thousands of input combinations to be tested very rapidly; see,
for example, http://www.csd.uwo.ca/faculty/andrews/papers/index.html.
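
A minimal sketch of the random-input idea, assuming a toy input
handler (handle_input() is invented here purely for illustration): the
monkey hammers the handler with random strings and checks only broad
invariants, such as "it must not crash" and "output stays bounded".

```python
import random


def handle_input(text):
    """Toy code under test: truncates input to 10 characters."""
    return text[:10]


def monkey_test(fn, trials=1000, seed=42):
    """Feed random strings to fn, checking simple invariants each time."""
    random.seed(seed)  # fixed seed so any failure is reproducible
    alphabet = "abc123!@# \t\n"
    for _ in range(trials):
        text = "".join(random.choice(alphabet)
                       for _ in range(random.randint(0, 50)))
        out = fn(text)           # invariant 1: must not raise
        assert len(out) <= 10    # invariant 2: output stays bounded
    return trials


print(monkey_test(handle_input), "random inputs survived")
```

A coverage-checked variant, as in the CRUT work cited above, would
additionally track which code paths the random inputs exercised and
keep generating until a coverage target is met.
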
GORILLAS and GUERRILLAS
--------------------------------
"Gorilla testing" has been used to describe testing done as if by
"idiots randomly pounding the keyboard", typically by people who
equate exploratory testing with random testing. Some places will
also use this term to describe an intense round of testing -- quite
often redirecting all available resources to the activity. The idea
here is to test as much of the application in as short a period of
time as possible. There are no formal test cases.
Guerrilla testing, on the other hand, has often been used to refer to
tightly focused exploratory sessions that use particularly harsh
tests.
James Bach has taken exploratory testing to deeper levels and he
defines it as "An interactive process of simultaneous learning, test
design, and test execution." See
<http://www.satisfice.com/glossary.htm#Exploratory%20Testing> and
<http://www.satisfice.com/articles/et-article.pdf>.
Ad hoc tests are unscripted tests. Some would equate them to monkey
tests. Others would equate them with exploratory tests. By definition
(<http://dictionary.cambridge.org/define.asp?key=927&dict=CALD> or
<http://www.m-w.com/cgi-bin/dicti ... onary&va=ad+hoc> or
<http://www.askoxford.com/concise_oed/adhoc>), an ad hoc test is a
specific test for the purpose at hand. No consideration is given to
further re-use of the test; the goal is simply an effective and
efficient test.