[Repost] SMOKE/SANITY/REGRESSION/MONKEY/GORILLAS and GUERRILLAS/Ad hoc TESTS

1# | Posted 2006-10-9 14:47:10
SMOKE TESTS
-----------------
Smoke tests get their name from the electronics industry.  Circuits
are laid out on a breadboard and power is applied.  If anything
starts smoking, there is a problem and testing stops.

In the software industry, smoke testing is a shallow and wide approach
to the application.  You "touch" all areas of the application without
getting too deep, looking for answers to basic questions like, "Can I
launch the test item at all?"  "Does it open to a window?"  "Do the
buttons on the window do things?"  No need to get down to field
validation or business flows; if you get a "No" answer to basic
questions like these, then the application is so badly broken that
there's effectively nothing there to test.

A smoke test is also known as a "Build Verification Test" (BVT).
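
For readers who prefer to see this in code, a minimal sketch of such a
"shallow and wide" suite might look like the pytest file below.  FakeApp
and launch() are hypothetical placeholders for whatever actually starts
and drives the software under test; the point is the breadth of the
checks, not their depth.

import pytest

class FakeApp:
    """Placeholder for the real application driver (GUI automation, API client, etc.)."""

    def is_running(self):
        return True

    def main_window(self):
        return self

    def buttons(self):
        # A real driver would enumerate the buttons on the main window.
        return []

    def is_responsive(self):
        return True

    def close(self):
        pass


def launch():
    # Replace with whatever actually launches your application.
    return FakeApp()


@pytest.fixture
def app():
    instance = launch()
    yield instance
    instance.close()


def test_application_launches(app):
    # "Can I launch the test item at all?"
    assert app.is_running()


def test_main_window_opens(app):
    # "Does it open to a window?"
    assert app.main_window() is not None


def test_buttons_do_things(app):
    # "Do the buttons on the window do things?" -- breadth over depth:
    # click each button and only check that the application stays responsive.
    for button in app.main_window().buttons():
        button.click()
        assert app.is_responsive()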

SANITY TESTS
------------------
"Sanity testing" and "smoke testing" are often felt to be the same
thing.  Where a distinction is made, it's usually in one of two
directions.  Either sanity testing is a focused but limited form of
regression testing – narrow and deep, but cursory; or it's broad and
shallow, like a smoke test, but concerned more with the possibility of
"insane behaviour", such as slowing the entire system to a crawl or
destroying the database.

Generally, a smoke test is scripted (either using a written set of
tests or an automated test), whereas a sanity test is usually
unscripted.
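
As a rough illustration of the second sense -- watching for "insane
behaviour" rather than functional detail -- a quick sanity check might
look something like the sketch below.  The URL, database file, and
"orders" table are assumptions standing in for your own system; the
only questions asked are "has it slowed to a crawl?" and "is the data
still there?"

import sqlite3
import time
import urllib.request

BASE_URL = "http://localhost:8000"   # assumed local test deployment
DB_PATH = "test_app.db"              # assumed test database with an "orders" table


def responds_quickly(path="/health", budget_seconds=2.0):
    """The system should not have slowed to a crawl."""
    start = time.monotonic()
    with urllib.request.urlopen(BASE_URL + path, timeout=budget_seconds) as response:
        ok = response.status == 200
    return ok and (time.monotonic() - start) < budget_seconds


def database_still_there(min_expected_rows=1):
    """The change should not have destroyed or emptied the database."""
    with sqlite3.connect(DB_PATH) as connection:
        (count,) = connection.execute("SELECT COUNT(*) FROM orders").fetchone()
    return count >= min_expected_rows


if __name__ == "__main__":
    assert responds_quickly(), "system has slowed to a crawl"
    assert database_still_there(), "database looks destroyed or emptied"
    print("sanity check passed")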

REGRESSION TESTING
---------------------------
A "regression" is a degradation in behaviour or performance that
results as an unintended side-effect of a legitimate change to
software, or to its operational environment.  Regressions are
alarmingly common in the maintenance environment particularly, and
represent a considerable risk because, with the complex
interconnectivity of modern software, there is no necessary apparent
relationship between the scale, location, and occasion of a change,
and the severity, location, or time of manifestation of a regression.

Regression testing reruns test cases that were previously passed by
the software under test, to see whether they now fail.  Failure of a
previously-passed test indicates the existence of a regression.  When
changes to software are made, risk analysis (here called "impact
analysis") can be used to narrow down the likely locations for
regressions, and regression testing may concentrate on them, but with
no guarantee that regressions will not occur in untested parts of the
system.  The frequency with which regressions occur, the risk they
represent, and the practical impossibility of predicting their
locations and effects, are major reasons behind the growth in use of
test execution tools ("capture/replay" tools).

Smoke tests and sanity tests may both be considered limited forms of
regression test.  A full regression test reruns *all* test cases that
have been executed to date (and that are still relevant to the current
version of the software under test).

See also <http://en.wikipedia.org/wiki/Regression_testing>
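
The mechanism can be reduced to a small sketch: record the outcomes of
a run that the software previously passed, rerun the same cases, and
flag anything that has flipped from pass to fail.  The toy test
functions and the baseline file name below are invented for
illustration; in practice the suite and the baseline would come from
your own test framework and your last known-good build.

import json
from pathlib import Path

BASELINE = Path("baseline_results.json")   # outcomes from the last known-good run


def run_suite(tests):
    """Run every test callable and record pass/fail by name."""
    results = {}
    for name, test in tests.items():
        try:
            test()
            results[name] = "pass"
        except AssertionError:
            results[name] = "fail"
    return results


def find_regressions(previous, current):
    """A regression is a test that passed before and fails now."""
    return [name for name, outcome in previous.items()
            if outcome == "pass" and current.get(name) == "fail"]


# Toy test cases standing in for a real suite.
def test_login():
    assert 1 + 1 == 2          # still passes

def test_report_total():
    assert 1 + 1 == 3          # simulates behaviour that has regressed


if __name__ == "__main__":
    current = run_suite({"test_login": test_login,
                         "test_report_total": test_report_total})
    previous = json.loads(BASELINE.read_text()) if BASELINE.exists() else current
    for name in find_regressions(previous, current):
        print(f"REGRESSION: {name} passed previously but fails now")
    BASELINE.write_text(json.dumps(current, indent=2))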

MONKEY TESTS
-------------------
A monkey test is also unscripted, but this sort of test is like a room
full of monkeys with a typewriter (or computer) placed in front of
each of them.  The theory is that, given enough time, you could get
the works of Shakespeare (or some other document) out of them.  This
is based on the idea that random activity can create order, or
(eventually) cover all options.  "Monkey-test automation tools", using
a method called "coverage-checked random unit testing" (CRUT), permit
many thousands of input combinations to be tested very rapidly; see,
for example, http://www.csd.uwo.ca/faculty/andrews/papers/index.html.
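
In its simplest form, a monkey test just throws random input at the
item under test and checks only that nothing crashes.  The sketch below
illustrates the idea against a toy parser invented for the example;
real monkey-test tools do the same thing at a much larger scale and,
as the name CRUT suggests, also track how much of the code the random
inputs actually cover.

import random
import string


def parse_quantity(text):
    """Toy unit under test: parse '<number> <unit>', e.g. '12.5 kg'."""
    value, unit = text.split()
    return float(value), unit


def random_text(max_len=20):
    return "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, max_len)))


def monkey_test(iterations=10_000, seed=42):
    random.seed(seed)          # a fixed seed makes the "random" run reproducible
    for _ in range(iterations):
        text = random_text()
        try:
            parse_quantity(text)
        except ValueError:
            pass               # politely rejecting garbage input is acceptable
        except Exception as exc:
            # Any other exception is treated as a crash, i.e. a defect.
            raise AssertionError(f"crash on input {text!r}: {exc!r}") from exc


if __name__ == "__main__":
    monkey_test()
    print("monkey test survived")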

GORILLAS and GUERRILLAS
--------------------------------
"Gorilla testing" has also been used to describe testing done as if
done by "idiots randomly pounding the keyboard", typically by people
who equate exploratory testing and random testing.  Some places will
also use this term to describe an intense round of testing -- quite
often redirecting all available resources to the activity.  The idea
here is to test as much of the application in as short a period of
time as possible.  There are no formal test cases.

Guerrilla testing, on the other hand, has often been used to refer to
tightly focused exploratory sessions that use particularly harsh
tests.

James Bach has taken exploratory testing to deeper levels and he
defines it as "An interactive process of simultaneous learning, test
design, and test execution." See
<http://www.satisfice.com/glossary.htm#Exploratory%20Testing> and
<http://www.satisfice.com/articles/et-article.pdf>.

AD HOC TESTS
-------------------
Ad hoc tests are unscripted tests.  Some would equate them to monkey
tests; others would equate them with exploratory tests.  By definition
(<http://dictionary.cambridge.org/define.asp?key=927&dict=CALD> or
<http://www.m-w.com/cgi-bin/dicti ... onary&va=ad+hoc> or
<http://www.askoxford.com/concise_oed/adhoc>) an ad hoc test is a
specific test for the purpose at hand.  No consideration is given to
re-using the test later; the goal is simply an effective, efficient
test of the item in front of you.

2# | Posted 2006-10-11 17:00:30
I thought this was a translation at first, heh.  I have changed the tag to a repost.