Test Deliverables: Test Plan, Test Case, Defect-Fault, and Status Report

By David W Johnson


There is a core set of test deliverables required for any testing phase: Test Plan, Test Case, Defect-Fault, and Status Report. Taken together, this set of deliverables takes the testing team from planning, to testing, through defect remediation and status reporting. It is not a definitive set of test deliverables, but it will help any test organization begin the process of determining an appropriate set. One common misconception is that these must be presented as a set of documents; in fact, there are toolsets / applications available that capture the content and intent of these deliverables without creating a document or set of documents. The goal is to capture the required content in a useful and consistent framework as concisely as possible.

Test Plan
At a minimum the Test Plan presents the test objectives, scope, approach, assumptions, dependencies, risks, and schedule for the appropriate test phase or phases. Many test organizations also use the test plan to describe the testing phases, testing techniques, testing methods, and other general aspects of any testing effort. General information about the practice of testing, however, should be kept in a "Best Practices" repository - Testing Standards. This avoids presenting redundant and conflicting information to the reader and keeps the Test Plan focused on the task at hand - planning the testing effort (see "Testing and The Role of a Test Lead / Test Manager").
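
To make this minimum content concrete, the sketch below shows one way it could be captured in a toolset rather than a document. It is a minimal illustration in Python; the type name and field names simply mirror the list above and are not a prescribed format.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestPlan:
        """Minimum Test Plan content for one test phase (illustrative only)."""
        objectives: str                                         # the "why" of the testing effort
        in_scope: List[str] = field(default_factory=list)       # aspects of the system to be tested
        out_of_scope: List[str] = field(default_factory=list)   # aspects that will not be tested
        approach: List[str] = field(default_factory=list)       # testing activities to be applied
        assumptions: List[str] = field(default_factory=list)    # expectations the plan relies on
        dependencies: List[str] = field(default_factory=list)   # events or milestones testing waits on
        risks: List[str] = field(default_factory=list)          # factors that could impact testing
        schedule: str = ""                                      # when and by whom testing is performed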

Objectives - Mission Statement
The objective of the current testing effort needs to be clearly stated and understood by the testing team and any other organization involved in the deployment. This should not be a sweeping statement on testing the "whole application" (unless that is actually the goal); instead, the primary testing objectives should relate to the purpose of the current release. If this were a Point-of-Sale system and the purpose of the current release was to provide enhanced on-line reporting functionality, then the objective / mission statement could be:

"To ensure the enhanced on-line reporting functionality performs to specification and to verify any existing functionality deemed to be In-Scope."

The test objective describes the "why" of the testing effort; the details of the "what" will be described in the scope portion of the test plan. Once again, any general testing objectives should be documented in the "Best Practices" repository. General or common objectives for any testing effort could include: expanding the test case regression suite, documenting new requirements, automating test cases, updating existing test cases, and so on.

In Scope
The components of the system to be tested (hardware, software, middleware, etc.) need to be clearly defined as being "In Scope". This can take the form of an itemized list of the in-scope requirements, functional areas, systems, business functions, or any other aspect of the system that clearly delineates the scope to the testing organization and any other organization involved in the deployment. The "What is to be tested" question should be answered by the in-scope portion of the test plan - the aspects of the system that will be covered by the current testing effort.

Out of Scope
The components of the system that will not be tested also need to be clearly defined as being "Out of Scope". This does not mean that these system components will not be executed / exercised, just that no test cases will be included that specifically test these components. The "What is NOT to be tested" question should be answered by the out-of-scope portion of the test plan. Often neglected, this part of the test plan begins to deal with the risk-based scheduling that all test organizations must address - what parts of the system can I afford not to test? The testing approach section of the test plan should address this question.

Approach
This section defines the testing activities that will be applied against the application for the current testing phase. It addresses how testing will be accomplished against the in-scope aspects of the system and any mitigating factors that may reduce the risk of leaving aspects of the system out of scope. The approach should be viewed as a "to do" list that will be fully detailed in the test schedule. It should clearly state which aspects of the system are to be tested and how - Backup / Recovery Testing, Compatibility / Conversion Testing, Destructive Testing, Environment Testing, Interface Testing, Parallel Testing, Procedural Testing, Regression Testing, Security Testing, Storage Testing, Stress & Performance Testing, and any other testing approach that is applicable to the current testing effort. The reasoning for using any given set of approaches should be described, usually from the perspective of risk, as in the sketch below.
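
One way to record that reasoning is to pair each approach with the risk that justifies it. The pairings below are hypothetical examples for the on-line reporting release described earlier, not recommendations:

    # Hypothetical approach-to-risk pairings; the second element of each
    # pair records why the approach was chosen.
    approach = [
        ("Regression Testing",
         "existing Point-of-Sale functionality could be broken by the new release"),
        ("Stress & Performance Testing",
         "the new on-line reports could degrade response times under peak load"),
        ("Security Testing",
         "the new reporting interface could expose data to unauthorized users"),
    ]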

Assumptions
Assumptions are facts, statements, or expectations of other teams that the Test Team believes to be true - assumptions specific to each testing phase should be documented. These are the assumptions upon which the test approach was based; listed assumptions are also risks, should they prove incorrect. If any of the assumptions turn out not to be true, there may be a negative impact on the testing activities. In any environment there is a common set of assumptions that apply to any given release; these should be documented in the "Best Practices" repository. Only assumptions unique to the current testing effort should be documented here, along with perhaps those common assumptions critical to the current situation.

Dependencies
Dependencies are events or milestones that must be completed in order to proceed with any given testing activity. These are the dependencies that will be presented in the test schedule. In this section, the events or milestones deemed critical to the testing effort should be listed and any potential impacts / risks to the testing schedule itemized.

Risks
Risks are factors that could negatively impact the testing effort. An itemized list of risks should be drawn up and their potential impact on the testing effort described. Risks that have been itemized in the Project Plan need not be repeated here unless the impact to the testing effort has not already been clearly stated.

Schedule
The test schedule defines when and by whom testing activities will be performed. The information gathered for the body of the Test Plan is used here in combination with the available resource pool to determine the test schedule. Experience from previous testing efforts along with a detailed understanding of the current testing goals will help make the test schedule as accurate as possible. There are several planning / scheduling tools available that make the plan easier to construct and maintain.

Test Case
Test Cases are the formal implementation of a test case design. The goal of any given test case or set of test cases is to detect defects in the system being tested. A Test Case should be documented in a manner that is useful for the current test cycle and any future test cycles - at a bare minimum each test case should contain: Author, Name, Description, Step, Expected Results, and Status (see "Testing and The Role of a Test Designer / Tester").
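
A minimal sketch of that bare-minimum structure, again in illustrative Python; the field names mirror the list above (each is described in more detail below), and the status labels are assumptions, not a standard:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestStep:
        navigation: str       # where in the application the step takes place
        data: str             # the data required by the step
        event: str            # the action performed
        expected_result: str  # behaviour to verify, where the step requires it

    @dataclass
    class TestCase:
        author: str
        name: str              # functional area plus purpose of the test
        description: str       # the sequence of business events exercised
        steps: List[TestStep] = field(default_factory=list)
        status: str = "Draft"  # e.g. Draft / Ready (labels are illustrative)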

Test Case Name
The name or title should contain the essence of the test case, including the functional area and purpose of the test. Using a common naming convention that groups test cases encourages reuse and helps prevent duplicate test cases.

Test Case Description
The description should clearly state the sequence of business events to be exercised by the test case. The description can apply to one or more test cases; it will often take more than one test case to fully test an area of the application.

Test Case Step
Each test case step should clearly state the navigation, data, and events required to accomplish the step. Using a common descriptive approach encourages conformity and reuse. Keywords offer one of the most effective approaches to test case design and can be applied to both manual and automated test cases (see "Keyword-based Test Automation").
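
A minimal keyword-driven sketch follows, assuming three invented keywords (Login, Enter, Verify); real keyword libraries are far richer, but the same step table could drive either a manual checklist or an automated run:

    # Login, Enter, and Verify are invented keywords used only for illustration.
    def login(user):
        print(f"logging in as {user}")

    def enter(field_name, value):
        print(f"entering {value!r} into {field_name}")

    def verify(expected):
        print(f"verifying {expected!r} is displayed")

    # The keyword table maps each keyword name to its implementation.
    KEYWORDS = {"Login": login, "Enter": enter, "Verify": verify}

    # Each step names a keyword and its arguments.
    steps = [
        ("Login", ["clerk01"]),
        ("Enter", ["Report Date", "2006-01-11"]),
        ("Verify", ["On-line report displayed"]),
    ]

    for keyword, args in steps:
        KEYWORDS[keyword](*args)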

Expected Results
The expected behavior of the system after any test case step that requires verification / validation - this could include screen pop-ups, data updates, display changes, or any other discernible event or transaction on the system that is expected to occur when the test case step is executed.

Status
The operational status of the test case - Is it ready to be executed?

Defect-Fault
The primary purpose of testing is to detect defects in the application before it is released into production; furthermore, defects are arguably the only product the testing team produces that is seen by the project team. Defects should be documented in a manner that is useful in the defect remediation process - at a bare minimum each defect should contain: Author, Name, Description, Severity, Impacted Area, and Status (see "Testing and The Role of a Test Designer / Tester").
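
As with test cases, this bare-minimum defect content can be captured in a simple record. The sketch below is illustrative Python; the severity scale and status label shown in the comments are assumptions, not a standard:

    from dataclasses import dataclass

    @dataclass
    class Defect:
        author: str
        name: str            # functional area plus nature of the defect
        description: str     # sequence of events leading to the defect,
                             # detailed enough to replicate it
        severity: str        # e.g. Critical / High / Medium / Low (scale is illustrative)
        impacted_area: str   # functional component and/or functional area
        status: str = "Open" # label is illustrative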

Defect Name
The name or title should contain the essence of the defect including the functional area and nature of the defect.

Defect Description
The description should clearly state the sequence of events that leads to the defect and, when possible, include a screen snapshot or printout of the error.

How to replicate
The defect description should provide sufficient detail for the triage team and the developer fixing the defect to duplicate the defect.

Defect severity
The severity assigned to a defect depends on: the phase of testing, the impact of the defect on the testing effort, and the risk the defect would present to the business if it were rolled out into production.

Impacted area
The impacted area can be referenced by functional component or functional area of the system - often both are used.

Status Report
A test organization and members of the testing team will be called upon to create Status Reports on a daily, weekly, monthly, and per-project basis. The content of any status report should remain focused on the testing objective, scope, and schedule milestones currently being addressed. It is useful to state each of these at the beginning of each status report and then publish the achievements or goals accomplished during the current reporting period and those to be accomplished during the next reporting period. Any known risks that will directly impact the testing effort need to be itemized here, especially any "showstoppers" that will prevent further testing of one or more aspects of the system.
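
One way to keep status reports consistent from period to period is to generate them from the same few fields each time. The helper below is an illustrative Python sketch, not a prescribed layout; the section names simply follow the outline above:

    def render_status_report(period, mission, scope, achieved, planned, risks):
        """Assemble a status report as plain text; the layout is illustrative."""
        lines = [f"Status Report - {period}",
                 f"Mission: {mission}",
                 f"Scope: {scope}",
                 "Achieved this reporting period:"]
        lines += [f"  - {item}" for item in achieved]
        lines.append("Planned for the next reporting period:")
        lines += [f"  - {item}" for item in planned]
        lines.append("Risks (including any showstoppers):")
        lines += [f"  - {item}" for item in risks]
        return "\n".join(lines)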

Reporting Period
The period covered in the current status report with references to any previous status reports that should be reviewed.

Mission Statement
The objective of the current testing effort needs to be clearly stated and understood by the testing team and any other organization involved in the deployment.

Current Scope
The components of the system being tested (hardware, software, middleware, etc.) need to be clearly defined as being "In Scope" and any related components that are not being tested need to be clearly itemized as "Out of Scope".

Schedule Milestones
Any schedule milestones being worked on during the current reporting period need to be listed and their current status clearly stated. Milestones that were scheduled but not addressed during the current reporting period need to be raised as Risks.

Risks
Risks are factors that could negatively impact the current testing effort. An itemized list of risks that are currently impacting the testing effort should be drawn up and their impact on the testing effort described.





--------------------------------------------------------------------------------


David W Johnson is a Senior Computer Systems Analyst with over 20 years of experience in Information Technology across several industries. He has played key roles in business needs analysis, software design, software development, testing, training, implementation, organizational assessments, and support of business solutions. Over the past 10 years he has developed specific expertise in implementing "Testware", including test strategies, test planning, test automation, and test management solutions, and he is experienced in deploying immediate solutions worldwide that improve software quality, test efficiency, and test effectiveness. This has led to a unique combination of technical skills, business knowledge, and the ability to apply the right solution to meet customer needs. Contact David at DavidWJohnson@Eastlink.ca.