51Testing Software Testing Forum

[Original] How do I write a test plan?

1# Posted on 2007-04-27 09:05:30
I was originally writing code, but my manager has now switched me to testing a system. I have never done this before,

so I would like to ask everyone: how do I put together a test plan for a system?

How do I get started? Thanks! I need this urgently and am hoping for replies.

P.S.: It is a Web system, written in ASP, with SQL Server as the back end.

    2# Posted on 2007-04-27 12:56:45
    There are plenty of reference materials on this forum that you can consult. You could write the plan in MS Project and build a Gantt chart alongside it, which makes the plan easier to track.

    3# Posted on 2007-04-27 15:28:40

    A test plan mainly covers the following areas:

    test items, test strategy, pass/fail criteria, test environment, test tools, staffing and schedule, and known risks and countermeasures. The excerpts below, taken from a sample English-language test plan for a Web site called "BDonline", illustrate what several of these sections can look like.
    3. Test Items
    The scope of this Testing activity will include:
            BDonline release 1.0 Web site application software and supporting infrastructure
            Windows based client platforms

    The scope of this testing activity will not include:
            BDonline's documentation e.g.: Requirements & Design Specifications or User, Operations & Installation Guides
            Any other B&D Web sites or applications
            Any Legacy systems that the BDonline application integrates with (with the exception of the interface)
            Supporting operational processes such as postal confirmation of trades or customer service, e.g. the time that customers have to spend waiting on the telephone for service

    6. Approach/Strategy
    The philosophy of the testing is risk-based testing, i.e. each test case will be prioritized as High, Medium, or Low and then scheduled accordingly (highest first); a minimal scheduling sketch follows the list of exceptions below. Exceptions to this general rule might include instances where:
    . A large number of low priority test cases can be executed using a small amount of resources
    . Scheduling conflicts arise e.g. the DBA is on vacation, thereby causing lower priority tests (that do not need her assistance) to be scheduled while she is away
    . A lower priority test is a pre-requisite of another higher priority test e.g. an expensive and high priority usability test might necessitate many of the inexpensive low priority navigational tests to have passed
    . Due to the lack of comprehensive requirements, navigational and functional tests may be scheduled first, so as to allow the testers the opportunity to gain familiarity with the Web site (thereby also allowing them to develop pseudo-requirements).
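
    A rough Python sketch of this risk-based scheduling idea (my own illustration, not part of the BDonline plan itself) is shown below; the test-case names, priority labels, and prerequisite links are invented for the example.

    ```python
    # Risk-based scheduling sketch: run test cases in priority order (High first),
    # but let an explicit prerequisite pull a lower-priority case ahead of the
    # case that depends on it. No cycle detection; purely illustrative.
    from dataclasses import dataclass, field

    PRIORITY_ORDER = {"High": 0, "Medium": 1, "Low": 2}

    @dataclass
    class TestCase:
        name: str
        priority: str                                       # "High", "Medium" or "Low"
        prerequisites: list = field(default_factory=list)   # names of cases that must run first

    def schedule(cases):
        """Return cases in execution order: highest priority first, prerequisites earlier."""
        by_name = {c.name: c for c in cases}
        ordered, done = [], set()

        def visit(case):
            if case.name in done:
                return
            for pre in case.prerequisites:   # run prerequisites first, even if lower priority
                visit(by_name[pre])
            done.add(case.name)
            ordered.append(case)

        for case in sorted(cases, key=lambda c: PRIORITY_ORDER[c.priority]):
            visit(case)
        return ordered

    if __name__ == "__main__":
        cases = [
            TestCase("usability_checkout", "High", prerequisites=["nav_links_home"]),
            TestCase("login_functional", "Medium"),
            TestCase("nav_links_home", "Low"),
        ]
        for c in schedule(cases):
            print(c.priority, c.name)   # nav_links_home runs first despite being Low
    ```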

    The testing will use a combination of manual and automated testing. Due to the limited duration of the testing, only automated tools that are already familiar to the B&D staff or that have a minimal learning curve will be used.
    Due to the short period of time allotted for test execution, the Web site’s source code will be frozen while being tested. Except for critical fixes that are blocking the testing efforts, changes will not be scheduled while a unit of code is being tested.
    Basic metrics will be kept for test effort (i.e. hours), test cases executed, and incidents. Due to the lack of available tools and time, no attempt will be made to collect more sophisticated metrics such as code coverage.
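
    As a small aside (my own sketch, not something the plan specifies), the three basic metrics named here - effort in hours, test cases executed, and incidents - can be kept with nothing more than a daily log and a few sums:

    ```python
    # Minimal per-day metrics log for the three basic measures the plan commits to.
    # Day labels and figures are placeholders.
    daily_log = []

    def record(day, hours, cases_executed, incidents):
        daily_log.append({"day": day, "hours": hours,
                          "cases_executed": cases_executed, "incidents": incidents})

    record("day-01", hours=6.5, cases_executed=14, incidents=3)
    record("day-02", hours=8.0, cases_executed=21, incidents=5)

    totals = {key: sum(entry[key] for entry in daily_log)
              for key in ("hours", "cases_executed", "incidents")}
    print(totals)   # {'hours': 14.5, 'cases_executed': 35, 'incidents': 8}
    ```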

    4# Posted on 2007-04-27 15:29:10
    7. Item Pass/Fail Criteria
    The entrance criteria for each phase of testing must be met before the next phase can commence. Formal approval will be granted by the IS Director.
    The IS Director will retain the decision as to whether the total and/or criticality of any or all detected incidents/defects warrant the delay (or rework) of the BDonline release 1.0 Web site.

    11. Test Environments
    There are essentially two parts to the BDonline application in production: the client side, over which B&D has little control because the application will be accessed over the Internet by members of the general public, and the server side, which (initially) will be comprised of a single cluster of servers residing at B&D’s corporate IS center.

    Available Client-side Environments
    Due to a limited budget and the pressing need to complete the testing phase, B&D has decided not to purchase any additional client-side hardware; instead B&D will utilize its existing set of desktop and laptop machines, which currently consists of the following machine specifications:

    . “High-end PC” – B&D’s current desktop standard
    Pentium III 677MHz, 128MB RAM, 8GB HD, 17” Color Screen (default 1024 x 768 – 16-bit color), external stereo speakers and 56.6kbps Modem or 100Mb Ethernet Internet connection, typically running Windows ME or Windows 2000 Professional

    . “Mid-range laptop” – B&D’s current laptop standard
    Pentium II 333MHz, 96MB RAM, 4GB HD, 12” Color Screen (default 800 x 600 – 16-bit color), built-in stereo speakers and 56.6kbps Modem or 10Mb Ethernet Internet connection, typically running Windows 98 SE or SR2 (with Y2K upgrades)

    . “Low-end PC” – B&D’s old desktop standard
    Pentium 100MHz, 32MB RAM, 13GB HD, 15” Color Monitor (default 1024 x 768 – 256 color), external stereo speakers and 14.4/28.8/33.3kbps Modems or 10Mb Ethernet Internet connection, typically running Windows 95 SE or A (with Y2K upgrades)

    . “Legacy laptop” – B&D’s old laptop standard
    486DX 50MHz, 8MB RAM, 250MB HD, 8” Mono Screen (default 640 x 480 – 256 color) and 14.4kbps Modem, typically running Windows 95 SE or A (with Y2K upgrades)
    Note: All PCs allowed Windows to manage their O/S swap file and had access to a color printer.
    The following Windows-based browsers are readily available for installation on any of the client platforms (listed alphabetically):
    . AOL 3.0, 4.0, 5.0 and 6.0
    . Home Reader (Audio browser)
    . Lynx (Text only browser)
    . Microsoft Internet Explorer 3.0, 4.72 SP1a, 5.0 and 5.5
    . Mosaic 2.0 (a very old legacy browser)
    . Neoplanet 5.0 (Austin Powers build using MS IE 5.0, representative of the many "custom" Browsers that use MS IE as a kernel)
    . Netscape Navigator 3.0, 4.5, 4.6, 4.7 & 6.0
    . Opera 3.6, 4.0 and 5.0 (a very fast browser, partly because it strictly adheres to the W3C HTML/JavaScript standards)
    Browser settings (cache size, number of connections, font selection, etc.) were, where possible, left unchanged, i.e. the installation defaults were used for all testing. No optional plug-ins will be installed.
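
    To give a feel for the size of the client-side matrix described above (this enumeration is my own illustration, not part of the plan), the sketch below crosses the four hardware platforms with the listed browser versions and then applies an invented first-pass filter; in practice the plan relies on prioritization and the pair-wise tool listed later (Aetgweb) to keep this manageable.

    ```python
    # Cross the client platforms with the available browsers to see how big the
    # full client-side test matrix is, then thin it with an (invented) priority rule.
    from itertools import product

    platforms = ["High-end PC", "Mid-range laptop", "Low-end PC", "Legacy laptop"]
    browsers = [
        "AOL 3.0", "AOL 4.0", "AOL 5.0", "AOL 6.0",
        "Home Reader", "Lynx",
        "MS IE 3.0", "MS IE 4.72 SP1a", "MS IE 5.0", "MS IE 5.5",
        "Mosaic 2.0", "Neoplanet 5.0",
        "Netscape 3.0", "Netscape 4.5", "Netscape 4.6", "Netscape 4.7", "Netscape 6.0",
        "Opera 3.6", "Opera 4.0", "Opera 5.0",
    ]

    matrix = list(product(platforms, browsers))
    print(len(matrix), "platform/browser combinations in the full matrix")  # 4 x 20 = 80

    # Example first pass: the standard desktop with every browser, plus the most
    # common browser versions on every other platform (this rule is made up).
    first_pass = [(p, b) for p, b in matrix
                  if p == "High-end PC" or b.startswith(("MS IE 5", "Netscape 4"))]
    print(len(first_pass), "combinations in the first pass")
    ```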

    Available Server-side Environments
    In addition to the cluster of servers used for production, two functionally exact replicas of the server-side production environment will be created and maintained. The development team will use one replica for unit and integration testing, while the second replica will be reserved for system testing by the testing team. Prior to a new release being put into production, the Web application will be moved to a staging area on the production system where a final series of acceptance tests can be performed.
    While the replica systems will be functionally the same as the production environment (e.g. the same system software installed in the same order, with the same installation options selected), due to budget constraints the replicas will be scaled-down versions of the production system (e.g. instead of several Web servers, there will be only one), and in the case of the unit/integration replica the hardware specifications may not be exactly the same (e.g. Pentium II processors instead of dual Pentium IVs).
    In addition, several network “file and print” servers will be made available (on a limited basis) for the testing team to use as load generators during performance tests.
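
    A bare-bones idea of what those load generators would do is sketched below; this is my own illustration using only the Python standard library, not the WebLoad/eTester tooling the plan actually lists, and the URL and user counts are placeholders.

    ```python
    # Fire a fixed number of concurrent "virtual users" at one URL and report the
    # average response time. Purely illustrative; real performance tests in the
    # plan use commercial tools with far more realistic scenarios.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/"    # placeholder target, not a real BDonline address
    VIRTUAL_USERS = 10
    REQUESTS_PER_USER = 5

    def virtual_user(_user_id):
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.time()
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    resp.read()
                timings.append(time.time() - start)
            except OSError:
                timings.append(None)          # failed request
        return timings

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
            results = list(pool.map(virtual_user, range(VIRTUAL_USERS)))
        ok = [t for user in results for t in user if t is not None]
        if ok:
            print(f"{len(ok)} successful requests, average {sum(ok) / len(ok):.3f}s")
        else:
            print("no successful requests")
    ```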

    Available Testing Tools
    The following 3rd party “free” tools were available to scan the Web site and provide feedback:
    . Bobby (accessibility, performance & html syntax) – cast.org
    . Freeappraisal (performance from 35 different cities) – keynote.com
    . Scrubby (meta tag analyzer) – scrubtheweb.com
    . Site analysis (search engine ratings) – site-see.com
    . Stylet (style sheet validation) – microsoft.com
    . Tune up (performance & style checker) & gif lube (gif analyzer) – websitegarage.com
    . Websat (usability) – nist.gov
    . Web metasearch (search engine ratings) – dogpile.com
    . Webstone (performance benchmarking tool) - mindcraft.com
    . Windiff (file comparison) – microsoft.com
    . W3C validation service (html and css syntax) – w3c.org

    In addition the following “commercial” tools were available:
    . Aetgweb (pair-wise combinations) from Telcordia/Argreenhouse
    . Astra Site Manager (linkage) from Mercury Interactive
    . eTester suite (capture/replay, linkage & performance) from RSW – 100 virtual user license
    . FrontPage (spell checking) from Microsoft
    . Ghost (software configuration) from Symantec
    . KeyReadiness (large scale performance testing) from Keynote systems
    . LinkBot Enterprise (link checking, HTML compliance and performance estimates) from Watchfire
    . Prophecy (large scale performance testing) from Envive
    . WebLoad (performance) from Radview – 1000 virtual user license
    . Word (readability estimates) from Microsoft
    A manual digital stopwatch was also available.
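
    For readers unfamiliar with what the link-checking tools above (Astra Site Manager, LinkBot Enterprise) automate, here is a rough standard-library sketch of the core idea; the start URL is a placeholder, and real tools add crawl depth, authentication, HTML compliance checks, reporting and much more.

    ```python
    # Fetch one page, pull out its <a href> targets and report any that fail to load.
    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def broken_links(page_url):
        with urllib.request.urlopen(page_url, timeout=10) as resp:
            parser = LinkExtractor()
            parser.feed(resp.read().decode("utf-8", errors="replace"))
        broken = []
        for href in parser.links:
            target = urljoin(page_url, href)
            if not target.startswith("http"):
                continue                          # skip mailto:, javascript:, etc.
            try:
                urllib.request.urlopen(target, timeout=10).close()
            except OSError:
                broken.append(target)
        return broken

    if __name__ == "__main__":
        for url in broken_links("http://localhost:8080/index.html"):   # placeholder
            print("broken:", url)
    ```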

    13. Staffing and Training Needs
    The relevant B&D managers will ensure that the staff assigned to this project are experienced with:
    . General development & testing techniques
    . B&D’s Web site development lifecycle methodology
    . All development and automated testing tools that they may be required to use

    14. Schedule
    The following tentative schedule will hopefully be met:
    . Test design (this document) is expected to be completed by the end of this month
    . Test execution is expected to last no more than two weeks and to start immediately after the test plans have been approved and the Web application has been hosted
    . Producing the Test Incident/Summary report is expected to be completed within 2 business days of completing the test execution phase
    A more detailed breakdown is currently being developed in MS Project and will be completed before this master test plan is approved.

    15. Risks and Contingencies
    The following seeks to identify some of the more likely project risks and propose possible contingencies:
    . Web site becomes unavailable – Testing will be delayed until this situation is rectified - May need to recruit more staff to do the testing or reduce the number of test cases.
    . Web testing software is not available/does not work (e.g. Web site uses cookies and tool can not handle cookies) - This will delay the introduction of automated testing and result in more manual testing - May need to recruit more staff to do the testing or reduce the number of test cases.
    . Testing staff shortages/unavailability – many of the test staff are part-time and have other, higher priorities, and no slack time is allocated for illness or vacation - May need to recruit more staff to do the testing or reduce the number of test cases.
    . A large number of defects/incidents makes it functionally impossible to run all of the test cases – As many test cases as possible will be executed; the IS Director, in conjunction with other B&D managers, will ultimately make the decision as to whether the number of defects/incidents warrants delaying the implementation of the production version.
    . Not enough time to complete all test cases – If time cannot be extended, individual test cases will be skipped, starting with the lowest priority.

    5# Posted on 2007-04-27 17:27:18
    Download a few templates and write yours by following them; that's the quickest and least troublesome way, haha.

    6# Posted on 2007-04-27 17:29:39
    Personally, when I write a test plan, I find a few items especially important: staffing, time allocation, known risks and how to handle them, and the test strategy.

    7# Posted on 2007-09-07 09:26:58
    Where can I download a template? I'd just like to download one and take a look first.

    8# Posted on 2007-09-28 17:04:24
    I think there is one with the Chinese and English side by side.