51Testing Software Testing Forum


Views: 26250 | Replies: 42

[Discussion] Post your software testing terminology here


#1 | Posted 2008-7-30 14:35:43
Common software testing terms:
1.静态测试:Non-Execution-Based Testing或Static testing
代码走查:Walkthrough
代码审查:Code Inspection
技术评审:Review
2.动态测试:Execution-Based Testing
3.白盒测试:White-Box Testing
4.黑盒测试:Black-Box Testing
5.灰盒测试:Gray-Box Testing
6.软件质量保证SQA:Software Quality Assurance
7.软件开发生命周期:Software Development Life Cycle
8.冒烟测试:Smoke Test
9.回归测试:Regression Test
10.功能测试:Function Testing
11.性能测试:Performance Testing
12.压力测试:Stress Testing
13.负载测试:Load Testing
14.易用性测试:Usability Testing
15.安装测试:Installation Testing
16.界面测试:UI Testing
17.配置测试:Configuration Testing
18.文档测试:Documentation Testing
19.兼容性测试:Compatibility Testing
20.安全性测试:Security Testing
21.恢复测试:Recovery Testing
22.单元测试:Unit Test
23.集成测试:Integration Test
24.系统测试:System Test
25.验收测试:Acceptance Test
26.测试计划应包括:
测试对象:The Test Objectives
测试范围:The Test Scope
测试策略:The Test Strategy
测试方法:The Test Approach
测试过程:The Test Procedures
测试环境:The Test Environment
测试完成标准:The Test Completion Criteria
测试用例:The Test Cases
测试进度表:The Test Schedules
风险:Risks
等等
27.主测试计划: a master test plan
28.需求规格说明书:The Requirements Specification
29.需求分析阶段:The Requirements Phase
30.接口:Interface
31.最终用户:The End User
32.正式的测试环境:Formal Test Environment
33.确认需求:Verifying The Requirements
34.模糊的需求:Ambiguous Requirements
35.运行和维护:Operation and Maintenance
36.可复用性:Reusability
37.可靠性:Reliability/Availability
38.电气电子工程师协会IEEE:The Institute of Electrical and Electronics Engineers
39.要从以下几方面测试软件:
正确性:Correctness
实用性:Utility
性能:Performance
健壮性:Robustness
可靠性:Reliability

About Bugzilla:
1. Bugs by severity (Severity):
Blocker: blocks development and/or testing work
Critical: crashes, data loss, memory overflow
Major: a major functional defect
Normal: an ordinary functional defect
Minor: a minor functional defect
Trivial: cosmetic issues or small flaws that do not affect use, such as spelling or font problems in menus and dialogs
Enhancement: a suggestion or request

2. Bugs by report status (Status):
Unconfirmed: awaiting confirmation
New: newly submitted
Assigned: assigned to a developer
Reopened: the problem was not resolved
Resolved: awaiting retest
Verified: awaiting archiving
Closed: archived

3. Bug resolutions (Resolution):
Fixed: the defect has been fixed
Invalid: not a defect
Wontfix: will not be fixed
Later: to be fixed in a later version
Remind: kept on record for now
Duplicate: a duplicate of an existing bug
Worksforme: cannot be reproduced
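The field values above can be sketched as plain data for validating incoming reports. A minimal sketch in Python; the constants mirror this post, not Bugzilla's actual schema, and `validate_report` is an illustrative helper, not a Bugzilla API:

```python
from typing import Optional

# Field values as listed above (illustrative, not Bugzilla's real schema).
SEVERITIES = ["Blocker", "Critical", "Major", "Normal", "Minor", "Trivial", "Enhancement"]
STATUSES = ["Unconfirmed", "New", "Assigned", "Reopened", "Resolved", "Verified", "Closed"]
RESOLUTIONS = ["Fixed", "Invalid", "Wontfix", "Later", "Remind", "Duplicate", "Worksforme"]

def validate_report(severity: str, status: str, resolution: Optional[str] = None) -> bool:
    """Return True if the field values all come from the lists above.

    A Resolution only makes sense once a bug has reached Resolved or later.
    """
    if severity not in SEVERITIES or status not in STATUSES:
        return False
    if resolution is not None:
        if resolution not in RESOLUTIONS:
            return False
        if STATUSES.index(status) < STATUSES.index("Resolved"):
            return False
    return True
```

For example, `validate_report("Critical", "New", "Fixed")` is rejected because an unresolved bug should not carry a resolution yet.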

#2 | OP | Posted 2008-7-30 14:37:05
L10N(Localization)   : 本地化
Lag time             : 延迟时间
LCSAJ:线性代码顺序和跳转(Linear Code Sequence And Jump)
LCSAJ coverage:LCSAJ覆盖      
LCSAJ testing:LCSAJ测试      
Lead time            : 前置时间
Load testing         : 负载测试
Localizability testing: 本地化能力测试
Localization testing : 本地化测试
logic analysis:逻辑分析      
logic-coverage testing:逻辑覆盖测试      
Maintainability      : 可维护性
maintainability testing:可维护性测试      
Maintenance          : 维护
Master project schedule :总体项目方案
Measurement          : 度量
Memory leak          : 内存泄漏
Migration testing    : 迁移测试
Milestone            : 里程碑
Mock up              : 模型,原型
modified condition/decision coverage:修改条件/判定覆盖      
modified condition/decision testing        :修改条件/判定测试      
modular decomposition:参考模块分解
Module testing       : 模块测试
Monkey testing       : 猴子测试(随机操作测试)
mouse over:鼠标在对象之上
mouse leave:鼠标离开对象   
MTBF:平均失效间隔时间(mean time between failures)
MTP:主测试计划(Main Test Plan)
MTTF:平均失效时间        (mean time to failure)
MTTR:平均修复时间(mean time to repair)
multiple condition coverage:多条件覆盖      
mutation analysis:变体分析      
N/A(Not applicable)  : 不适用的
Negative testing     : 负面测试(逆向测试/反向测试)
off by one:差一错误
non-functional requirements testing:非功能需求测试
nominal load:额定负载
N-switch coverage:N切换覆盖      
N-switch testing:N切换测试      
N-transitions:N转换      
Off-the-shelf software : 套装软件
operational testing:可操作性测试      
output domain:输出域
paper audit:书面审计      
Pair Programming     :  成对编程
partition testing:分类测试      
Path coverage        :  路径覆盖
path sensitizing:路径敏感性      
path testing:路径测试      
path:路径      
Peer review          :  同行评审
Performance          :  性能
Performance indicator:  性能(绩效)指标
Performance testing  :  性能测试
Pilot                :  试验
Pilot testing        :  引导测试
Portability          :  可移植性
portability testing:可移植性测试      
Positive testing     :  正向测试
Postcondition        :  后置条件
Precondition         :  前提条件
precondition:预置条件      
predicate data use:谓词数据使用      
predicate:谓词      
Priority             :  优先权
program instrumenter:程序插装      
progressive testing:递进测试      
Prototype            :  原型
Pseudo code          :  伪代码
pseudo-localization testing:伪本地化测试
pseudo-random:伪随机      
QC:质量控制(quality control)
Quality assurance(QA):  质量保证
Quality Control(QC)  :  质量控制
Race Condition:竞争状态      
Rational Unified Process(以下简称RUP):Rational统一过程
Recovery testing     :  恢复测试
Refactoring          :  重构
regression analysis and testing:回归分析和测试      
Regression testing   :  回归测试
Release              :  发布
Release note         :  版本说明
release:发布      
Reliability          :  可靠性
reliability assessment:可靠性评价      
reliability:可靠性      
Requirements management tool: 需求管理工具
Requirements-based testing : 基于需求的测试
Return of Investment(ROI): 投资回报率
review:评审      
Risk assessment      :  风险评估
risk:风险      
Robustness           :   强健性
Root Cause Analysis(RCA): 根本原因分析
safety critical:严格的安全性      
safety:(生命)安全性      
Sanity testing       :   健全测试
Schema Repository    :   模式库
Screen shot          :   抓屏、截图
SDP:软件开发计划(software development plan)
Security testing     :   安全性测试
security:(信息)安全性
serviceability testing:可服务性测试      
Severity             :   严重性
Shipment             :   发布
simple subpath:简单子路径      
Simulation           :  模拟
Simulator            :  模拟器
SLA:服务级别协议(Service Level Agreement)
Smoke testing        :   冒烟测试
Software development plan(SDP): 软件开发计划
Software development process: 软件开发过程
software diversity:软件多样性      
software element:软件元素      
software engineering environment:软件工程环境      
software engineering:软件工程      
Software life cycle  :   软件生命周期
source code:源代码      
source statement:源语句      
Specification        : 规格说明书
specified input:指定的输入      
spiral model        :螺旋模型      
SQAP:软件质量保证计划(Software Quality Assurance Plan)
SQL:结构化查询语言(structured query language)
Staged Delivery:分阶段交付方法
state diagram:状态图      
state transition testing        :状态转换测试      
state transition:状态转换      
state:状态      
Statement coverage   : 语句覆盖
statement testing:语句测试      
statement:语句      
Static Analysis:静态分析      
Static Analyzer:静态分析器      
Static Testing:静态测试      
statistical testing:统计测试      
Stepwise refinement  : 逐步优化
storage testing:存储测试      
Stress Testing       : 压力测试
structural coverage:结构化覆盖      
structural test case design:结构化测试用例设计      
structural testing:结构化测试      
structured basis testing:结构化的基础测试      
structured design:结构化设计      
structured programming:结构化编程      
structured walkthrough:结构化走读      
stub:桩
sub-area:子域     
Summary:  总结
SVVP:软件验证与确认计划(Software Verification & Validation Plan)
symbolic evaluation:符号评价      
symbolic execution:符号执行
symbolic trace:符号轨迹      
Synchronization      : 同步
Syntax testing       : 语法测试
system analysis:系统分析      
System design        : 系统设计
system integration:系统集成      
System Testing       : 系统测试
TC   TEST CASE 测试用例
TCS  TEST CASE SPECIFICATION 测试用例规格说明
TDS   TEST DESIGN SPECIFICATION 测试设计规格说明书
technical requirements testing:技术需求测试      
Test                 : 测试
test automation:测试自动化      
Test case            : 测试用例
test case design technique:测试用例设计技术      
test case suite:测试用例套     
test comparator:测试比较器      
test completion criterion:测试完成标准      
test coverage:测试覆盖      
Test design          : 测试设计
Test driver          : 测试驱动
test environment:测试环境      
test execution technique:测试执行技术      
test execution:测试执行      
test generator:测试生成器      
test harness:测试用具      
Test infrastructure  : 测试基础建设
test log:测试日志      
test measurement technique:测试度量技术
Test Metrics :测试度量      
test procedure:测试规程      
test records:测试记录      
test report:测试报告      
Test scenario        : 测试场景
Test Script.:测试脚本      
Test Specification:测试规格      
Test strategy        : 测试策略
test suite:测试套      
Test target          : 测试目标
Testware             :  测试件
Testability          : 可测试性
testability:可测试性      
Testing bed          : 测试平台
Testing coverage     : 测试覆盖
Testing environment  : 测试环境
Testing item         : 测试项
Testing plan         : 测试计划
Testing procedure    : 测试过程
Thread testing       : 线程测试
time sharing:时间共享      
time-boxed           : 固定时间
TIR    test incident report    测试事故报告
ToolTip:控件提示或说明
top-down testing:自顶向下测试      
TPS:测试步骤规格说明(Test Procedure Specification)
Traceability         : 可跟踪性
traceability analysis:跟踪性分析      
traceability matrix:跟踪矩阵      
Trade-off            : 平衡
transaction:事务/处理
transaction volume:交易量   
transform analysis:变换分析
trojan horse:特洛伊木马      
truth table:真值表      
TSR:测试总结报告(Test Summary Report)
Tune System          : 调优系统
TW TEST WARE :测试件
Unit Testing         :单元测试      
Usability Testing:可用性测试      
Usage scenario       : 使用场景
User acceptance Test : 用户验收测试
User database        :用户数据库
User interface(UI)   : 用户界面
User profile         : 用户信息
User scenario        : 用户场景
V&V (Verification & Validation) : 验证&确认
validation           :确认      
verification         :验证      
version              :版本      
Virtual user         : 虚拟用户
volume testing:容量测试  
VSS(Visual SourceSafe):微软的版本控制工具
VTP   Verification  TEST PLAN验证测试计划
VTR  Verification TEST REPORT验证测试报告
Walkthrough          : 走读
Waterfall model      : 瀑布模型
Web testing          : 网站测试
White box testing    : 白盒测试
Work breakdown structure (WBS) : 任务分解结构
Zero bug bounce (ZBB) : 零错误反弹

#3 | OP | Posted 2008-7-30 14:43:17

Software Testing 10 Rules

1. Test early and test often.

2. Integrate the application development and testing life cycles. You'll get better results and you won't have to mediate between two armed camps in your IT shop. (How should the second half of this sentence be understood?)

3. Formalize a testing methodology; you'll test everything the same way and you'll get uniform results.

4. Develop a comprehensive test plan; it forms the basis for the testing methodology.

5. Use both static and dynamic testing.

6. Define your expected results.

7. Understand the business reason behind the application. You'll write a better application and better testing scripts.

8. Use multiple levels and types of testing (regression, systems, integration, stress and load).

9. Review and inspect the work, it will lower costs.

10. Don't let your programmers check their own work; they'll miss their own errors.

#4 | OP | Posted 2008-7-30 14:48:45

Bug Life Cycle

What happens to a bug from start to finish.
        While attending testing seminars, I noticed that there was a gap in what was being taught. There’s a lot of theory presented, a lot of ‘why test’ classes and a lot of classes on specific techniques but nothing on a couple of practices that will go a long way towards improving the testing process in a company, specifically setting up a defect tracking system and enforcing policies and procedures to resolve those defects. Setting up these two things, more than anything else, will put a company on the road to organizing its testing and QA effort. To fill that gap, I’ve come up with the ‘Bug Life Cycle’ presentation. While I can’t claim it as my own, it is what I’ve learned over the years as a tester; many of you will find it familiar.

What is a bug?
        In computer technology, a bug is a coding error in a computer program. Myers defined it by saying that “A software error is present when the program does not do what its end user reasonably expects it to do.” (Myers, 1976.). I tell my testers if you don’t like it, it’s a bug.
        Over the years, my colleagues and I have decided that there are as many definitions for the term “bug” as there are testers. “There can never be an absolute definition for bugs, nor an absolute determination of their existence. The extent to which a program has bugs is measured by the extent to which it fails to be useful. This is a fundamentally human measure.” (Beizer, 1984.) For a more definitive list of many types of bugs refer to Software Testing by Cem Kaner et al., pages 363-432.

Who can report a bug?
        Anyone who can figure out that the software isn’t working properly can report a bug. The more people who critique a product, the better it’s going to be. However, here’s a short list of people expected to report bugs:
Testers / QA personnel
Developers
Technical Support
Beta sites
End users
Sales and marketing staff (especially when interacting with customers).

When do you report a bug?
        When you find it! When in doubt, write it up. Waiting means that you’ll forget to write it altogether or important details about the bug will be forgotten. Writing it now also gives you a ‘scratch pad’ to make notes on as you do more investigation and work on the bug.
        Also, writing the bug when you find it makes that information instantly available to everyone. You don’t have to run around the building telling everyone about the bug; a simple phone call or email will alert everyone that the bug exists. Additionally, the information about the bug doesn’t change or get forgotten with every telling of the story.

Bugs are tracked in a database
        The easiest way to keep track of defect reports is in a database. Paper is an OK way to record defect reports, but pieces of paper can get lost or destroyed; a database is more reliable and can be backed up on a regular basis.
        You can purchase many commercially available defect tracking databases or you can build your own. It’s up to you. I’ve always built my own with something small like Microsoft Access or SQL Server. The decision then was that it was cheaper to build and maintain it on site than it was to purchase it. You’ll have to run the numbers for your situation when you make that decision.
        The rule of thumb is one and only one defect per report (or record) when writing a bug report. If more than one defect is put into a report, the human tendency is to deal with the first problem and forget the rest of them. Also, defects are not always fixed at the same time. With one defect per report, as the defects get fixed, they will be tested individually instead of in a group where the chance that a defect is overlooked or forgotten is greater.
        You may hear the term “bugfile” used by people when referring to a defect database. The name bugfile is a slang term from the old WordPerfect Corporation. “Bugs” were first logged into a flat file database called DataPerfect; a file of bugs, hence the word “bugfile”.
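The one-defect-per-record rule maps naturally onto a single table. Here is a minimal sketch of a home-grown bugfile using Python's built-in sqlite3 module; the table and column names are invented for illustration, not taken from any product mentioned above:

```python
import sqlite3

# In-memory database for illustration; a real bugfile would live on disk
# and be backed up regularly, as the text recommends.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE bugs (
        id          INTEGER PRIMARY KEY,   -- one and only one defect per record
        reporter    TEXT NOT NULL,         -- who originated the report
        build       TEXT NOT NULL,         -- build/version the bug was found in
        feature     TEXT NOT NULL,         -- part of the product/spec affected
        description TEXT NOT NULL,         -- brief, e.g. "Fatal error when printing landscape."
        details     TEXT NOT NULL,         -- setup plus every step to duplicate
        status      TEXT NOT NULL DEFAULT 'Submitted'
    )
""")
conn.execute(
    "INSERT INTO bugs (reporter, build, feature, description, details) VALUES (?, ?, ?, ?, ?)",
    ("jdoe", "build-47", "Printing",
     "Fatal error when printing landscape.",
     "1. Open any document. 2. File > Print. 3. Choose landscape. 4. Click OK."),
)
status = conn.execute("SELECT status FROM bugs").fetchone()[0]
print(status)  # prints "Submitted": new records are flagged for the bug verifier
```

The `DEFAULT 'Submitted'` clause enforces the convention described later: every new record starts life as a flag to the verifier.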

A good bug report includes the following items:
        Put the Reporter’s Name on the bug. If there are questions we need to know who originated this report.
        Specify the Build or Version number of the code being worked on. Is this the shipping version or a build done in-house for testing and development? Some bugs may only occur in the shipping version; if this is the case, the version number is a crucial piece of information.
        Specify the Feature or Specification or part of the code. This facilitates assigning the bug to a developer assigned to that part of the product.
        Include a Brief Description of what the problem is. For example, “Fatal error when printing landscape.” is a good description; short and to the point.
        List Details including how to duplicate the bug and any other relevant data or clues about the bug. Start with how the computer and software are set up. List each and every step (don’t leave any out) to produce the bug. Sometimes a minor detail can make all the difference in duplicating or not duplicating a bug. For example, using the keyboard versus using the mouse may produce very different results when duplicating a bug.
        If the status isn’t ‘Submitted’ by default, change it to Submitted. This is a flag to the bug verifier that a new bug has been created and needs to be verified and assigned.

Things to remember…
        Keep the text of the bug impersonal. Bug reports will be read by a variety of people including those outside the department and even the company. Please don’t insult people’s ancestors or the company they work for or the state they live in or make any other impulsive or insensitive comment. Be careful with humorous remarks; one person’s humor is another person’s insult. Keep the writing professional.
        Be as specific as possible in describing the current state of the bug along with the steps to get into that state. Don’t make assumptions that the reader of the bug will be in the same frame of mind as you are. Please don’t make people guess where you are or how you got into that situation. Not everyone is thinking along the same lines as you are.

Rating Bugs
        While it is important to know how many bugs are in a product, it is even more useful to know how many of those bugs are severe, ship stopping bugs compared to the number of inconvenient bugs. To aid in assessing the state of the product and to prioritize bug fixes, bugs are ranked. The easiest way to rank or rate bugs is to assign each bug a severity rating and a likelihood rating. This assignment is done by the bug reporter when the bug is created. The bug’s rating is a combination of the severity and likelihood ratings.

#5 | OP | Posted 2008-7-30 14:50:49

Bug Life Cycle (continued)

Severity
        The severity tells the reader of the bug how bad the problem is; in other words, what the results of the bug are. Here’s a common list for judging the severity of bugs. There is sometimes disagreement about how bad a bug is; this list takes the guesswork out of assigning a severity to bugs.

Rating: Value
Blue screen: 1
Loss without a workaround: 2
Loss with a workaround: 3
Inconvenient: 4
Enhancement: 5

Likelihood
        Put yourself in the average user’s place. How likely is a user to encounter this bug? While the tester may encounter this bug every day with every build, if the user isn’t likely to see it, how bad can the bug be?
Rating: Value
Always: 1
Usually: 2
Sometimes: 3
Rarely: 4
Never: 5

Severity * Likelihood = Rating
        Computing the rating of a bug is done by multiplying the numeric values given to the severity and likelihood. Do the math by hand or let your defect tracker do it for you.
        The trick is to remember that the lower the number, the more severe the bug is. The highest rating is a 25 (5 X 5), the lowest is 1 (1 X 1). The bug with a 1 rating should be fixed first while the bug with a 25 rating may never get fixed.
        Looking at a list of these bugs ordered by rating means the most important ones will be at the top of the list to be dealt with first. Sorting bugs this way also lets management know whether the product is ready to ship or not. If the number of severe (1) bugs is zero, the product can ship. If there are any severe bugs, then bug fixing must continue.
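The arithmetic above is easy to sketch; the sample bugs below are invented for illustration:

```python
def rating(severity: int, likelihood: int) -> int:
    """Severity (1 = blue screen ... 5 = enhancement) times likelihood
    (1 = always ... 5 = never); lower numbers are more urgent."""
    return severity * likelihood

# Hypothetical bugs: (description, rating)
bugs = [
    ("blue screen on save, happens always", rating(1, 1)),   # 1: fix first
    ("loss with workaround, sometimes",     rating(3, 3)),   # 9
    ("enhancement, rarely requested",       rating(5, 4)),   # 20: may never be fixed
]

# Sort ascending so the most important bugs come first, as the text describes.
bugs.sort(key=lambda b: b[1])
print([r for _, r in bugs])  # prints [1, 9, 20]
```

The extremes match the text: the worst possible bug rates 1 × 1 = 1, and the least important rates 5 × 5 = 25.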

Other useful information
         Who’s the bug Assigned to; who’s going to be responsible for the bug and do the work on the bug?
        What Platform was the bug found on – Windows, Linux, etc. Is the bug specific to one platform or does it occur on all platforms?
        What Product was the bug found in? If your company is doing multiple products this is a good way to track those products.
        What Company would be concerned about this bug? If your company is working with multiple companies either as an OEM or as customer this is a good way to track that information.
        Whatever else you want or need to keep track of. Some of these fields will also have value to marketing and sales. It’s a useful way to track information about companies and clients.

An example of a bug report:

Figure 1. The Status tells us the state of the bug. The Severity tells us how bad the bug is. The Likelihood tells us how often the average user will see the bug. The Assigned To field tells us who is responsible for resolving the bug. The Feature tells us what part of the product the bug is in.

The Problem Description gives a brief (very brief) summation of the problem. This description is used when compiling reports or lists of bugs. The Details tell us the current setup or situation, the steps to duplicate the problem and any other essential information that will allow someone else to duplicate the bug. Once the bug is submitted, this field cannot be changed. Any additional information goes in the Notes field.

The Notes field contains any discussions about the bug: for example, when and why the status was changed and by whom; additional information about how to duplicate the bug that will aid in resolving it; and opinions about the bug. This is a “free-form” area where people should feel free to express their opinions without censure or criticism. Once comments are placed on a bug, they cannot be changed or deleted by anyone, not even the author. The comments, like the details, stand. Anyone reading the bug after the fact should be able to understand not only what the bug was and how bad it was, but also how it was resolved and why it was resolved that way.

Examples of poorly written bugs:
        Please keep in mind that I didn’t make any of these up!
        “My computer crashed.” We are sorry for your loss; can you give us more information?
        “It’s kinda ‘sucky’.” This one violates all sorts of rules. What’s kind of “sucky”? For that matter, define “sucky.” Better yet, don’t use the word “sucky”; it’s not in the dictionary and most certainly not in good taste.
        “It don’t.” This bug doesn’t provide enough information. What don’t? Don’t what?
        “Product needs a “speel” checker.” It goes without saying that “spelling counts”!

Now we have a bug…
        The first step is Verification. A bug verifier searches the database looking for all ‘Submitted’ bugs assigned to him. He then duplicates the bug by following the steps listed in the details section of the bug. If the bug is reproduced and has all the proper information, the assigned to field is changed to the appropriate person who will be fixing the bug. If the bug is not written clearly, is missing some steps or can’t be reproduced, it will be sent back to the bug reporter for additional work.
        The Assigned To field contains the name of the person responsible for that area of the product or code. It is important to note that from this point onward, the developer’s name stays on the bug. Why? There are usually more developers than there are testers. Developers have a set of features to work on and look at bugs from the standpoint of “what is assigned to me?”. Testers have multiple sets of features to test and look at bugs from the standpoint of “what needs to be tested?”; testers may also change what features they are assigned to test. Because of these different ways of working, developers sort bugs by the Assigned To field and testers sort bugs by the Status field. Leaving the developer’s name on the bug also makes it easier to send the bug back to the developer for more work: the tester simply changes the status field to Verified and it automatically goes back to the developer.

The Developer works on the bug…
        The first thing the developer does is give the bug an ‘In Progress’ status, indicating that he has seen the bug and is aware that it is his responsibility to resolve. The developer works on the bug and, based on his conclusions, assigns a status to the bug indicating what the next step should be.
        Remember, the developer does NOT change the Assigned To field. His name stays on the bug so that if the bug has to go back to him, it will make it back onto his list. This procedure ensures that bugs don’t fall through the cracks.
        The following is a list of statuses that a developer can assign to a bug.

Fixed
        The Fixed status indicates that a change was made to the code and will be available in the next build. Testers search the database on a daily basis looking for all Fixed status bugs. Then the bug reporter or tester assigned to the feature retests the bug duplicating the original circumstances. If the bug is fixed and it is now working properly, another test with slightly different circumstances is performed to confirm the fix. If the bug passes both tests, it gets a Tested status.
        If the bug doesn’t pass the test, the bug is given a Verified status and sent back to the developer. Notice here that since the bug’s Assigned To field has retained the developer’s name, it’s an easy process for the tester to send the bug back by simply changing the status to Verified.

Duplicate
        The Duplicate status bug is the same as a previously reported bug. Sometimes only the developer or person looking at the code can tell that the bug is a duplicate; it’s not always obvious from the surface. A note indicating the previous bug number is placed on the duplicate bug, and a note is also placed on the original bug indicating that a duplicate exists. When the original bug is fixed and tested, the duplicate bug is tested as well. If the bug really is a duplicate of the previous bug, then when the previous bug is fixed the duplicate will be fixed too, and both bugs get a Tested status.
        If the duplicate still occurs while the original bug is working properly, the duplicate no longer has a Duplicate status. It gets a Submitted status and is sent back to the developer. This is a “fail-safe” built into the bug life cycle: a check and balance that prevents legitimate bugs from being swept under the carpet or falling through the cracks.
        A note of warning: writing lots of duplicate bugs will get a tester a reputation for being an “airhead”. It pays to set time aside daily to read all the new bugs written the previous day.

Resolved
        Resolved means that the problem has been taken care of but no code has been changed. For example, bugs can be resolved by getting new device drivers or third-party software. Resolved bugs are tested to make sure that the problem really has been resolved in the new situation. If the problem no longer occurs, the bug gets a Tested status. If the Resolved bug still occurs, it is sent back to the developer with a Verified status.

#6 | OP | Posted 2008-7-30 14:52:10
Need More Information
        Need More Information or “NMI” indicates that the bug verifier or developer does not have enough information to duplicate or fix the bug; for example, the steps to duplicate the bug may be unclear or incomplete. The developer changes the status to ‘Need More Information’ and includes a question or comments to the reporter of the bug. This status is a flag to the bug reporter to supply the necessary information or a demonstration of the problem. After updating the bug information (in the Notes field), the status is put back to Verified so the developer can continue working on the bug. If the bug reporter cannot duplicate the bug, it is given a Can’t Duplicate status along with a note indicating the circumstances.
        The only person who can put “Can’t Duplicate” on a bug is the person who reported it (or the person testing it). The developer can NOT use this status, he must put Need More Information on it to give the bug reporter a chance to work on the bug.
        This is another example of a “fail-safe” built into the database. It is vital at this stage that the bug be given a second chance. The developer should never give a bug a ‘Can’t Duplicate’ status. The bug reporter needs an opportunity to clarify or add information to the bug or to retire it.

Working as Designed
        The developer has examined the bug, the product requirements and the design documents and determined that the bug is not a bug, it is Working as Designed. What the product or code is doing is intentional as per the design. Or as someone more aptly pointed out it’s “working as coded”! It’s doing exactly what the code said to do.
        This bug can go several directions after being assigned this status. If the tester agrees with the status, then the status stands and the bug is finished. The bug may be sent to documentation for inclusion in help files and the manual. If the tester disagrees with the status then the bug can be appealed by putting a Submitted status on it to send the bug back through the process again. The tester should include in the notes a reason why, although it is Working as Designed, it should be changed now. The bug may also be sent back to the design committee so that the design can be improved.
        This is a dangerous status. It’s an easy way to sweep bugs under the carpet by giving them this status. It’s up to the bug reporter to make sure the bug doesn’t get forgotten in this manner. Product managers may also review lists of bugs recently assigned Working as Designed.

Enhancement
        Enhancement means that while the suggested change is a great idea, because of technical reasons, time constraints or other factors it won’t be incorporated into the code until the next version of the product.
This status may also be appealed by changing the status to Submitted and adding a note specifying why it should be fixed now.

Defer
        Defer is almost the same status as Enhancement. This status implies that the cost of fixing the bug is too great given the benefits that it would produce. If the fix is a one liner to one file that doesn’t influence other files, it might be ok to fix the bug. On the other hand, if the fix will force the rebuild of many files which would force the re-testing of the product and there’s no time left to test the fix before shipping the product, then the fix would be unacceptable and the bug would get a Defer status. To appeal the status, send it back through the process again by putting a Submitted status on it and add a note saying why it should be fixed now.

Not to be Fixed
        You may see the Not to be Fixed status, although I don’t recommend making this status available for use. There may be extenuating circumstances where a bug will not be fixed because of technology, time constraints, a risk of destabilizing the code or other factors. A better status to use is Defer. To appeal the status, send it back through the process again by putting a Submitted status on it and add a note saying why it should be fixed now.
        This is similar to the Working as Designed status in that its use can be dangerous. Be on the watch for this one. Sometimes developers call this status “You can’t make me”.

Tested
        The Tested status is used only by testers on Fixed, Resolved and Duplicate bugs. This status is an “end of the road” status indicating that the bug has reached the end of its life cycle.

Pending
        The Pending status is used only by testers on Fixed bugs when a bug cannot be immediately tested. The tester may be waiting on hardware, device drivers, a build or additional information necessary to test the bug. When the necessary items have been obtained, the bug status is changed back to Fixed and then it is tested. Make sure the testing part isn’t skipped.

Can’t Duplicate
        This status is used only by the bug reporter; developers or managers cannot use this status. If a bug isn’t reproducible by the assigned developer or bug verifier the bug reporter needs a chance to clarify or add to the bug. There may be a hardware setup or situation or particular way of producing a bug that is peculiar to only this computer or bug reporter and he needs a chance to explain what the circumstances are. Limiting this status to bug reporters only prevents bugs from slipping between the cracks and not getting fixed.
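The life cycle described above can be sketched as an allowed-transitions map. This is a simplified reading of the text, limited to the paths it spells out explicitly; the map and the `can_move` helper are illustrative, not part of any tracker's API:

```python
# Allowed status transitions, as described in the text (simplified sketch).
TRANSITIONS = {
    "Submitted":   {"Verified"},                       # bug verifier reproduces it
    "Verified":    {"In Progress"},                    # developer acknowledges it
    "In Progress": {"Fixed", "Resolved", "Duplicate", "Need More Information",
                    "Working as Designed", "Enhancement", "Defer"},
    "Fixed":       {"Tested", "Verified", "Pending"},  # retest passes, fails, or must wait
    "Resolved":    {"Tested", "Verified"},
    "Duplicate":   {"Tested", "Submitted"},
    "Pending":     {"Fixed"},                          # back to Fixed once testable
    "Need More Information": {"Verified", "Can't Duplicate"},
    # Appeals: these statuses can be sent back through the process.
    "Working as Designed": {"Submitted"},
    "Enhancement": {"Submitted"},
    "Defer":       {"Submitted"},
}

def can_move(current: str, new: str, actor: str) -> bool:
    """Check a proposed status change against the map above.

    The text's fail-safe: only the original reporter may mark a
    bug Can't Duplicate; a developer must use Need More Information.
    """
    if new == "Can't Duplicate" and actor != "reporter":
        return False
    return new in TRANSITIONS.get(current, set())
```

Encoding the rules this way makes the two fail-safes checkable: a developer cannot reach Can't Duplicate directly, and every dead-end status still has an appeal path back to Submitted.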

The product has shipped, what happens next?
        First of all, to ship a product, the number of bugs rated 5 or less that still have a Fixed, Resolved, Pending or Need More Information status must be zero. We'll assume the product has shipped and all these bugs have been taken care of, which means the bugfile is full of bugs that have reached the end of their life cycle. Proper database maintenance takes place at this point: archiving or hiding all these bugs will make the database easier to use and read.
        All bugs with a Tested or Can't Duplicate status are archived. This means the records are either removed and placed in an archive database or flagged to be hidden from the current view of the database. Never delete any bug records; it may be necessary to do some historical research in the bugfile ("What did we ship when?" or "Why did we ship with this bug?").
        Enhancement and Defer bugs are either moved to the new bugfile or retained in the current one, and their status is changed back to Verified.
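The ship criterion above can be expressed as a simple query over the bugfile. A minimal sketch, assuming bugs are records with `rating` and `status` fields (a rating of 5 or less marks a must-fix bug, as above); the field names are my own:

```python
# Statuses that mean a bug has not yet reached the end of its life cycle.
BLOCKING_STATUSES = {"Fixed", "Resolved", "Pending", "Need More Information"}

def ship_blockers(bugfile, rating_cutoff=5):
    """Bugs that must reach end-of-life before the product can ship."""
    return [
        bug for bug in bugfile
        if bug["rating"] <= rating_cutoff and bug["status"] in BLOCKING_STATUSES
    ]

bugs = [
    {"id": 1, "rating": 3, "status": "Tested"},   # end of life cycle, not a blocker
    {"id": 2, "rating": 5, "status": "Fixed"},    # still needs testing, blocks shipping
    {"id": 3, "rating": 8, "status": "Pending"},  # rated worse than 5, can ship with it
]
assert [b["id"] for b in ship_blockers(bugs)] == [2]
```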

Advanced defect database techniques
        There are things you can do with your database to make it more than just a to-do list for developers and testers. The bugfile is basically raw data; sorting and filtering turn that data into information that is useful in management's decision making. Two ways to do this are reports and customized views.

Reports
        The data in the defect database is not very useful until it is sorted and presented in an organized fashion; only then does it become information. For example, sorted by developer and then by rating, it becomes a to-do list. Sorting by status lets the reader know how many bugs are submitted or in progress, i.e. how many bugs are currently being worked on. By feature: how many open bugs are there for a particular feature? Which features need more work and which are stable? Sorting by product is useful when more than one product is being worked on simultaneously.
        Be aware that there are certain metrics or reports that should not be used. If you use them you will destroy the credibility of your bugfile, and it will be reduced to a laundry list for developers. One such report is "how many bugs did a tester report"; another is "how many bugs did a developer fix". Neither has any useful purpose except to beat up people uselessly. See Software Testing by Cem Kaner et al.
#7 | Posted by OP on 2008-07-30 14:53:04

bug cycle

Examples of Reports
        The Product Manager wants a list of bugs to be fixed before shipping, sorted by rating, to determine how ready the product is to ship; he needs to know what work is left so that an appropriate schedule can be set. The Test Lead needs a list of bugs to be tested before shipping, sorted by tester, to make sure all the bugs get tested in the allotted time. The Development Lead has a list of bugs to be resolved before shipping, sorted by developer, so that workload adjustments can be made to ensure everything is taken care of by the deadline. Technical Support likes a list of the bugs that were fixed and not fixed before shipping so they can adequately prepare to support the product: knowing what has been fixed in a release (the new release or upgrade will fix that problem) and what bugs remain lets them resolve customer problems faster.
        A defect database with all these fields built in can sort defect data into useful information.
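Producing the role-specific lists described above is mostly grouping and sorting. A sketch with assumed field names such as `developer` and `rating` (lower rating meaning more severe, per the shipping rule earlier):

```python
from collections import defaultdict

def report_by(bugfile, field, sort_key="rating"):
    """Group bugs by `field` (e.g. 'developer', 'status', 'feature'),
    sorting each group by `sort_key` so the worst-rated bugs come first."""
    groups = defaultdict(list)
    for bug in bugfile:
        groups[bug[field]].append(bug)
    for key in groups:
        groups[key].sort(key=lambda b: b[sort_key])
    return dict(groups)

bugs = [
    {"id": 1, "developer": "ann", "status": "Fixed", "rating": 7},
    {"id": 2, "developer": "bob", "status": "Submitted", "rating": 2},
    {"id": 3, "developer": "ann", "status": "In Progress", "rating": 1},
]
todo = report_by(bugs, "developer")
assert [b["id"] for b in todo["ann"]] == [3, 1]  # ann's to-do list, worst first
```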

Customized views of the database
        If your defect tracking database supports it, you can limit the information shown based on who is logging in. For example, the database I built with SQL Server 7 was web-browser based. Each person was assigned a job category such as Tester, Developer or Manager, and the web page displayed was the one designed for that category. A Developer would see the Submitted and In Progress bugs assigned to him, while Testers would see all Fixed, NMI, Pending, Resolved and Duplicate bugs regardless of assignee. The Product Manager view showed all bugs assigned to him, all newly reported bugs and all newly updated bugs.
        The reason for these customized views is that each job looks at the defect database differently, based on its needs and the work to be accomplished. Testers don't care who a bug is assigned to, just what its status is; they want to see all bugs with a Fixed status, not just the ones they reported, so the bugs can be tested against the latest build. Developers only care about the current, active bugs assigned to them and aren't concerned with anyone else's. Product managers are concerned that bugs don't get swept under the carpet with an Enhancement or Defer status, so they want to see every bug with those statuses.
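The per-role views described above amount to one filter per job category. The statuses follow the article; the function, field names and role names are hypothetical:

```python
def view_for(role, user, bugfile):
    """Return the subset of bugs each job category cares about (a sketch)."""
    if role == "developer":
        # Developers: only current, active bugs assigned to them.
        return [b for b in bugfile
                if b["assigned_to"] == user
                and b["status"] in {"Submitted", "In Progress"}]
    if role == "tester":
        # Testers: every bug awaiting verification, regardless of assignee.
        return [b for b in bugfile
                if b["status"] in {"Fixed", "Need More Information",
                                   "Pending", "Resolved", "Duplicate"}]
    if role == "manager":
        # Managers: their own bugs, plus anything that might be swept
        # under the carpet with an Enhancement or Defer status.
        return [b for b in bugfile
                if b["status"] in {"Enhancement", "Defer"}
                or b["assigned_to"] == user]
    return []

bugs = [
    {"id": 1, "assigned_to": "ann", "status": "Submitted"},
    {"id": 2, "assigned_to": "bob", "status": "Fixed"},
    {"id": 3, "assigned_to": "ann", "status": "Defer"},
]
assert [b["id"] for b in view_for("developer", "ann", bugs)] == [1]
assert [b["id"] for b in view_for("tester", "ann", bugs)] == [2]
```

In a real deployment this filtering would live in the database query behind each role's web page rather than in application code.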
References
Software Testing by Cem Kaner et al.
Software Testing in the Real World by Edward Kit
Software Test Documentation (IEEE 829-1983)
The Complete Guide to Software Testing by Bill Hetzel
The Art of Software Testing by Glenford J. Myers
Software Testing and Quality Assurance by Boris Beizer
Software Quality: Analysis and Guidelines for Success by Capers Jones
Software Testing Techniques by Boris Beizer
Software Testing by Ron Patton (Sams, 2000, ISBN 0672319837); see chapters 18 and 19.
"Organize Your Problem Tracking System: Cleaning Up Your Bug Database Can Be as Easy as Organizing Your Sock Drawer" by Barry Mirrer
#8 | Posted by OP on 2008-07-30 15:04:34

Data Structure

Basic English vocabulary for data structures
I compiled and typed this list by hand. These are simple terms, but I expect some of you will find them useful.
数据抽象 data abstraction
数据元素 data element
数据对象 data object
数据项 data item
数据类型 data type
抽象数据类型 abstract data type
逻辑结构 logical structure
物理结构 physical structure
线性结构 linear structure
非线性结构 nonlinear structure
基本数据类型 atomic data type
固定聚合数据类型 fixed-aggregate data type
可变聚合数据类型 variable-aggregate data type
线性表 linear list
栈 stack
队列 queue
串 string
数组 array
树 tree
图 graph
查找 searching
更新 updating
排序(分类) sorting
插入 insertion
删除 deletion
前趋 predecessor
后继 successor
直接前趋 immediate predecessor
直接后继 immediate successor
双端列表 deque(double-ended queue)
循环队列 circular queue
指针 pointer
先进先出表(队列)first-in first-out list
后进先出表(队列)last-in first-out list
栈底 bottom
栈顶 top
压入 push
弹出 pop
队头 front
队尾 rear
上溢 overflow
下溢 underflow
数组 array
矩阵 matrix
多维数组 multi-dimensional array
以行为主的顺序分配 row major order
以列为主的顺序分配 column major order
三角矩阵 triangular matrix
对称矩阵 symmetric matrix
稀疏矩阵 sparse matrix
转置矩阵 transposed matrix
链表 linked list
线性链表 linear linked list
单链表 single linked list
多重链表 multilinked list
循环链表 circular linked list
双向链表 doubly linked list
十字链表 orthogonal list
广义表 generalized list
链 link
指针域 pointer field
链域 link field
头结点 head node
头指针 head pointer
尾指针 tail pointer
串 string
空白(空格)串 blank string
空串(零串)null string
子串 substring
树 tree
子树 subtree
森林 forest
根 root
叶子 leaf
结点 node
深度 depth
层次 level
双亲 parents
孩子 children
兄弟 brother
祖先 ancestor
子孙 descendant
二叉树 binary tree
平衡二叉树 balanced binary tree
满二叉树 full binary tree
完全二叉树 complete binary tree
遍历二叉树 traversing binary tree
二叉排序树 binary sort tree
二叉查找树 binary search tree
线索二叉树 threaded binary tree
哈夫曼树 Huffman tree
有序树 ordered tree
无序树 unordered tree
判定树 decision tree
双链树 doubly linked tree
数字查找树 digital search tree
树的遍历 traversal of tree
先序遍历 preorder traversal
中序遍历 inorder traversal
后序遍历 postorder traversal
图 graph
子图 subgraph
有向图 digraph(directed graph)
无向图 undigraph(undirected graph)
完全图 complete graph
连通图 connected graph
非连通图 unconnected graph
强连通图 strongly connected graph
弱连通图 weakly connected graph
加权图 weighted graph
有向无环图 directed acyclic graph
稀疏图 sparse graph
稠密图 dense graph
重连通图 biconnected graph
二部图 bipartite graph
边 edge
顶点 vertex
弧 arc
路径 path
回路(环)cycle
弧头 head
弧尾 tail
源点 source
终点 destination
汇点 sink
权 weight
连接点 articulation point
初始结点 initial node
终端结点 terminal node
相邻边 adjacent edge
相邻顶点 adjacent vertex
关联边 incident edge
入度 indegree
出度 outdegree
最短路径 shortest path
有序对 ordered pair
无序对 unordered pair
简单路径 simple path
简单回路 simple cycle
连通分量 connected component
邻接矩阵 adjacency matrix
邻接表 adjacency list
邻接多重表 adjacency multilist
遍历图 traversing graph
生成树 spanning tree
最小(代价)生成树 minimum (cost) spanning tree
生成森林 spanning forest
拓扑排序 topological sort
偏序 partial order
拓扑有序 topological order
AOV网 activity on vertex network
AOE网 activity on edge network
关键路径 critical path
匹配 matching
最大匹配 maximum matching
增广路径 augmenting path
增广路径图 augmenting path graph
查找 searching
线性查找(顺序查找)linear search (sequential search)
二分查找 binary search
分块查找 block search
散列查找 hash search
平均查找长度 average search length
散列表 hash table
散列函数 hash function
直接定址法 immediately allocating method
数字分析法 digital analysis method
平方取中法 mid-square method
折叠法 folding method
除法 division method
随机数法 random number method
排序 sort
内部排序 internal sort
外部排序 external sort
插入排序 insertion sort
缩小增量排序 diminishing increment sort
选择排序 selection sort
堆排序 heap sort
快速排序 quick sort
归并排序 merge sort
基数排序 radix sort
外部排序 external sort
平衡归并排序 balance merging sort
二路平衡归并排序 balance two-way merging sort
多步归并排序 polyphase merging sort
置换选择排序 replacement selection sort
文件 file
主文件 master file
顺序文件 sequential file
索引文件 indexed file
索引顺序文件 indexed sequential file
索引非顺序文件 indexed non-sequential file
直接存取文件 direct access file
多重链表文件 multilist file
倒排文件 inverted file
目录结构 directory structure
树型索引 tree index
#9 | Posted by OP on 2008-07-30 15:30:24
Software testing vocabulary, Chinese-English (Part 1)


Acceptance testing : 验收测试
Acceptance Testing:可接受性测试
Accessibility test : 软体适用性测试
actual outcome:实际结果      
Ad hoc testing     : 随机测试
Algorithm analysis : 算法分析
algorithm:算法      
Alpha testing      : α测试
analysis:分析      
anomaly:异常      
application software:应用软件      
Application under test (AUT) : 所测试的应用程序
Architecture       : 构架
Artifact           : 工件
ASQ:自动化软件质量(Automated Software Quality)
Assertion checking : 断言检查
Association        : 关联
Audit              : 审计
audit trail:审计跟踪      
Automated Testing:自动化测试      
Backus-Naur Form:BNF范式      
baseline:基线      
Basic Block:基本块      
basis test set:基本测试集      
Behaviour          : 行为
Bench test         : 基准测试
benchmark:标杆/指标/基准      
Best practise      : 最佳实践
Beta testing       : β测试
Black Box Testing:黑盒测试      
Blocking bug       : 阻碍性错误
Bottom-up testing  : 自底向上测试
boundary value coverage:边界值覆盖      
boundary value testing:边界值测试      
Boundary values    : 边界值
Boundary Value Analysis:边界值分析      
branch condition combination coverage:分支条件组合覆盖      
branch condition combination testing:分支条件组合测试      
branch condition coverage:分支条件覆盖      
branch condition testing:分支条件测试
branch condition:分支条件
Branch coverage    : 分支覆盖
branch outcome:分支结果      
branch point:分支点      
branch testing:分支测试      
branch:分支      
Breadth Testing:广度测试      
Brute force testing: 强力测试
Buddy test         : 合伙测试
Buffer             : 缓冲
Bug                : 错误
Bug bash           : 错误大扫除
bug fix            :  错误修正
Bug report         : 错误报告
Bug tracking system: 错误跟踪系统
bug:缺陷
Build              : 工作版本(内部小版本)
Build Verification Tests (BVTs): 版本验证测试
Build-in           : 内置
Capability Maturity Model (CMM):   能力成熟度模型
Capability Maturity Model Integration (CMMI): 能力成熟度模型整合
capture/playback tool:捕获/回放工具      
Capture/Replay Tool:捕获/回放工具      
CASE:计算机辅助软件工程(computer aided software engineering)
CAST:计算机辅助测试      
cause-effect graph:因果图      
certification        :证明      
change control:变更控制      
Change Management  :变更管理
Change Request     :变更请求
Character Set      : 字符集
Check In           :检入
Check Out          :检出
Closeout           : 收尾
code audit        :代码审计      
Code coverage      : 代码覆盖
Code Inspection:代码检视      
Code page          : 代码页
Code rule          : 编码规范
Code sytle         : 编码风格
Code Walkthrough:代码走读      
code-based testing:基于代码的测试      
coding standards:编程规范      
Common sense       : 常识
Compatibility Testing:兼容性测试      
complete path testing        :完全路径测试      
completeness:完整性      
complexity        :复杂性      
Component testing     : 组件测试
Component:组件      
computation data use:计算数据使用      
computer system security:计算机系统安全性      
Concurrency user      : 并发用户
Condition coverage    : 条件覆盖
condition coverage:条件覆盖      
condition outcome:条件结果      
condition:条件      
configuration control:配置控制      
Configuration item    : 配置项
configuration management:配置管理      
Configuration testing : 配置测试
conformance criterion: 一致性标准      
Conformance Testing: 一致性测试      
consistency        : 一致性      
consistency checker: 一致性检查器      
Control flow graph    : 控制流程图
control flow graph:控制流图      
control flow:控制流      
conversion testing:转换测试      
Core team             : 核心小组
corrective maintenance:故障检修      
correctness        :正确性      
coverage        :覆盖率      
coverage item:覆盖项      
crash:崩溃      
criticality analysis:关键性分析      
criticality:关键性      
CRM(change request management): 变更需求管理
Customer-focused mindset : 客户为中心的理念体系
Cyclomatic complexity : 圈复杂度
data corruption:数据污染      
data definition C-use pair:数据定义C-use使用对      
data definition P-use coverage:数据定义P-use覆盖      
data definition P-use pair:数据定义P-use使用对      
data definition:数据定义      
data definition-use coverage:数据定义使用覆盖      
data definition-use pair        :数据定义使用对      
data definition-use testing:数据定义使用测试      
data dictionary:数据字典      
Data Flow Analysis    : 数据流分析
data flow analysis:数据流分析      
data flow coverage:数据流覆盖      
data flow diagram:数据流图      
data flow testing:数据流测试      
data integrity:数据完整性      
data use:数据使用      
data validation:数据确认      
dead code:死代码      
Debug                 : 调试
Debugging:调试      
Decision condition:判定条件      
Decision coverage     : 判定覆盖
decision coverage:判定覆盖      
decision outcome:判定结果      
decision table:判定表      
decision:判定      
Defect                : 缺陷
defect density        : 缺陷密度
Defect Tracking       :缺陷跟踪
Deployment            : 部署
Depth Testing:深度测试
design for sustainability :可延续性的设计     
design of experiments:实验设计      
design-based testing:基于设计的测试      
Desk checking         : 桌前检查
desk checking:桌面检查   
Determine Usage Model : 确定应用模型  
Determine Potential Risks : 确定潜在风险
diagnostic:诊断      
DIF(decimation in frequency) : 按频率抽取
dirty testing:肮脏测试      
disaster recovery:灾难恢复      
DIT (decimation in time): 按时间抽取
documentation testing        :文档测试      
domain testing:域测试      
domain:域      
DTP (Detail Test Plan): 详细确认测试计划
Dynamic analysis      : 动态分析
dynamic analysis:动态分析      
Dynamic Testing:动态测试      
embedded software:嵌入式软件      
emulator:仿真器      
End-to-End testing:端到端测试      
Enhanced Request      :增强请求
entity relationship diagram:实体关系图  
Encryption Source Code Base: 加密算法源代码库   
Entry criteria        : 准入条件
entry point        :入口点      
Envisioning Phase     :  构想阶段
Equivalence class     : 等价类
Equivalence Class:等价类      
equivalence partition coverage:等价划分覆盖      
Equivalence partition testing : 等价划分测试
Equivalence Partitioning:等价划分      
Error                 :  错误
Error guessing        :  错误猜测
error seeding:错误播种/错误插值      
error:错误      
Event-driven          :  事件驱动
Exception handlers    :  异常处理器
exception:异常/例外      
executable statement:可执行语句      
Exhaustive Testing:穷尽测试      
exit point:出口点      
expected outcome:期望结果      
Exploratory testing   :  探索性测试
Failure              : 失效
Fault                : 故障
fault:故障      
feasible path:可达路径      
feature testing:特性测试      
Field testing        : 现场测试
FMEA:失效模型效果分析(Failure Modes and Effects Analysis)
FMECA:失效模型效果关键性分析(Failure Modes and Effects Criticality Analysis)
Framework            : 框架
FTA:故障树分析(Fault Tree Analysis)
functional decomposition:功能分解      
Functional Specification        :功能规格说明书      
Functional testing   : 功能测试
Functional Testing:功能测试      
G11N(Globalization)  : 全球化
Gap analysis         : 差距分析
Garbage characters   : 乱码字符
glass box testing:玻璃盒测试      
Glass-box testing    : 白箱测试或白盒测试
Glossary             : 术语表
GUI(Graphical User Interface): 图形用户界面
Hard-coding          : 硬编码
Hotfix               : 热补丁
I18N(Internationalization): 国际化
Identify Exploratory Tests – 识别探索性测试
IEEE:美国电子与电器工程师学会(Institute of Electrical and Electronic Engineers)
Incident : 事故
Incremental testing  : 渐增测试
incremental testing:渐增测试      
infeasible path:不可达路径      
input domain:输入域      
Inspection           : 审查
inspection:检视      
installability testing:可安装性测试      
Installing testing   : 安装测试
instrumentation:插装      
instrumenter:插装器      
Integration          :集成
Integration testing  : 集成测试
interface            : 接口
interface analysis:接口分析      
interface testing:接口测试      
interface:接口      
invalid inputs:无效输入      
isolation testing:孤立测试      
Issue                : 问题
Iteration            : 迭代
Iterative development: 迭代开发
job control language:工作控制语言      
Job:工作      
Key concepts         : 关键概念
Key Process Area     : 关键过程区域
Keyword driven testing : 关键字驱动测试
Kick-off meeting : 启动会议
#13 | Posted by OP on 2008-08-04 10:54:31
GUI Testing Checklist

Section 1 - Windows Compliance Standards

1.1. Application
1.2. For Each Window in the Application
1.3. Text Boxes
1.4. Option (Radio buttons)
1.5. Check Boxes
1.6. Command Buttons
1.7. Drop Down List Boxes
1.8. Combo Boxes
1.9. List Boxes
Section 2 - Tester's Screen Validation Checklist

2.1. Aesthetic Conditions
2.2. Validation Conditions
2.3. Navigation Conditions
2.4. Usability Conditions
2.5. Data Integrity Conditions
2.6. Modes (Editable Read-only) Conditions
2.7. General Conditions
2.8. Specific Field Tests
2.8.1. Date Field Checks
2.8.2. Numeric Fields
2.8.3. Alpha Field Checks
Section 3 - Validation Testing - Standard Actions

3.1. On every Screen
3.2. Shortcut keys / Hot Keys
3.3. Control Shortcut Keys
1. Windows Compliance Testing

For Each Application

Start the application by double clicking on its icon. The loading message should show the application name, version number, and a bigger pictorial representation of the icon.

No login is necessary.

The main window of the application should have the same caption as the caption of the icon in Program Manager.

Closing the application should result in an "Are you Sure" message box.

Attempt to start application twice. This should not be allowed - you should be returned to main Window.

Try to start the application twice as it is loading.

On each window, if the application is busy, then the hourglass should be displayed. If there is no hourglass (e.g. alpha access enquiries) then some enquiry-in-progress message should be displayed.

All screens should have a Help button, and pressing F1 should do the same.

For Each Window in the Application

If the window has a minimize button, click it.



The window should return to an icon on the bottom of the screen.

This icon should correspond to the original icon under Program Manager.

Double click the icon to return the window to its original size.

The window caption for every application should have the name of the application and the window name - especially the error messages. These should be checked for spelling, English and clarity, especially at the top of the screen. Check that the title of the window makes sense.

If the screen has a control menu, then use all ungreyed options. (see below)



Check all text on window for spelling/tense and grammar

Use TAB to move focus around the window.

Use SHIFT+TAB to move focus backwards.

Tab order should be left to right, and up to down within a group box on the screen. All controls should get focus - indicated by dotted box, or cursor. Tabbing to an entry field with text in it should highlight the entire text in the field.

The text in the micro help line should change. Check for spelling, clarity and non-updateable etc.

If a field is disabled (greyed) then it should not get focus. It should not be possible to select it with either the mouse or by using TAB. Try this for every greyed control.

Never updateable fields should be displayed with black text on a grey background with a black label.

All text should be left-justified, followed by a colon tight to it.

In a field that may or may not be updateable, the label text and contents changes from black to grey depending on the current status.

List boxes are always white background with black text whether they are disabled or not. All others are grey.

In general, do not use goto screens, use gosub, i.e. if a button causes another screen to be displayed, the screen should not hide the first screen.

When returning, return to the first screen cleanly i.e. no other screens/applications should appear.

In general, double-clicking is not essential. In general, everything can be done using both the mouse and the keyboard.

All tab buttons should have a distinct letter.

Text Boxes

Program Filename:

Move the mouse cursor over all enterable text boxes. The cursor should change from arrow to insert bar. If it doesn't then the text in the box should be grey or non-updateable.

Enter text into the box

Try to overflow the text by typing too many characters - this should be stopped. Check the field width with capital Ws.

Enter invalid characters - Letters in amount fields, try strange characters like + , - * etc. in All fields.

SHIFT and arrow should select characters. Selection should also be possible with mouse. Double click should select all text in box.

Option (Radio Buttons)



The left and right arrows should move 'ON' selection. So should up and down. Select with the mouse by clicking.

Check Boxes



Clicking with the mouse on the box, or on the text should SET/UNSET the box. SPACE should do the same.

Command Buttons



If the command button leads to another screen, and if the user can enter or change details on the other screen, then the text on the button should be followed by three dots.

All buttons except for OK and Cancel should have a letter access to them. This is indicated by a letter underlined in the button text. The button should be activated by pressing ALT+letter. Make sure that there is no duplication.

Click each button once with the mouse - This should activate

Tab to each button - Press SPACE - This should activate

Tab to each button - Press RETURN - This should activate

The above are VERY IMPORTANT, and should be done for EVERY command button.

Tab to another type of control (not a command button). One button on the screen should be the default, indicated by a thick black border. Pressing the RETURN key while focus is on any non-button control should activate the default button.

If there is a Cancel button on the screen, then pressing <Esc> should activate it.

If pressing the command button results in uncorrectable data e.g. closing an action step, there should be a message phrased positively with Yes/No answers where Yes results in the completion of the action.

Drop Down List Boxes



Pressing the arrow should give list of options. This list may be scrollable. You should not be able to type text in the box.

Pressing a letter should bring you to the first item in the list that starts with that letter. Pressing Ctrl+F4 should open/drop down the list box.

Spacing should be compatible with the existing windows spacing (word etc.). Items should be in alphabetical order with the exception of blank/none which is at the top or the bottom of the list box.

Dropping down the list with an item selected should display the list with the selected item at the top.

Make sure that only one space appears, you shouldn't have a blank line at the bottom.

Combo Boxes



Should allow text to be entered. Clicking the arrow should allow the user to choose from the list.

List Boxes



Should allow a single selection to be chosen, by clicking with the mouse, or using the up and down arrow keys.

Pressing a letter should take you to the first item in the list starting with that letter.

If there is a 'View' or 'Open' button beside the list box, then double clicking on a line in the list box should act the same way as selecting an item in the list box and then clicking the command button.

Force the scroll bar to appear, make sure all the data can be seen in the box.


2. Screen Validation Checklist
#14 | Posted by OP on 2008-08-04 10:56:26
Aesthetic Conditions:

Is the general screen background the correct colour?
Are the field prompts the correct color?
Are the field backgrounds the correct color?
In read-only mode, are the field prompts the correct color?
In read-only mode, are the field backgrounds the correct color?
Are all the screen prompts specified in the correct screen font?
Is the text in all fields specified in the correct screen font?
Are all the field prompts aligned perfectly on the screen?
Are all the field edit boxes aligned perfectly on the screen?
Are all groupboxes aligned correctly on the screen?
Should the screen be resizable?
Should the screen be minimisable?
Are all the field prompts spelt correctly?
Are all character or alpha-numeric fields left justified? This is the default unless otherwise specified.
Are all numeric fields right justified? This is the default unless otherwise specified.
Is all the microhelp text spelt correctly on this screen?
Is all the error message text spelt correctly on this screen?
Is all user input captured in UPPER case or lower case consistently?
Where the database requires a value (other than null) then this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
Assure that all windows have a consistent look and feel.
Assure that all dialog boxes have a consistent look and feel.
Validation Conditions:

Does a failure of validation on every field cause a sensible user error message?
Is the user required to fix entries which have failed validation tests?
Have any fields got multiple validation rules and if so are all rules being applied?
If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field) is the invalid entry identified and highlighted correctly with an error message?
Is validation consistently applied at screen level unless specifically required at field level?
For all numeric fields check whether negative numbers can and should be able to be entered.
For all numeric fields check the minimum and maximum values and also some mid-range values allowable?
For all character/alphanumeric fields check the field to ensure that there is a character limit specified and that this limit is exactly correct for the specified database size?
Do all mandatory fields require user input?
If any of the database columns don't allow null values then the corresponding screen fields must be mandatory. (If any field which initially was mandatory has become optional then check whether null values are allowed in this field.)
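The numeric-field checks above (negative values, minimum, maximum and mid-range values) are classic boundary-value analysis, and the candidate inputs can be generated mechanically. A minimal sketch:

```python
def boundary_values(minimum, maximum):
    """Candidate inputs for a numeric field that should accept [minimum, maximum]:
    the boundaries themselves, their in-range neighbours, a mid-range value,
    and the first out-of-range value on each side."""
    mid = (minimum + maximum) // 2
    valid = [minimum, minimum + 1, mid, maximum - 1, maximum]
    invalid = [minimum - 1, maximum + 1]
    return valid, invalid

# e.g. a percentage field
valid, invalid = boundary_values(0, 100)
assert valid == [0, 1, 50, 99, 100]  # all should be accepted
assert invalid == [-1, 101]          # both should be rejected with an error message
```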
Navigation Conditions:

Can the screen be accessed correctly from the menu?
Can the screen be accessed correctly from the toolbar?
Can the screen be accessed correctly by double clicking on a list control on the previous screen?
Can all screens accessible via buttons on this screen be accessed correctly?
Can all screens accessible by double clicking on a list control be accessed correctly?
Is the screen modal, is the user prevented from accessing other functions when this screen is active and is this correct?
Can a number of instances of this screen be opened at the same time and is this correct?
Usability Conditions:

Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is the default unless otherwise specified.
Is all date entry required in the correct format?
Have all pushbuttons on the screen been given appropriate shortcut keys?
Do the shortcut keys work correctly?
Have the menu options which apply to your screen got fast keys associated and should they have?
Does the TAB order specified on the screen go in sequence from top left to bottom right? This is the default unless otherwise specified.
Are all read-only fields avoided in the TAB sequence?
Are all disabled fields avoided in the TAB sequence?
Can the cursor be placed in the microhelp text box by clicking on the text box with the mouse?
Can the cursor be placed in read-only fields by clicking in the field with the mouse?
Is the cursor positioned in the first input field or control when the screen is opened?
Is there a default button specified on the screen?
Does the default button work correctly?
When an error message occurs does the focus return to the field in error when the user cancels it?
When the user Alt+Tabs to another application, does this have any impact on the screen upon return to the application?
Do all the field edit boxes indicate the number of characters they will hold by their length? E.g. a 30 character field should be a lot longer.
Data Integrity Conditions:

Is the data saved when the window is closed by double clicking on the close box?
Check the maximum field lengths to ensure that there are no truncated characters?
Where the database requires a value (other than null) then this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
Check maximum and minimum field values for numeric fields?
If numeric fields accept negative values can these be stored correctly on the database and does it make sense for the field to accept negative numbers?
If a set of radio buttons represent a fixed set of values such as A, B and C then what happens if a blank value is retrieved from the database? (In some situations rows can be created on the database by other functions which are not screen based and thus the required initial values can be incorrect.)
If a particular set of data is saved to the database check that each value gets saved fully to the database. Beware of truncation (of strings) and rounding of numeric values.
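The save-and-retrieve check above can be automated as a round-trip comparison: write each value, read it back, and flag truncation or rounding. A hypothetical sketch; `save` and `load` are stand-ins for hooks into the application under test, not real APIs:

```python
def roundtrip_mismatches(records, save, load):
    """Save each record, read it back, and report fields that came back
    changed (string truncation, numeric rounding, etc.)."""
    mismatches = []
    for record in records:
        key = save(record)
        stored = load(key)
        for field, value in record.items():
            if stored.get(field) != value:
                mismatches.append((key, field, value, stored.get(field)))
    return mismatches

# Demo against a fake in-memory "database" that truncates strings to 10 chars.
db = {}
def save(rec):
    db[rec["id"]] = {f: (v[:10] if isinstance(v, str) else v)
                     for f, v in rec.items()}
    return rec["id"]
def load(key):
    return db[key]

bad = roundtrip_mismatches([{"id": 1, "name": "a very long name"}], save, load)
assert bad == [(1, "name", "a very long name", "a very lon")]
```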
Modes (Editable Read-only) Conditions:

Are the screen and field colors adjusted correctly for read-only mode?
Should a read-only mode be provided for this screen?
Are all fields and controls disabled in read-only mode?
Can the screen be accessed from the previous screen/menu/toolbar in read-only mode?
Can all screens available from this screen be accessed in read-only mode?
Check that no validation is performed in read-only mode.
#15 | Posted by OP on 2008-08-04 10:56:52
General Conditions:

Assure the existence of the "Help" menu.
Assure that the proper commands and options are in each menu.
Assure that all buttons on all tool bars have a corresponding key command.
Assure that each menu command has an alternative (hot-key) key sequence which will invoke it where appropriate.
In drop down list boxes, ensure that the names are not abbreviations / cut short
In drop down list boxes, assure that the list and each entry in the list can be accessed via appropriate key / hot key combinations.
Ensure that duplicate hot keys do not exist on each screen
Ensure proper usage of the escape key (which is to undo any changes that have been made) and that it generates a caution message "Changes will be lost - Continue yes/no"
Assure that the cancel button functions the same as the escape key.
Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.
Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present. - I.e. make sure they don't work on the screen behind the current screen.
When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.
Assure that OK and Cancel buttons are grouped separately from other command buttons.
Assure that command button names are not abbreviations.
Assure that all field labels/names are not technical labels, but rather are names meaningful to system users.
Assure that command buttons are all of similar size and shape, and same font and font size.
Assure that each command button can be accessed via a hot key combination.
Assure that command buttons in the same window/dialog box do not have duplicate hot keys.
Assure that each window/dialog box has a clearly marked default value (command button, or other object) which is invoked when the Enter key is pressed - and NOT the Cancel or Close button
Assure that focus is set to an object/button which makes sense according to the function of the window/dialog box.
Assure that all option buttons (and radio buttons) names are not abbreviations.
Assure that option button names are not technical labels, but rather are names meaningful to system users.
If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.
Assure that option box names are not abbreviations.
Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas "Group Box".
Assure that the Tab key sequence which traverses the screens does so in a logical way.
Assure consistency of mouse actions across windows.
Assure that the color red is not used to highlight active objects (many individuals are red-green color blind).
Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop background characteristics).
Assure that the screen/window does not have a cluttered appearance.
Ctrl + F6 opens next tab within tabbed window.
Shift + Ctrl + F6 opens previous tab within tabbed window.
Tabbing will open next tab within tabbed window if on last field of current tab.
Tabbing will go onto the 'Continue' button if on last field of last tab within tabbed window.
Tabbing will go onto the next editable field in the window.
Banner style and size and display exact same as existing windows.
If 8 or fewer options in a list box, display all options on open of the list box; there should be no need to scroll.
Errors on continue will cause the user to be returned to the tab, with focus on the field causing the error (i.e. the tab is opened, highlighting the field with the error on it).
Pressing continue while on the first tab of a tabbed window (assuming all fields filled correctly) will not open all the tabs.
On open of tab focus will be on first editable field.
All fonts to be the same
Alt+F4 will close the tabbed window and return you to main screen or previous screen (as appropriate), generating "changes will be lost" message if necessary.
Microhelp text for every enabled field and button
Ensure all fields are disabled in read-only mode.
Progress messages on load of tabbed screens.
Return operates the Continue button.
If the retrieve on load of a tabbed window fails, the window should not open.
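Several of the conditions above (no duplicate hot keys per screen, hot-key access for every button) can be checked mechanically if you can enumerate a window's control labels. A minimal sketch, assuming the common convention that '&' in a label marks the hot-key letter; the label list is hypothetical:

```python
from collections import Counter

# Hypothetical control labels for one window; '&' marks the hot-key letter,
# as in many GUI toolkits. These labels are illustrative, not from the post.
labels = ["&Save", "&Search", "E&xit", "&Help"]

def hot_key(label):
    """Return the character following '&', lower-cased, or None if absent."""
    i = label.find("&")
    return label[i + 1].lower() if 0 <= i < len(label) - 1 else None

counts = Counter(k for k in map(hot_key, labels) if k)
duplicates = [k for k, n in counts.items() if n > 1]
print(duplicates)  # 's' is claimed by both &Save and &Search
```

Running this check over every window's labels turns the "duplicate hot keys" condition from a manual inspection into an automated one.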
Specific Field Tests


Date Field Checks
Assure that leap years are validated correctly & do not cause errors/miscalculations
Assure that month values 00 and 13 are validated correctly & do not cause errors/miscalculations
Assure that month values 00 and 13 are reported as errors
Assure that day values 00 and 32 are validated correctly & do not cause errors/miscalculations
Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/miscalculations
Assure that Feb. 30 is reported as an error
Assure that century change is validated correctly & does not cause errors/miscalculations
Assure that out of cycle dates are validated correctly & do not cause errors/miscalculations
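The date boundary cases above can be driven through a simple validator. This sketch leans on Python's datetime, which applies the leap-year and century rules for us; the validator name is illustrative:

```python
from datetime import date

def is_valid_date(year, month, day):
    """Reject impossible dates such as month 00/13, day 00/32, or Feb 30."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

# Boundary cases from the checklist above.
assert not is_valid_date(2008, 0, 15)    # month 00
assert not is_valid_date(2008, 13, 15)   # month 13
assert not is_valid_date(2008, 1, 0)     # day 00
assert not is_valid_date(2008, 1, 32)    # day 32
assert is_valid_date(2008, 2, 29)        # 2008 is a leap year
assert not is_valid_date(2008, 2, 30)    # Feb 30 must be rejected
assert not is_valid_date(1900, 2, 29)    # century rule: 1900 is not a leap year
assert is_valid_date(2000, 2, 29)        # but 2000 is
```

Testing the application's own date parser against the same table of cases is the point; the library validator just supplies the expected answers.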
Numeric Field Checks
Assure that lowest and highest values are handled correctly
Assure that invalid values are logged and reported
Assure that valid values are handled by the correct procedure
Assure that numeric fields with a blank in the first position are processed or reported as an error
Assure that fields with a blank in the last position are processed or reported as an error
Assure that both + and - values are correctly processed
Assure that division by zero does not occur
Include the value zero in all calculations
Include at least one in-range value
Include maximum and minimum range values
Include out-of-range values above the maximum and below the minimum
Assure that upper and lower values in ranges are handled correctly
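The numeric boundary cases above can be exercised against a simple range validator. The 0..100 range and the `validate` helper are illustrative assumptions; a real test would call the application's own parsing routine with the same inputs:

```python
# Boundary-value cases for a hypothetical numeric field accepting 0..100.
LOW, HIGH = 0, 100

def validate(raw):
    """Return the parsed value, or raise ValueError for invalid input."""
    value = float(raw)              # rejects blank and non-numeric text
    if not LOW <= value <= HIGH:
        raise ValueError("out of range")
    return value

# In-range, boundary, out-of-range, signed, and blank-padded inputs.
cases = ["0", "100", "50", "-1", "101", " 50", "50 ", "+7", "abc", ""]
for raw in cases:
    try:
        print(repr(raw), "->", validate(raw))
    except ValueError as err:
        print(repr(raw), "-> rejected:", err)
```

Note that Python's `float` strips surrounding blanks, so " 50" parses; whether your application should accept or reject blank-padded input is exactly the kind of decision this checklist forces you to pin down.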


Alpha Field Checks
Use blank and non-blank data
Include lowest and highest values
Include invalid characters & symbols
Include valid characters
Include data items with first position blank
Include data items with last position blank
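The alpha field cases above can likewise be run through a small validator. The accepted character set and length limit here are assumptions for illustration, not requirements from the post:

```python
import string

def is_valid_name(value, max_len=30):
    """Accept non-blank alphabetic names without leading/trailing blanks.
    Allowed characters (an assumption): letters, space, hyphen, apostrophe."""
    if not value or len(value) > max_len:
        return False
    if value[0] == " " or value[-1] == " ":
        return False
    allowed = string.ascii_letters + " -'"
    return all(c in allowed for c in value)

# Cases from the checklist above.
assert is_valid_name("O'Brien")         # valid characters
assert not is_valid_name("")            # blank data
assert not is_valid_name(" Smith")      # first position blank
assert not is_valid_name("Smith ")      # last position blank
assert not is_valid_name("Sm;th")       # invalid symbol
assert not is_valid_name("x" * 31)      # past the highest (length) value
```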
    16#
Posted on 2008-8-5 09:05:53
Thanks for all the hard work, OP. Don't let this thread sink; everyone come and bump it!

17#
Posted by the OP on 2008-8-6 10:42:04

    18#
Posted on 2008-8-6 11:13:33

Supporting the OP!

I'll make it a point to check in on this thread every day from now on.

    19#
Posted on 2008-8-11 11:31:44

Good stuff

All of this is great material.
Imagine how much time it would take to find all this on your own!
Thanks.

20#
Posted on 2008-8-11 16:36:45
