2. Introduction
Software Life Cycle
The software life cycle typically includes the following phases: requirements analysis, design, coding, testing, installation, and maintenance. In addition, there may be a need to provide operations and support activities for the product.
Requirements Analysis. Software organizations provide solutions to customer requirements by developing software that best suits their specifications. Thus, the life of software starts with the origin of its requirements. Very often, these requirements are vague, emergent, and always subject to change.
Analysis is performed to conduct an in-depth study of the proposed project, evaluate technical feasibility, discover how to partition the system, identify which areas of the requirements need to be elaborated with the customer, identify the impact of changes to the requirements, and identify which requirements should be allocated to which components.
Design and Specifications. The outcome of requirements analysis is the requirements specification. Using this, the overall design for the intended software is developed.
Activities in this phase - Perform Architectural Design for the software, Design Database (if applicable), Design User Interfaces, Select or Develop Algorithms (if applicable), Perform Detailed Design.
Coding. In this phase, the design is translated into source code. The development process tends to run iteratively through these phases rather than linearly; several models (waterfall, spiral, etc.) have been proposed to describe this process.
Activities in this phase - Create Test Data, Create Source, Generate Object Code, Create Operating Documentation, Plan Integration, Perform Integration.
Testing. The process of exercising the developed system with the intent of finding errors. Defects/flaws/bugs found at this stage are sent back to the developers for a fix and have to be re-tested. This phase iterates until the bugs are fixed and the software meets the requirements.
Activities in this phase - Plan Verification and Validation, Execute Verification and Validation Tasks, Collect and Analyze Metric Data, Plan Testing, Develop Test Requirements, Execute Tests.
Installation. The developed and tested software finally needs to be installed at the client's site. Careful planning is required to avoid problems for the user after installation.
Activities in this phase - Plan Installation, Distribution of Software, Installation of Software, Accept Software in Operational Environment.
Operation and Support. Support activities are usually performed by the organization that developed the software. Both parties usually agree on these activities before the system is developed.
Activities in this phase - Operate the System, Provide Technical Assistance and Consulting, Maintain Support Request Log.
Maintenance. The process does not stop once the software is completely implemented and installed at the user's site; this phase undertakes the development of new features, enhancements, and so on.
Activities in this phase - Reapplying the Software Life Cycle.
Various Life Cycle Models
The way you approach a particular application for testing depends greatly on the life cycle model it follows, because each life cycle model places emphasis on different aspects of the software: certain models provide good scope and time for testing, whereas others don't. So the number of test cases developed, the features covered, and the time spent on each issue all depend on the life cycle model the application follows.
No matter what the life cycle model is, every application undergoes the same phases described above as its life cycle.
Following are a few software life cycle models with their strengths and weaknesses.
Waterfall Model
Strengths:
•Emphasizes completion of one phase before moving on
•Emphasizes early planning, customer input, and design
•Emphasizes testing as an integral part of the life cycle
•Provides quality gates at each life cycle phase
Weaknesses:
•Depends on capturing and freezing requirements early in the life cycle
•Depends on separating requirements from design
•Feedback comes only from the testing phase back to earlier stages
•Not feasible in some organizations
•Emphasizes products rather than processes
Prototyping Model
Strengths:
•Requirements can be set earlier and more reliably
•Requirements can be communicated more clearly and completely between developers and clients
•Requirements and design options can be investigated quickly and with low cost
•More requirements and design faults are caught early
Weaknesses:
•Requires a prototyping tool and expertise in using it – a cost for the development organization
•The prototype may become the production system
Spiral Model
Strengths:
•It promotes reuse of existing software in early stages of development
•Allows quality objectives to be formulated during development
•Provides preparation for eventual evolution of the software product
•Eliminates errors and unattractive alternatives early.
•It balances resource expenditure.
•Doesn’t involve separate approaches for software development and software maintenance.
•Provides a viable framework for integrated Hardware-software system development.
Weaknesses:
•This process is usually associated with Rapid Application Development, which is very difficult to achieve in practice.
•The process is more difficult to manage and needs a very different approach from the waterfall model (the waterfall model has management techniques, such as Gantt charts, to assess progress).
Software Testing Life Cycle
The Software Testing Life Cycle consists of seven (generic) phases: 1) Planning, 2) Analysis, 3) Design, 4) Construction, 5) Testing Cycles, 6) Final Testing and Implementation, and 7) Post Implementation. Each phase in the life cycle is described below with its respective activities.
Planning. Prepare a high-level test plan and a QA plan (quality goals); identify reporting procedures, problem classification, acceptance criteria, databases for testing, measurement criteria (defect quantities/severity levels and defect origins), and project metrics; and begin the schedule for project testing. Also, plan to maintain all test cases (manual or automated) in a database.
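For illustration, here is a minimal Python sketch of what a stored test-case record might look like; the field names are assumptions for the sake of example, not a prescribed schema:

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        # Hypothetical fields for a test-case database record.
        case_id: str                # unique identifier, e.g. "TC-001"
        title: str                  # short description of what is verified
        priority: int               # 1 = highest
        automated: bool             # manual or automated execution
        estimated_minutes: int      # time estimate used for cycle planning
        steps: list = field(default_factory=list)  # ordered test steps
        expected_result: str = ""

    tc = TestCase("TC-001", "Login with valid credentials", 1, True, 5,
                  ["open login page", "enter valid user/password", "submit"],
                  "user lands on dashboard")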
Analysis. Involves activities that develop functional validation based on business requirements (writing test cases based on these details), develop the test case format (time estimates and priority assignments), develop test cycles (matrices and timelines), identify test cases to be automated (if applicable), define the areas for stress and performance testing, plan the test cycles required for the project and for regression testing, define procedures for data maintenance (backup, restore, validation), and review documentation.
Design. Activities in the design phase - Revise the test plan based on changes, revise test cycle matrices and timelines, verify that the test plan and test cases are stored in a database, continue to write test cases and add new ones based on changes, develop risk assessment criteria, formalize details for stress and performance testing, finalize test cycles (number of test cases per cycle based on time estimates per test case and priority), finalize the test plan, and estimate resources to support development in unit testing.
Construction (Unit Testing Phase). Complete all plans, complete test cycle matrices and timelines, complete all manual test cases, begin stress and performance testing, test the automated testing system and fix its bugs, support development in unit testing, and run the QA acceptance test suite to certify that the software is ready to turn over to QA.
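As an illustration of unit testing in this phase, here is a minimal Python sketch using the standard unittest module; the discount function under test is a hypothetical example:

    import unittest

    def discount(price, percent):
        # Function under test: apply a percentage discount.
        if not 0 <= percent <= 100:
            raise ValueError("percent out of range")
        return price * (1 - percent / 100)

    class DiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertAlmostEqual(discount(200.0, 25), 150.0)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()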
Test Cycle(s) / Bug Fixes (Re-Testing/System Testing Phase). Run the test cases (front end and back end), report bugs, verify fixes, and revise/add test cases as required.
Final Testing and Implementation (Code Freeze Phase). Execute all front-end test cases (manual and automated), execute all back-end test cases (manual and automated), execute all stress and performance tests, provide ongoing defect tracking metrics, provide ongoing complexity and design metrics, update estimates for test cases and test plans, document test cycles and regression testing, and update accordingly.
Post Implementation. A post-implementation evaluation meeting can be conducted to review the entire project. Activities in this phase - Prepare the final defect report and associated metrics; identify strategies to prevent similar problems in future projects; for the automation team: 1) review test cases to evaluate other cases to be automated for regression testing, 2) clean up automated test cases and variables, and 3) review the process of integrating results from automated testing with results from manual testing.
What is a bug? Why do bugs occur?
A software bug may be defined as a coding error that causes an unexpected defect, fault, flaw, or imperfection in a computer program. In other words, if a program does not perform as intended, it is most likely a bug.
There are bugs in software due to unclear or constantly changing requirements, software complexity, programming errors, timelines, errors in bug tracking, communication gaps, documentation errors, deviation from standards, etc.
· Unclear software requirements are due to miscommunication about what the software should or shouldn't do. On many occasions, the customer may not be completely clear about how the product should ultimately function. This is especially true when the software is developed for a completely new product. Such cases usually lead to a lot of misinterpretation on either or both sides.
· Constantly changing software requirements cause a lot of confusion and pressure on both the development and testing teams. Often, a new feature added or an existing feature removed can be linked to other modules or components in the software. Overlooking such issues causes bugs.
· Also, fixing a bug in one part/component of the software might introduce another in a different or the same component. Lack of foresight in anticipating such issues can cause serious problems and an increase in the bug count. This is one of the major reasons bugs occur, since developers are very often under pressure from timelines, frequently changing requirements, increases in the number of bugs, etc.
· Designing and re-designing, user interfaces, integration of modules, and database management all add to the complexity of the software and of the system as a whole.
· Fundamental problems with software design and architecture can cause problems in programming. Developed software is prone to error, as programmers can make mistakes too. As a tester, you can check for data reference/declaration errors, control flow errors, parameter errors, input/output errors, etc.
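For example, the following Python sketch shows a classic control-flow (off-by-one) error of the kind a tester can expose with a simple boundary or sum check; both functions are hypothetical:

    # Buggy: the loop starts at index 1 and silently skips the first element.
    def sum_scores_buggy(items):
        total = 0
        for i in range(1, len(items)):   # off-by-one: should start at 0
            total += items[i]
        return total

    def sum_scores_fixed(items):
        return sum(items)

    # A simple check on known data exposes the control-flow error:
    assert sum_scores_fixed([10, 20, 30]) == 60
    assert sum_scores_buggy([10, 20, 30]) == 50   # defect: first score dropped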
· Rescheduling of resources, re-doing or discarding already completed work, and changes in hardware/software requirements can affect the software too. Assigning a new developer to the project midway can cause bugs if proper coding standards have not been followed, code is poorly documented, knowledge transfer is ineffective, etc. Discarding a portion of the existing code might leave its trail behind in other parts of the software; overlooking or failing to eliminate such code can cause bugs. Serious bugs can especially occur in larger projects, as it gets tougher to identify the problem area.
· Programmers tend to rush as the deadline approaches. This is when most bugs occur, and it is possible that you will be able to spot bugs of all types and severities.
· The complexity of keeping track of all the bugs can itself cause bugs. This gets harder when a bug has a very complex life cycle, i.e., when the number of times it has been closed, re-opened, not accepted, ignored, etc. keeps increasing.
Bug Life Cycle
The Bug Life Cycle starts when an unintended software behavior is found and ends when the assigned developer fixes the bug and the fix is verified. A bug, when found, should be communicated and assigned to a developer who can fix it. Once fixed, the problem area should be re-tested, and confirmation should be made that the fix did not create problems elsewhere. In most cases the life cycle gets very complicated and difficult to track, making it imperative to have a bug/defect tracking system in place.
See Chapter 7 – Defect Tracking
Following are the different phases of a Bug Life Cycle:
Open: A bug is in the Open state when a tester identifies a problem area.
Accepted: The bug is then assigned to a developer for a fix. The developer accepts it if it is valid.
Not Accepted/Won't Fix: If the developer considers the bug low priority or does not accept it as a bug, it is pushed into the Not Accepted/Won't Fix state.
Such bugs are assigned to the project manager, who decides whether the bug needs a fix. If it does, the manager assigns it back to the developer; if it doesn't, it is assigned back to the tester, who closes the bug.
Pending: A bug accepted by the developer may not be fixed immediately. In such cases, it can be put in the Pending state.
Fixed: The programmer fixes the bug and resolves it as Fixed.
Close: The fixed bug is assigned back to the tester, who verifies the fix and puts it in the Close state.
Re-Open: Fixed bugs can be re-opened by the testers if the fix produces problems elsewhere.
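These states can be modelled as a small state machine. The Python sketch below is illustrative only; the transition table is an assumption derived from the descriptions above, not a standard defect-tracking API:

    # Allowed bug-state transitions, derived from the descriptions above.
    TRANSITIONS = {
        "Open":     {"Accepted", "Not Accepted/Won't Fix"},
        "Accepted": {"Pending", "Fixed"},
        "Not Accepted/Won't Fix": {"Accepted", "Close"},  # via project manager
        "Pending":  {"Fixed"},
        "Fixed":    {"Close", "Re-Open"},
        "Re-Open":  {"Accepted"},
        "Close":    set(),
    }

    def move(state, new_state):
        # Reject transitions the life cycle does not allow.
        if new_state not in TRANSITIONS[state]:
            raise ValueError(f"illegal transition {state} -> {new_state}")
        return new_state

    state = "Open"
    state = move(state, "Accepted")
    state = move(state, "Fixed")
    state = move(state, "Close")

Enforcing the transition table in this way is essentially what a defect-tracking tool does when it prevents, say, closing a bug that was never fixed.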
Cost of fixing bugs
The cost of fixing a bug grows roughly tenfold with each later stage at which it is found. A bug found and fixed during the early stages – the requirements or product specification stage – can be resolved by a brief interaction with the people concerned and might cost next to nothing.
During coding, a swiftly spotted mistake may take only a little effort to fix. During integration testing, it costs the paperwork of a bug report and a formally documented fix, as well as the delay and expense of a re-test.
During system testing it costs even more time and may delay delivery. Finally, during operations it may cause anything from a nuisance to a system failure, possibly with catastrophic consequences in systems such as an aircraft or an emergency service.
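As a rough worked example of this escalation, assuming an illustrative base cost of $10 at the requirements stage and tenfold growth per stage:

    # Illustrative only: a $10 requirements-stage fix growing tenfold per stage.
    stages = ["requirements", "coding", "integration testing",
              "system testing", "operation"]
    base_cost = 10
    for i, stage in enumerate(stages):
        print(f"{stage}: ${base_cost * 10 ** i}")
    # requirements: $10 ... operation: $100000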
When can testing be stopped/reduced?
It is difficult to determine exactly when to stop testing. Here are a few common factors that help you decide when to stop or reduce testing (the bug-rate criterion is illustrated in a sketch after the list):
· Deadlines (release deadlines, testing deadlines, etc.)
· Test cases completed with certain percentage passed
· Test budget depleted
· Coverage of code/functionality/requirements reaches a specified point
· Bug rate falls below a certain level
· Beta or alpha testing period ends |
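The bug-rate criterion can be made concrete with a small Python sketch; the numbers, window size, and threshold below are assumptions for illustration:

    # Illustrative sketch: has the bug-find rate fallen below a chosen threshold?
    def bug_rate_low_enough(bugs_per_day, threshold=1.0, window=5):
        recent = bugs_per_day[-window:]          # look at the last few days
        return sum(recent) / len(recent) < threshold

    daily_new_bugs = [9, 7, 6, 3, 2, 1, 1, 0, 1, 0]
    print(bug_rate_low_enough(daily_new_bugs))   # True: the rate has tapered off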