[Repost] BUILDING SIMULATION SOFTWARE TESTING


Typical Test Types

For building energy simulation software, the types of tests available include:

Analytical Tests - Analytical tests compare results to mathematical solutions for simple cases.

Comparative Tests - Comparative tests compare a program to itself or to other simulation programs.

Sensitivity Tests - Sensitivity tests compare results to a baseline case and exhaustively test the functioning of every modeling input, including weather data for a full range of climate zones.  

Full Code Tests - Full code tests are designed to exercise all lines of code by exercising combinations of inputs and tracking which lines of code have been executed.  

Range Tests - Range tests check the operation of the code over the complete range of valid inputs. The tests will also go beyond all valid ranges to ensure that adequate error messages are generated.  

Empirical Tests - Empirical tests compare results to experimental data. In many respects, these have proven to be the most difficult type of test to perform. It is important that high-quality data be used as the basis for comparison, along with complete and accurate information for developing a simulation model that represents the test building or module as closely as possible.

Black Box Testing

Black box testing, also known as functional or behavioral testing, exercises a pre-release or release version of the program by trying various inputs and looking for incorrect outputs or program crashes. It is one of the best ways to find bugs before users discover them. Given the complexity of many software packages, much thought must be devoted to optimizing the testing process. Ideally, only one test is performed for each meaningfully distinct set of software conditions. Among all possible sets of software conditions there are groups that, if every member were tested, would exercise the same code and reveal the same bugs (or, hopefully, the lack of bugs). Each such group is called an equivalence class, or an equivalence class partition.
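
A minimal sketch of equivalence class partitioning in Python, assuming a hypothetical validate_wall_area input check with an assumed valid range: one representative value is drawn from each partition, on the premise that all members of a partition exercise the same code path and would reveal the same bugs.

# Equivalence class partitioning: test one representative value per
# partition rather than every possible input. `validate_wall_area` and
# its 0.1..10000 m2 valid range are illustrative assumptions.

def validate_wall_area(square_meters: float) -> bool:
    return 0.1 <= square_meters <= 10000.0

# (partition description, representative value, expected result)
PARTITIONS = [
    ("below valid range", -5.0, False),
    ("lower boundary", 0.1, True),
    ("typical interior value", 25.0, True),
    ("upper boundary", 10000.0, True),
    ("above valid range", 99999.0, False),
]

for description, value, expected in PARTITIONS:
    actual = validate_wall_area(value)
    assert actual == expected, f"{description}: {value} -> {actual}"
print("one representative per equivalence class: all behave as expected")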


Acceptance Testing

Before all other tests, a range of simple buildings should be simulated automatically with the building simulation program undergoing testing. This should be considered an acceptance test, intended to weed out unstable versions of the software that would not be fruitful to debug further. Acceptance tests are often automated and may be provided to the programmers as a way to reduce the number of versions submitted for testing.
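
A minimal sketch of such an automated acceptance test, assuming a hypothetical command-line simulator ./simulate and a handful of simple building input files: each model is run, and any crash, nonzero exit code, or hang rejects the version outright.

# Acceptance ("smoke") test: run a few simple building models and reject
# the version if any run fails. The `simulate` executable and the input
# file names are illustrative assumptions.
import subprocess
import sys

SIMPLE_MODELS = ["one_zone_box.idf", "two_zone_box.idf", "slab_on_grade.idf"]

def accept_build(executable: str = "./simulate") -> bool:
    for model in SIMPLE_MODELS:
        try:
            result = subprocess.run([executable, model],
                                    capture_output=True, text=True,
                                    timeout=300)
        except subprocess.TimeoutExpired:
            print(f"REJECTED: {model} hung past the time limit")
            return False
        if result.returncode != 0:
            print(f"REJECTED: {model} exited with code {result.returncode}")
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if accept_build() else 1)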

Regression Testing

Tests are performed multiple times, including once after each major source code change. The changes are usually made to fix inconsistencies found during testing or to implement new features, and are therefore likely to introduce new problems. The first tests run with each new version are regression tests, which compare results before and after the change. The results of the regression tests are concatenated into a text file and compared against a text file prepared by the identical method on the previous version; the comparison is performed with a standard text file comparison utility and reports any new differences. Inconsistencies that the development team has fixed should be easy to identify, confirming that the fixes were made correctly. The regression series should consist of quick-to-perform automated tests plus all tests that have previously found errors. The likelihood of an old error creeping back into the code when new ones are fixed is high enough to justify keeping tests for long-since-fixed errors in the regression suite.
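
A minimal sketch of the comparison step, assuming the concatenated results of each version's run are stored in text files named results_<version>.txt: Python's standard difflib stands in for the text file comparison utility and reports any new differences.

# Regression comparison: diff the concatenated results of the current run
# against those of the previous version. The file naming scheme is an
# illustrative assumption.
import difflib
from pathlib import Path

def compare_results(previous: str, current: str) -> list[str]:
    old_lines = Path(previous).read_text().splitlines()
    new_lines = Path(current).read_text().splitlines()
    return list(difflib.unified_diff(
        old_lines, new_lines,
        fromfile=previous, tofile=current, lineterm="",
    ))

if __name__ == "__main__":
    differences = compare_results("results_v1.2.txt", "results_v1.3.txt")
    if differences:
        print("\n".join(differences))
    else:
        print("no differences: the new version reproduces previous results")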

Release Tests

Special consideration must be given just prior to a public release of the program to ensure that all bugs that were intended to be fixed were actually fixed. A release test is the most comprehensive automated test and, in large part, consists of a mixture of previously failed-and-fixed tests and tests that have always passed. It is critical that known problems be specifically identified prior to any form of public release. The public release should include a "readme" file that describes all known problems at the time of release; it is usually the tester who is responsible for compiling this list of problems and any work-arounds that exist. Release tests should also include virus checking of the final installation package: too many cases of virus distribution have been reported not to take this additional precaution. One further release test that needs to be performed prior to a final public release is a comparison of the prepared literature against the features that actually work reliably. It is crucial that the literature reflect all design decisions made during development and testing.
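
A minimal sketch of assembling a release suite from the test history, under the assumption that each test carries a record of whether it ever failed and whether the underlying defect was confirmed fixed; tests with still-open defects feed the readme's known-problems list instead of the suite.

# Release suite: combine every previously failed-and-fixed test with the
# tests that have always passed. The TestRecord structure is an
# illustrative assumption.
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    ever_failed: bool   # failed at least once during development
    fixed: bool         # the underlying defect was confirmed fixed

def build_release_suite(history: list[TestRecord]) -> list[str]:
    suite = []
    for record in history:
        if record.ever_failed and not record.fixed:
            # An open defect at release time belongs in the readme's
            # known-problems list, not in the passing suite.
            print(f"KNOWN PROBLEM (document in readme): {record.name}")
        else:
            suite.append(record.name)
    return suite

if __name__ == "__main__":
    history = [
        TestRecord("zone_heat_balance", ever_failed=True, fixed=True),
        TestRecord("weather_file_parse", ever_failed=False, fixed=False),
        TestRecord("hvac_autosizing", ever_failed=True, fixed=False),
    ]
    print("release suite:", build_release_suite(history))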

Beta Tests

Most software under development is not used by anyone other than its developers and testers until beta tests commence. A common practice is to recruit a group of beta testers who are knowledgeable users of similar products. The beta test group should be sent the program in executable form for their target environment, together with documentation. They should be warned extensively that the product still has bugs and should not be used for production purposes. It is unrealistic to expect a beta tester to contribute more than half a day of testing per week. Beta testers should be given a clear path for reporting bugs, general problems, and possible enhancements, including both e-mail and telephone. Often the lead tester is responsible for managing feedback from the beta testers and works with them to verify reported bugs by trying to reproduce them. It is not uncommon for beta test support to require full-time effort, especially if the beta tester list exceeds 20 people. Beta testers should receive new versions of the program no more often than once every other week; otherwise, installing and uninstalling the program consumes most of their time. At times a critical bug is found and beta testing needs to be halted, so all beta testers should be reachable by e-mail. Beta testers have a few different motivations for volunteering, and each needs to be catered to.

Full Code Tests

Full code tests are designed to exercise all lines of code by exercising combinations of inputs and tracking which lines of code have been executed. This is a glass box testing technique and must be performed by the programmer using software designed to aid in the testing process. Many of the range tests may also be appropriate for performing these tests. Full code tests are no panacea, since they cannot be as comprehensive as full logic flow tests, which test every possible set of preceding conditions executed prior to a particular portion of the code and, by definition, require an almost infinite amount of effort.
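
A minimal sketch of line-coverage tracking using Python's standard-library trace module (standing in for whatever coverage tool the programmer would actually use); heating_setpoint is a hypothetical function under test with a branch the chosen inputs never reach.

# Full code test support: track which lines execute for given inputs.
# `heating_setpoint` is an illustrative assumption.
import trace

def heating_setpoint(occupied: bool, outdoor_temp_c: float) -> float:
    if not occupied:
        return 15.0              # setback branch
    if outdoor_temp_c < -10.0:
        return 22.0              # extreme-cold branch: easy to miss
    return 21.0

tracer = trace.Trace(count=True, trace=False)
# Exercise two input combinations; the extreme-cold branch stays
# unexecuted, which the written coverage report then reveals.
tracer.runfunc(heating_setpoint, True, 5.0)
tracer.runfunc(heating_setpoint, False, 5.0)
tracer.results().write_results(show_missing=True, summary=True, coverdir=".")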

Documentation Tests

Comparing how the program actually operates against the documentation that describes it is often left to the tester. This is a crucial step: even though many people don't read the documentation, any inconsistencies can be expected to be costly and embarrassing to fix.
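
One way to mechanize part of this in Python, not mentioned in the source but a direct instance of the idea, is doctest: usage examples embedded in the documentation are executed and compared against the program's actual behavior. degree_days is a hypothetical documented function.

# Documentation test: doctest runs the examples embedded in docstrings
# and flags any place where documented and actual behavior disagree.
import doctest

def degree_days(mean_temp_c: float, base_c: float = 18.0) -> float:
    """Return heating degree-days contributed by one day.

    >>> degree_days(10.0)
    8.0
    >>> degree_days(25.0)
    0.0
    """
    return max(base_c - mean_temp_c, 0.0)

if __name__ == "__main__":
    failures, _ = doctest.testmod(verbose=False)
    print("documentation matches behavior" if failures == 0
          else f"{failures} documented example(s) disagree with the code")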

Comparative Tests

Comparative tests compare a program to itself or to other simulation programs. This type of testing accomplishes results on two levels: validation and debugging. From a validation perspective, comparative tests show whether the software computes solutions that are reasonable compared to similar programs. This is a very powerful method of assessment, but it is no substitute for determining whether the program is absolutely correct, since it may be just as incorrect as the benchmark program or programs. The biggest strength of comparative testing is the ability to compare any cases that two or more programs can both model. This is much more flexible than analytical tests, for which specific solutions exist only for simple models, and much more flexible than empirical tests, for which data sets have usually been collected over a very narrow band of operation. Comparative testing is also useful for field-by-field input debugging. Complex programs have so many inputs and outputs that the results are often difficult to interpret, and engineering judgment or hand calculations are often needed to ascertain whether a given test passes or fails. Field-by-field comparative testing eliminates any hand-calculation requirements for the subset of fields that are equivalent in two or more simulation programs: the equivalent fields are exercised using equivalent inputs, and the relevant outputs are directly compared, as sketched below.

The most common comparative tests for building energy simulation programs are BESTEST and ASHRAE's Standard 140.
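
A minimal sketch of the field-by-field comparison, assuming the equivalent output fields of two programs have already been extracted into dictionaries, and assuming an illustrative 2 % tolerance: only the shared fields are compared, directly, with no hand calculation.

# Field-by-field comparative test: for output fields that two simulation
# programs model equivalently, compare values within a tolerance. The
# field names and the 2 % tolerance are illustrative assumptions.

PROGRAM_A = {"annual_heating_kwh": 10450.0, "annual_cooling_kwh": 3120.0}
PROGRAM_B = {"annual_heating_kwh": 10590.0, "annual_cooling_kwh": 3105.0}

def compare_fields(a: dict, b: dict, rel_tol: float = 0.02) -> list[str]:
    failures = []
    for field in sorted(set(a) & set(b)):      # only equivalent fields
        reference = max(abs(a[field]), abs(b[field]), 1e-9)
        if abs(a[field] - b[field]) / reference > rel_tol:
            failures.append(f"{field}: {a[field]} vs {b[field]}")
    return failures

if __name__ == "__main__":
    mismatches = compare_fields(PROGRAM_A, PROGRAM_B)
    print("agreement within tolerance" if not mismatches else mismatches)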

Analytical Tests

Analytical tests compare results to mathematical solutions for simple cases.
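
A minimal sketch of an analytical test using the closed-form steady-state conduction solution q = U * A * (T_in - T_out) for a single wall; the U-value, area, temperatures, and the stand-in simulated value are illustrative assumptions.

# Analytical test: for a case simple enough to solve by hand, the
# program's result must match the closed-form answer within a small
# tolerance. All numbers here are illustrative assumptions.
import math

def analytical_wall_heat_loss(u_value: float, area_m2: float,
                              t_in_c: float, t_out_c: float) -> float:
    return u_value * area_m2 * (t_in_c - t_out_c)   # watts

expected_w = analytical_wall_heat_loss(u_value=0.5, area_m2=20.0,
                                       t_in_c=21.0, t_out_c=-9.0)  # 300 W

# In a real test this value would come from running the simulation on
# the same single-wall model; a placeholder stands in here.
simulated_w = 299.4

assert math.isclose(simulated_w, expected_w, rel_tol=0.01), \
    f"simulation {simulated_w} W vs analytical {expected_w} W"
print(f"analytical check passed: {simulated_w} W vs {expected_w} W")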

Empirical Tests

Empirical tests compare results to experimental data. In many respects, these have proven to be the most difficult type of test to perform. It is important that high-quality data be used as the basis for comparison, along with complete and accurate information for developing a simulation model that represents the test building or module as closely as possible.
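
A minimal sketch of an empirical comparison using two statistics commonly applied to measured-versus-simulated energy data, mean bias error (MBE) and the coefficient of variation of the RMSE; the hourly values and the pass thresholds are illustrative assumptions.

# Empirical test: compare simulated output against measured data using
# MBE and CV(RMSE). Data points and thresholds are illustrative.
import math

measured  = [12.1, 14.0, 15.2, 13.8, 11.9, 10.4]   # e.g. hourly kWh
simulated = [11.8, 14.6, 15.0, 14.1, 12.3, 10.1]

n = len(measured)
mean_measured = sum(measured) / n
mbe = sum(s - m for s, m in zip(simulated, measured)) / sum(measured)
rmse = math.sqrt(sum((s - m) ** 2 for s, m in zip(simulated, measured)) / n)
cv_rmse = rmse / mean_measured

print(f"MBE = {mbe:+.1%}, CV(RMSE) = {cv_rmse:.1%}")
assert abs(mbe) <= 0.10 and cv_rmse <= 0.30, "outside assumed tolerances"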

Range Tests

Range tests check the operation of the code over the complete range of valid inputs. The tests will also go beyond all valid ranges to ensure that adequate error messages are generated.
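
A minimal sketch of a range test over a single input, assuming a hypothetical set_infiltration_ach setter with a documented valid range of 0 to 10 air changes per hour: values across the range must be accepted, and values beyond either end must produce a clear error rather than silent acceptance or a crash.

# Range test: sweep an input across its valid range, then push past both
# ends and require a clear error. `set_infiltration_ach` and its 0..10
# ACH range are illustrative assumptions.

def set_infiltration_ach(ach: float) -> float:
    if not 0.0 <= ach <= 10.0:
        raise ValueError(f"infiltration {ach} ACH outside valid range 0-10")
    return ach

# Inside the range: every sampled value must be accepted.
for value in [0.0, 0.5, 2.5, 7.75, 10.0]:
    assert set_infiltration_ach(value) == value

# Beyond the range: an informative error, not silence and not a crash.
for value in [-0.1, 10.1, 1e9]:
    try:
        set_infiltration_ach(value)
    except ValueError as err:
        assert "valid range" in str(err)
    else:
        raise AssertionError(f"out-of-range {value} was silently accepted")

print("range test passed: valid inputs accepted, invalid inputs rejected")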

Sensitivity Tests

Sensitivity tests compare results to a baseline case and exhaustively test the functioning of every modeling input, including weather data for a full range of climate zones.
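
A minimal sketch of a sensitivity test, assuming a hypothetical annual_energy_kwh model: each input is perturbed from the baseline one at a time, and any input whose change leaves the result identical is flagged, since that suggests the input is not wired into the calculation at all.

# Sensitivity test: perturb each modeling input from a baseline case,
# one at a time, and flag inputs that produce no change. The model and
# its inputs are illustrative assumptions.

def annual_energy_kwh(inputs: dict) -> float:
    # Deliberately ignores "shading_factor" to show what the test catches.
    return (inputs["floor_area_m2"] * 50.0
            + inputs["window_area_m2"] * 120.0 * inputs["solar_gain"])

BASELINE = {"floor_area_m2": 100.0, "window_area_m2": 15.0,
            "solar_gain": 0.6, "shading_factor": 0.8}

baseline_result = annual_energy_kwh(BASELINE)
for name, value in BASELINE.items():
    perturbed = dict(BASELINE, **{name: value * 1.10})  # +10 % change
    if annual_energy_kwh(perturbed) == baseline_result:
        print(f"INSENSITIVE: changing {name} had no effect on the result")
    else:
        print(f"ok: {name} influences the result")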

Executable Tests

Executable tests interrupt and restart the program, including tests that remove selected binary or input files, looking for graceful program stops with appropriate error messages. Executable tests also include:
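
A minimal sketch of the file-removal variety, assuming a hypothetical ./simulate executable that requires a weather.epw file: the file is hidden, the program must stop gracefully with a nonzero exit code and a message naming the missing file, and the file is restored afterwards.

# Executable test: remove a required input file and require a graceful
# stop with an informative message. The `simulate` executable and the
# weather-file name are illustrative assumptions.
import os
import subprocess

def test_missing_weather_file() -> None:
    if os.path.exists("weather.epw"):
        os.rename("weather.epw", "weather.epw.hidden")   # remove the input
    try:
        result = subprocess.run(
            ["./simulate", "one_zone_box.idf"],
            capture_output=True, text=True, timeout=60,
        )
        # Graceful stop: nonzero exit code plus a message naming the file.
        assert result.returncode != 0, "missing file was silently ignored"
        assert "weather" in (result.stderr + result.stdout).lower(), \
            "no informative message about the missing weather file"
    finally:
        if os.path.exists("weather.epw.hidden"):
            os.rename("weather.epw.hidden", "weather.epw")  # restore

if __name__ == "__main__":
    test_missing_weather_file()
    print("graceful stop confirmed for missing weather file")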