Software Test Automation Myths and Facts
Introduction
Today, software test automation is becoming more and more popular in both client/server (C/S) and web environments. As requirements change constantly (new requirements are often introduced on a daily basis) and the testing window gets smaller every day, managers are realizing a greater need for test automation. This is good news for those of us who do test automation. But I am afraid it is the only good news.
Myths & Facts
A number of articles and books have been written on different aspects of software test automation. "Test Automation Snake Oil" by James Bach is an excellent article on some of the myths of automation. I would like to discuss some of these myths and try to point out the facts behind them. I would also like to share some of my observations and, hopefully, point out possible solutions. These are based on my experience with a number of automation projects I have been involved in.
- Find more bugs: Some QA managers think that by doing automation they should be able to find more bugs. This is a myth. Let's think about it for a minute. The process of automation starts from a set of written test cases. In most organizations the test cases are written by test engineers who are familiar with the application they are testing. The test cases are then handed to the automation engineers, who in most cases are not very familiar with the test cases they are automating. In going from test cases to test scripts, automation adds nothing to the process that would find more bugs. When it comes to finding bugs, the test scripts are only as good as the test cases. So it is the test cases that find bugs (or fail to find them), not the test scripts; the sketch after this list makes the point concrete.
- Eliminate or reduce manual testers: In order to justify automation, some point out that it should make it possible to eliminate or reduce the number of manual testers in the long run and thereby save money. This is absolutely not true. Eliminating or reducing manual testers is not an objective of test automation. Here is why: as I pointed out earlier, the test scripts are only as good as the test cases, and the test cases are written primarily by the manual testers. They are the ones who know the application inside out. If word gets out (and it usually does) that the number of manual testers will be reduced by introducing automation, then most, if not all, of the manual testers will walk out the door, and quality will go with them.
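To make the first myth concrete, here is a minimal sketch of what an automated script really is: a mechanical replay of a written test case. The feature, test-case ID, and names below are hypothetical and not taken from any particular tool; the point is simply that the script performs exactly the steps the case lists and checks exactly the result the case names, so any defect the test case does not probe for goes undetected.

```python
# Minimal sketch: automating a (hypothetical) manual test case "TC-042: valid login".
# The script does nothing beyond the written steps and the stated expected result.

import unittest


class FakeApp:
    """Stand-in for the application under test (illustration only)."""

    def __init__(self):
        self._users = {"alice": "secret"}

    def login(self, user, password):
        # Returns True only for a known user with the right password.
        return self._users.get(user) == password


class TestTC042ValidLogin(unittest.TestCase):
    """Automates manual test case TC-042 (hypothetical):
    Step 1: start the application.
    Step 2: log in with a valid user name and password.
    Expected: login succeeds.
    """

    def test_valid_login(self):
        app = FakeApp()                         # Step 1
        result = app.login("alice", "secret")   # Step 2
        self.assertTrue(result)                 # Expected result, and only that


if __name__ == "__main__":
    unittest.main()
```

If the manual test case never probes, say, how the application handles a locked-out account, no amount of automating TC-042 will catch a defect there.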
Observations
I have met a number of QA managers who are frustrated with their automation. According to them, the tool is not doing what it is supposed to do. Here is a true story: a client I had the opportunity to work with for some time found out that the tool they had just bought did not support the application they were testing (I am not making this up). How can this happen? It happens more often than one would think; I will come back to this when I discuss possible solutions. A manager at one of the major telecom companies, with whom I recently interviewed, told me that after three years and more than a million dollars he was still struggling with automation. That is pretty sad, and I get the feeling he is not alone.
Solutions/Suggestions
Let’s discuss some of the reasons for this frustration and some of the solutions to this problem.
- Unrealistic expectations: Most managers have their first encounter with an automation tool when they watch the vendor's demo, where everything looks nice and simple. But everything is not so nice and simple when you try to use the tool with your own application. The vendors will only tell you the things you want to hear (how easy the tool is to use, how simple it is to set up, how it will save time and money, how it will help you find more bugs, and so on). This builds a false set of hopes and expectations.
- Lack of planning: A great deal of planning is required, from selection of the tool through its implementation. "Evaluating Tools" by Elisabeth Hendrickson is a very good article on the step-by-step process of selecting a tool. One of the steps she describes is identifying the "tool audience." This would be an ideal way to select a tool. It may not happen everywhere because of the everyday workload of the people involved, but the participation of the users in the selection process is very important, because they are the ones who will use the tool day in and day out. I am almost certain that what happened to one of my clients (the tool they bought did not support the application they were testing) would not have happened if the users had been involved in the selection process.
- Lack of a process: The lack of a sound process can also contribute to the failure of automation. Most organizations do have some kind of process in place: in most cases (although it differs from place to place), developers write code against a set of requirements. If a requirement does not call for a change in the GUI, then there should not be any change in the GUI. But if the GUI keeps changing from one release to the next without any requirement driving the change, there is a problem in the process. You may have the best tool and the best architecture for your environment in place and still have problems with your automation because of a faulty process, as the sketch below illustrates.
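The article does not prescribe any particular technique, but a short sketch (all control names hypothetical) shows why unrequired GUI churn is so expensive for automation, and why the architecture chosen up front matters: if every script hard-codes control identifiers, a single renamed button breaks all of them, whereas routing every reference through one shared object map confines the damage to a single edit.

```python
# Illustration only (hypothetical control identifiers): isolating GUI knowledge
# in one shared "object map" so that a GUI change touches one line, not every script.

# The only place in the test suite that knows concrete control identifiers.
OBJECT_MAP = {
    "user_field": "txtUserName",     # if developers rename these controls,
    "submit_button": "btnSubmit",    # only this dictionary needs to change
}


def type_text(control_id, text):
    """Stand-in for a GUI-driver call (illustration only)."""
    print(f"typing {text!r} into control {control_id!r}")


def click(control_id):
    """Stand-in for a GUI-driver call (illustration only)."""
    print(f"clicking control {control_id!r}")


def submit_user_name(name):
    """Test scripts refer to logical names, never to raw control IDs."""
    type_text(OBJECT_MAP["user_field"], name)
    click(OBJECT_MAP["submit_button"])


if __name__ == "__main__":
    submit_user_name("alice")
```

Even with this kind of isolation, a GUI that changes release after release without a requirement behind the change still means constant maintenance, which is why the process problem has to be fixed first.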
Conclusion
I think there is a need to educate QA managers about the benefits and limitations of automation; there is a need to separate the facts from the fiction. But here is the problem: in most cases, consultants are brought in to fix the problems of a prior attempt rather than to help with the initial setup, and by that point the managers have already learned (painfully) about the pitfalls of automation. To avoid this painful experience, I recommend (and most automation engineers will agree with me) spending more time up front researching the styles and techniques of automation and finding an architecture that fits the environment. There is no doubt that automation adds great value to the overall QA process, but a shortage of knowledge and understanding about automation, combined with a lack of planning, can also turn it into a nightmare.
M.N. Alam, IMI Systems Inc., Dallas, TX