So, going back to one of the previous points, one thing we can probably all agree on: it entirely depends on how you view a test. But are we saying the result of the test determines whether it was a positive or negative test? If so, many would disagree, holding instead that it is the thinking behind the test that should be called positive or negative. In actuality, most experienced testers do not think in terms of positive or negative; they think in terms of "what can I do to establish the level of risk?" To this point, I would argue that if that is truly how the tester thinks, then all concepts of positive/negative go right out of the window (as I think they mostly should anyway). Obviously you could classify the test design as negative or positive, but to some extent that is irrelevant. That said, I am not sure we are saying that the result of the test determines positivity or negativity. What I said earlier, relative to my example, was that "in both cases, these were good results because they showed you what the application was doing and you were able to determine if it was working correctly or not." Whether the application was behaving correctly or incorrectly, you still determined what the application was actually doing and, as such, those are good results. Thus the result tells you about the application, and that is good (without recourse to terms like positive and negative). If the result tells you nothing about how the application is functioning, that is obviously bad (and, again, without recourse to positive or negative).
We can apply the term "effective" to these types of test cases and say that all test cases, positive or negative, should be effective. But what about the idea of relying on the thinking behind the test? That concept is a little too vague for me, because people's thinking can differ, even on this issue, often depending on what they were taught regarding these concepts. As I showed, you can transform a positive test mentality into a negative test mentality just by thinking about the results of the test differently. And if negative testing is just about "disrupting a module" (the Devil's Advocate position), even a positive test can do that if there is a fault. I am being a little flip here, because with the notion of the thinking behind the test, someone would obviously be talking about intent. The intent is to disrupt the module so as to cause a fault, and that would constitute a negative test (by the Devil's Advocate position), while a positive test would not be trying to disrupt the module - even though disruption might occur (again, by the Devil's Advocate position). The key differentiator is the intent. I could sort of buy that but, then again, boundary testing is an attempt to disrupt modules, because you are seeing if the system can handle the boundary violation. This can also happen with results. As I said: "Your negative test can turn into a positive test just by shifting the emphasis of what you are looking for." That speaks to what you are hoping to find but also to how you view the problem. If the disruption you tried to cause in the module is, in fact, handled by the code, then you will get a positive test result - an error message of some sort.
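To make that concrete, here is a minimal sketch in Python (the transfer() function, its rule, and its message are invented purely for illustration, not taken from anyone's actual example): the test deliberately tries to "disrupt" the module with input it is supposed to refuse, and the "positive" reading of the result is that the designed-in rejection shows up.

def transfer(amount):
    """Hypothetical function: the design says negative amounts must be rejected."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return amount

def test_negative_amount_is_rejected():
    # "Negative" intent: feed the module input it was told to refuse.
    # "Positive" reading of the result: the designed-in rejection actually appears.
    try:
        transfer(-1)
    except ValueError:
        return  # the error showed up, which is the positive result discussed above
    raise AssertionError("invalid amount was accepted - the error never appeared")

Read one way, this is a negative test (the intent was to disrupt); read the other way, it is a positive test of the rejection logic the designers put in.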
Now I want to stay on this point because, again, some people state that negative testing is about exercising boundary conditions. Others were taught that this is not negative testing; rather, it is testing invalid inputs, which are positive tests - so it depends on how you were taught. And consider that a boundary condition, if not handled by the code logic, can severely disrupt the module - which is the point of negative testing according to some views of it. However, that is not the intent here according to some. And yet while that was not the intent, it might be the result. That is why the distinction, for me, blurs. But here is the crux of the point for me: you can generally forget about the intent of the test case design for the moment and look at the distinction of what the result is in terms of a "positive result" (the application showed me an error when it should have) and a "negative result" (the application did not show me an error when it should have). The latter definitely carries a more negative connotation than the former, regardless of the intent of the tester when designing the test case, and that is important to realize because sometimes our intentions for tests are changed by the reality of what exists and what happens as a result of running the tests. So, in the case of intent, for the situation of the application not showing an error when it was supposed to, this is simply a matter of writing "negative test cases" (if we stick with the term for a moment) that will generate conditions that should, in turn, generate error messages.
But the point is that the intent of the test case is to see if the application does not, in fact, generate that error message. In other words, you are looking for a negative result. But, then again, we can say: "Okay, now I will check that the application does generate the error message that it should." Well, in that case, we are really just running the negative test case! Either way the result is that the error either will or will not show up, and thus the result is, at least to some extent, determining the nature of the test case (in terms of negative or positive connotation). If the error does not show up, the invalid input might break the module. So is the breakdown this:
P: Not showing error when not supposed to
N: Not showing error when supposed to
P: Showing error when supposed to
N: Showing error when not supposed to
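Spelled out as runnable checks, those four rows might look something like this (a sketch only; submit_form() and its message are invented names, not anyone's actual application):

def submit_form(age):
    """Hypothetical form handler: ages outside 0-120 are supposed to produce an error message."""
    if age < 0 or age > 120:
        return "age out of range"   # the error the design calls for
    return None                     # no error

def test_valid_input_shows_no_error():
    # Row 1 (P): no error when none is supposed to appear.
    # If this fails, you have row 4 (N): an error shown when it was not supposed to be.
    assert submit_form(30) is None

def test_invalid_input_shows_error():
    # Row 3 (P): the error appears when it is supposed to.
    # If this fails, you have row 2 (N): no error when one was supposed to appear.
    assert submit_form(-5) == "age out of range"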
I think the one thing we have to consider is the viewpoint this hinges on: "negative testing" being looked at as forcing the module to do something it was not designed to do. However, if the module was never designed to do the thing you are trying, then your testing is of an interesting sort because, after all, you know nothing exists to handle it. So the real question should not be "What happens when I do this?" but rather "Why have we not designed this to handle this situation?" Let us say that something is designed to handle the "module disruption" you are proposing to test. In that case, you are actually positively testing the code that handles that situation. Strictly speaking, forcing a module to do something it was not designed to do suggests that this is something your average user can do. In other words, your average user could potentially use the application in such a fashion that the negative test case you are putting forth could be emulated by the user. However, if that is the case, design should be in place to mitigate that problem. And, again, you are then positively testing.
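As a sketch of that last point (again with invented names), suppose the design does include something to handle the "disruption" - say, an out-of-range index gets clamped. The test that tries the out-of-range index is then really exercising that handler:

def clamp_index(index, size):
    """Hypothetical mitigation: out-of-range indexes are clamped instead of being allowed to disrupt the module."""
    if index < 0:
        return 0
    if index >= size:
        return size - 1
    return index

def test_out_of_range_index_is_clamped():
    # The intent sounds "negative" (force an index the module was never meant to take),
    # but what is actually exercised is the mitigating code, so this is a positive
    # test of that piece of design.
    assert clamp_index(-3, 10) == 0
    assert clamp_index(42, 10) == 9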
Now, one can argue, "Well, it is possible that the user can try something that there simply is no way to design around." Okay. But then I ask: "Like what?" If there is no way you can design around it, or even design something to watch for the event or have the system account for it, how do you write a valid test case for that? I mean, you can write a test case that breaks the application by disrupting the module, but - you already knew that was going to happen. However, this is not as cut and dried as that, as I am sure anyone reading this could point out. After all, in some cases maybe you are not sure that what you are writing as a test case will be disruptive. Ah, but that is the rub. We just defined "negative testing" as trying to disrupt the module. Whether we succeed or not is a different issue (and speaks to the result), but that was the intent. We are trying to do something that is outside the bounds of design, and thus it is not so much a matter of testing for disruption as it is testing for the effects of that disruption. If the effects can be mitigated, there must be some sort of design that mitigates them, and then you are positively testing that mitigating influence.