skinapi posted on 2005-12-31 17:48:38

Write Maintainable Unit Tests That Will Save You Time And Tears

See the original article at http://msdn.microsoft.com/msdnmag/issues/06/01/UnitTesting/default.aspx
There's a lot of talk these days about unit testing and how developers should go about writing unit tests for their applications under different scenarios (for starters, see my June 2005 MSDN® Magazine article on testing your data layer, "Know Thy Code: Simplify Data Layer Unit Testing using Enterprise Services"). That means there are a lot of developers who say to themselves (and to their teams), "Hey, we should start writing tests, too!" And so they begin writing unit test upon unit test until they reach a point where the tests themselves become a problem. Perhaps maintaining them is too hard and takes too long, or they are not readable enough to make sense, or maybe they have bugs.

It is at that point that developers are forced to make a tough decision: dedicate precious time to improving their tests or ignore the problem, effectively throwing away their hard work. The cause of this problem is simply inexperience writing unit tests.

In this article, I'll try to bring you some of the most important practices I've learned over the years while developing and consulting, and while training developers. These tips should help you write effective, maintainable, and robust unit tests. And I hope this advice helps you to avoid huge amounts of wasted time and effort.


The Truth About Unit Testing

In this section I'll outline some of the most common beliefs about the benefits to be gained by using many unit tests and explain why these beliefs are not necessarily true. Then I'll help you make them hold true for your projects.

Tracking Bugs is Easier

Well, not necessarily. How do you know that your tests are correct? That they fail when something actually breaks down? How do you know that you're covering enough code in your tests to ensure that if anything is wrong in your production code, some test, somewhere, will break?

What happens if you have bugs in your unit tests? You'll suddenly start getting a lot of false positives: a failure is reported, but the problem isn't in the code under test; the test's own logic has a bug, and that is why it fails. These bugs are the most annoying and the hardest to find because you're usually looking in the wrong place, checking your application instead of checking your tests. Later in this article, I'll show you how to ensure that having a lot of unit tests does in fact make tracking bugs easier.

Code is Easier to Maintain

Considering the last point, you're probably inclined to think this belief isn't necessarily true either. And you're right. Let's say that for each logical method in your code you have at least one test method. (Realistically, you'll probably have even more.) In projects with good test coverage, as much as 60 percent of the code can be unit tests. Now consider that the tests have to be maintained as well. What happens if you have 20 tests against a complex logical method and you add a parameter to the method? The tests won't compile. The same thing happens when you change constructors of classes. Suddenly you find yourself needing to change a lot of tests just to make sure your application still works. And that takes lots of time.

For this belief to be true, you need to make sure your tests are easy to maintain. Write them while keeping the DRY rule in mind: Don't Repeat Yourself. I'll look at this issue more closely later.

Code is More Understandable

This is a benefit of unit tests that people don't usually expect at first. Think about changing code (say, a specific class or a method) in a project you've never seen before. How do you approach the code? You probably go around all the project code looking for places where this specific class or method is being used. Not surprisingly, unit tests are a great place to find such examples. And, when written correctly, unit tests can provide a handy set of API documentation for the project, making it easier for both veteran and new developers on the team to document and understand code behavior.

However, this is only true if the tests are readable and understandable, a rule that many unit test developers don't follow. I'll expand on this belief, and show you how to write readable unit tests in the Readable Tests section of this article.


Test the Right Thing

One of the most common mistakes made by newcomers to Test Driven Development (TDD) is that they often confuse the "Fail first" requirement with "Fail by testing something illogical." For example, you might start with a method requirement with the spec:

' returns the sum of the two numbers
Function Sum(ByVal a As Integer, ByVal b As Integer) As Integer

So you might write a failing test like this:
<TestMethod()> _
Public Sub Sum_AddsOneAndTwo()
    Dim result As Integer = Sum(1, 2)
    Assert.AreEqual(4, result, "bad sum")
End Sub

While at first glance this approach might look like a good way to write a failing test, it totally misses the point of why you initially set out to write a failing test.

A failing test proves that there is something wrong with the production code. That is, the test should pass when the feature you're testing is done. With the current example, however, the test will fail even if the production code is complete because the test is not logically correct. Making it pass requires a change to the test itself—not a change to the production code. (Knowing when a change to the production code is required is the intent with test-first programming.) In short, this test does not reflect the end result you'd like when the production code is complete; thus it is not a good test.

A good test in TDD requires you to change the production code to make it pass, rather than asserting a current reality or a desired result that is illogical given the requirements (for example, expecting 1+1 to return 0 just to make the test fail). The example shown earlier is similar to this situation. In practice, a test should reflect the result you'd expect once the current requirement is implemented; then you adjust the reality of your code to make the test pass.
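
For contrast, here's a minimal sketch of the logically correct version of that test: it asserts the result the spec actually promises, so it fails only while Sum is unimplemented or wrong, and passes once the production code is done.

<TestMethod()> _
Public Sub Sum_AddsOneAndTwo()
    Dim result As Integer = Sum(1, 2)
    Assert.AreEqual(3, result, "Sum should return the sum of its two arguments")
End Sub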

As a rule, a passing test should never be removed because passing tests serve as the regression tests for maintenance work. They are there to ensure that when you change code, you don't break any existing functionality that's already working. This is also why you shouldn't change a passing test unless the change is merely to make it more readable (in other words, refactoring the test).

When a Test Fails Incorrectly

Sometimes you might encounter failing tests even though the change you made to the code was absolutely reasonable. This usually means you've encountered conflicting requirements. Commonly, this is when a new requirement (a changed feature) conflicts with an old requirement that may no longer be valid. There are two possible routes to go here:

Delete the failing test after verifying that it is no longer valid—essentially since the old requirement is either invalid or is tested elsewhere.
Change the old test so you test the new requirement (essentially using a new test), and test the old requirement under new settings (the test logic stays the same, but the initialization function may change).

Sometimes a test is still valid even if it uses outdated techniques to accomplish its task. For example, say you have a Person class with a method Foo that behaves a certain way, and it is tested via Test X. Years later, another requirement comes along and the method's logic is enhanced to throw an exception when a new initialization feature of the object is missing. Suddenly, Test X fails, even though the new requirement has nothing to do with the behavior it verifies; the test fails simply because it is missing an initialization step before calling the method.

This doesn't mean you should remove Test X. You'd lose testing of some important functionality that should still work assuming correct initialization takes place. Instead, you might want to change the creation of the class to be initialized properly in your old test so that you can keep using it for its intended purpose.

Of course, if you have 200 tests failing just because they use that old constructor, you've got a problem maintaining your tests. This is why you should always remove duplication in your tests just as you should in production code.
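
For example (a sketch; the Person API here is hypothetical), funneling creation through one helper means only that helper has to change when a new initialization step becomes mandatory:

' Hypothetical helper: the one place that knows how to build a valid Person.
Private Function CreateInitializedPerson() As Person
    Dim p As New Person()
    p.InitializeProfile("default") ' the newly required initialization step
    Return p
End Function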

Test Coverage and Testing Angles

How do you know if you have good coverage for your new code? Try commenting out a line or a constraint check. If all tests still pass, you don't have enough code coverage and you probably need to add another unit test.

The best way to make sure you are adding the correct test is to leave that line or check commented out until you have produced a test that fails while it is commented out and passes once you restore it. This may be hard, but if you can't think of a way to make this code fail, you probably don't have a good reason for writing that line of code in the first place.

You never know when the next developer will try to play with your code. He may try to optimize it or wrongly delete some essential line. If you don't have a test that will fail, other developers may never know they made a mistake.

You might also want to try replacing various usages of parameters that are passed into your method with constants. For example, take a look at this method:

Public Function Sum(ByVal x As Integer, ByVal y As Integer, _
      ByVal allowNegatives As Boolean) As Integer
    If Not allowNegatives Then
        If x < 0 OrElse y < 0 Then Throw New Exception()
    End If
    Return x + y
End Function

You can mess with the code to test for coverage. Here are some variations on how to test for it:
' Try this...
If Not True Then ' replace flag with const
    If x < 0 OrElse y < 0 Then Throw New Exception()
End If

' Or this...
If Not allowNegatives Then
    ' replace check with const
    If False OrElse y < 0 Then Throw New Exception()
End If
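
For the variations above to be caught, you'd want tests along these lines (a sketch against the Sum method shown earlier; the test names are mine):

' Fails under either variation: hard-coding the flag to True skips the guard
' entirely, and replacing the x < 0 check with False lets a negative x through.
<TestMethod(), ExpectedException(GetType(Exception))> _
Public Sub Sum_NegativeXNotAllowed_Throws()
    Sum(-1, 2, False)
End Sub

' Pins down the happy path: negatives are summed when the flag allows them.
<TestMethod()> _
Public Sub Sum_NegativeXAllowed_ReturnsSum()
    Assert.AreEqual(1, Sum(-1, 2, True))
End Sub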


If all the tests still pass, you're missing a test. Another red flag is when you have only one test that checks for equality to various values, such as with the following:

Assert.AreEqual(3, retval)

Seeing this only once (in one test) in relation to some method usually means the production code could simply hard-code a return value of 3 and all the tests for this method would still pass. This, of course, means you're missing a test. If you're doing code reviews on unit tests, this is an easy one to look out for.

Make sure that your tests are written as simply as possible. A unit test should not contain an If or Select Case statement, a loop, or any other control logic. If you do find yourself writing something like a logical statement in your test, there's a good chance you're testing more than one thing. In doing so, you're making your test harder to read and maintain, while increasing the possibility of bugs. The KISS principle (Keep it simple, stupid) plays a large role in unit tests, as well as in production code. Keep your tests simple and you just might find bugs in your production code rather than in your unit tests.

Making Tests Easy to Run

If your tests aren't easy to run, people won't trust them. Your application will most likely have two different kinds of tests:

Tests that can run smoothly without any configuration. (With this sort of test, I could go to any machine, get the latest version of your code and tests from source control, and run them all without a hitch.)
Tests that need some configuration before they can be run.

The first type is what you're after. The second type is what you often end up with, especially if you're new to unit tests. If you find yourself with tests that have special needs, that's okay for now. But it is important that you separate the two groups of tests so they can be run individually.

The idea is that any developer should be able to make a change and run some tests without having to do any special configurations to enable the tests. If there are some tests that need special attention before running, the developer needs to know about them, so she can spend time enabling those tests. Because a lot of developers are lazy by nature (not you, of course), you should assume that they won't do the necessary configurations. Instead, they'll let the tests fail because they have better things to do.

When people let tests fail, they begin to think they can't trust the tests. It's hard to tell if that test might have caught a real bug this time or if it is just firing another false positive. The developers may not even understand why the tests are failing in the first place. Once they don't trust your tests, the developers will stop running them. This, in turn, will result in undiscovered bugs, and this will start you down a dark path. Bugs lead to frustration. Frustration leads to anger. Anger leads to the Dark Side.

To avoid the Dark Side, make sure you always have a group of tests that are ready to go—tests that will always run safely and can always be trusted. Put the tests that belong in the configuration-challenged group in a different folder, tree, or project with specific instructions about what needs to be done to run them. By doing this, developers will have tests they can run (and trust) without investing time configuring them. And when they do have time and desire, they can configure and run the more involved tests.


Creating Maintainable Tests

Try to avoid testing private/protected members. This issue can get a little religious for some people, but I strongly believe that 99 percent of the time you can fully test a class by writing unit tests against its public interfaces alone. Testing private members can make your tests more brittle if some internal aspect of the class being tested changes. You should be able to test private functionality by invoking some public functionality elsewhere in the code. Testing only public members leads to tests that can withstand constant code refactorings and internal implementation changes, while still making sure the overall functionality stays the same.

Reuse your creation, manipulation, and assertion code when possible. Don't create instances of classes directly inside a unit test. If you see the word "new" in front of any class that is not part of the unit test framework, you should consider putting that creation code in a special factory method that creates the object instance for you. You can then reuse that method to get fresh instances of your class in other tests. This helps to keep the tests maintainable across time and guards your tests from unforeseen changes to the code under test. As an example, Figure 1 shows a couple of simple tests that use a Calc class.
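
Figure 1 isn't reproduced in this repost, but from the description the tests would look roughly like this (the Add method name is my assumption):

<TestMethod()> _
Public Sub Add_TwoPositiveNumbers_ReturnsSum()
    Dim calc As New Calc() ' creation code duplicated in every test
    Assert.AreEqual(3, calc.Add(1, 2))
End Sub

<TestMethod()> _
Public Sub Add_NegativeAndPositive_ReturnsSum()
    Dim calc As New Calc() ' the same duplicated creation code again
    Assert.AreEqual(1, calc.Add(-1, 2))
End Sub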

Suppose you have 20, or maybe even 100, tests against the Calc class, all looking surprisingly similar to these. Now a design change forces you to remove the default Calc constructor and use a different constructor that takes some parameters. Immediately, all your tests break. You might be able to fix this using a simple find and replace—or you might not. The main issue is that you'll waste valuable time fixing your tests. This isn't the case, though, if you use a factory method to create Calc instances in your test classes, as shown in Figure 2.
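
Figure 2 is also missing here; judging by the explanation that follows, the refactored tests presumably take this shape:

<TestMethod()> _
Public Sub Add_TwoPositiveNumbers_ReturnsSum()
    Dim calc As Calc = Factory_CreateDefaultCalc()
    Assert.AreEqual(3, calc.Add(1, 2))
End Sub

' The single place that changes when the Calc constructor changes.
Private Function Factory_CreateDefaultCalc() As Calc
    Return New Calc()
End Function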

I've made a couple of changes to the tests to make them more maintainable. First, I moved the creation code into a reusable factory method. This means I would only have to change one simple method to make all the tests in this test class work with a new constructor. Another simple solution for the creation problem is to move the creation into the <TestInitialize()> method of the test class. Unfortunately, that works well only when the object is reused as a class member across most of the tests. If you only use it in some of the tests (a partially relevant member), you might as well instantiate it in the test itself to make the test more readable.

By the way, notice that I've named the method Factory_CreateDefaultCalc. I like to name any helper methods in my test class with special prefixes so that I know what they are used for. This can help with readability.

My second change was to reuse the assertion code in the test by moving this code into a verification method. A verification method is a reusable method in your test class that contains an Assert statement but that can take different inputs and verify something on them. You use verification methods when you are asserting the same thing over and over again with varying inputs or initial state. The nice thing about this is that even though the Assert is located in a different method, if the Assert fails you'll still get an assert exception and the original calling test will be shown in the test failure output window.
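
As a sketch (again assuming an Add method on Calc), a verification method might look like this:

' Reusable assertion logic; a failure here still points back to the calling test.
Private Sub VerifyAdd(ByVal calc As Calc, ByVal a As Integer, _
        ByVal b As Integer, ByVal expected As Integer)
    Assert.AreEqual(expected, calc.Add(a, b), "Add returned the wrong sum")
End Sub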

I'm also sending in the Calc instance instead of using a local variable, so I know I always send an instance that has been initialized by the calling test. You may want to do the same thing when changing object state—for instance, when configuring specific objects under test or objects that will be sent to the tests using specific Configure_XX methods. Those methods should explain what they configure an object to be used for. The code in Figure 3 shows an example of this.

This test has a lot of setup code that deals with adding initial state to the LoginManager object, which is a member in this test class. There is certainly some repetition here. Figure 4 shows how this example looks after refactoring out the initialization code.
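
Figures 3 and 4 aren't included in this repost; the refactored version probably looks something like the following sketch (the LoginManager API is assumed):

Private m_manager As LoginManager

' Intention-revealing setup method that replaces the repeated state code.
Private Sub Configure_ManagerWithOneUser(ByVal name As String, ByVal pass As String)
    m_manager = New LoginManager()
    m_manager.AddUser(name, pass)
End Sub

<TestMethod()> _
Public Sub IsLoginOK_ExistingUser_ReturnsTrue()
    Configure_ManagerWithOneUser("abc", "123")
    Assert.IsTrue(m_manager.IsLoginOK("abc", "123"), "an existing user should be able to log in")
End Sub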

The revised tests are much more readable and maintainable. Just be careful not to refactor your tests so much that they end up being a single, unreadable line of code. Note that I could have also used a Verify_XX method here, but that's not what I set out to illustrate.

Avoid Dependencies Between Tests

A test should be able to stand on its own. It should not rely on any other test, nor should it depend on tests being run in a specific order. I should be able to take all the tests you've written, run all or just some of them, in any order, and know that they will always behave the same. If you don't enforce this rule, you will end up with tests that only behave as expected when run in specific situations. This, of course, is problematic when you're under a deadline and you want to make sure you didn't introduce any new bugs into the system. You may get confused and think that there's something wrong with your code when, in reality, the problem is simply the order in which your tests are running. As a result, you may start to lose faith in your tests and write fewer and fewer of them. This is a long and slippery road.

If you call out from one test to another test, you create a dependency between them. You essentially test two things in one test (I'll explain why this is a problem in the next section). If, on the other hand, you have Test B and it depends on a state created by Test A, you fall into the "ordering" trap. If you or someone else were to change Test A, Test B would break and you wouldn't know why. Troubleshooting this failure can steal a lot of time.

Using <TestInitialize()> and <TestCleanup()> methods is essential to obtaining better test isolation. Make sure that your test always uses fresh, new instances of objects under test, and that all state is known in advance and will always be the same no matter where or when your test is run.
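
A minimal sketch of that pattern, reusing the factory method from earlier:

Private m_calc As Calc

<TestInitialize()> _
Public Sub Setup()
    ' A fresh instance before every test, so no state leaks between tests.
    m_calc = Factory_CreateDefaultCalc()
End Sub

<TestCleanup()> _
Public Sub Teardown()
    m_calc = Nothing
End Sub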

Avoid Multiple Asserts in a Single Unit Test

Consider Assert failures as symptoms of a disease and Asserts as indication points or blood checks for the body of the software. The more symptoms you can find, the easier the disease will be to diagnose and treat. If you have multiple Asserts in one test, only the first failing Assert will reveal itself as failed by throwing an exception. Consider the test illustrated in the following code:

<TestMethod()> _
Public Sub Sum_AnyParamBiggerThan1000IsNotSummed()
    Assert.AreEqual(3, Sum(1001, 1, 2))
    Assert.AreEqual(3, Sum(1, 1001, 2)) ' Assert fails
    Assert.AreEqual(3, Sum(1, 2, 1001)) ' This line never executes
End Sub

You lose sight of other possible symptoms from that line onwards. After a failure, subsequent Asserts aren't executed. These unused Asserts could provide valuable data (or symptoms) that would help you quickly narrow your focus and discover the underlying problem. So running multiple Asserts in a single test adds complexity with little value. Additional Asserts should be run in separate, self-contained unit tests so that you have a good opportunity to see what fails.
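
One way to split the example above into self-contained tests (a sketch; each symptom now reports independently):

<TestMethod()> _
Public Sub Sum_FirstParamBiggerThan1000_IsNotSummed()
    Assert.AreEqual(3, Sum(1001, 1, 2))
End Sub

<TestMethod()> _
Public Sub Sum_SecondParamBiggerThan1000_IsNotSummed()
    Assert.AreEqual(3, Sum(1, 1001, 2)) ' still fails, but no longer hides the next case
End Sub

<TestMethod()> _
Public Sub Sum_ThirdParamBiggerThan1000_IsNotSummed()
    Assert.AreEqual(3, Sum(1, 2, 1001)) ' now executes and reports its own result
End Sub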


Creating Readable Tests

If you've written unit tests before, do all of your unit tests have a good message on the Assert line? Probably not. Most developers don't bother writing a good Assert message because they are more concerned with writing the test.

Assume you're the new developer on a team and you're trying to read a unit test like this one:

<TestMethod()> _
Public Sub TestCalcParseNegative()
    Dim c As New Calc
    Assert.AreEqual(1000, c.Parse("-1, -1000"))
End Sub

As a simple exercise, see if you can figure out which usage of Calc's Parse method is being tested here. You probably have a good guess, but this could easily be any number of usage cases that produce the result of 1000:
Return the largest negative number in the group as a positive
Ignore the first number if negative and return sum of rest as positive
Return the numbers multiplied by each other
Now consider this small change in the unit test:
<TestMethod()> _
Public Sub Parse_NegativeFirstNum_ReturnsSumOfTheRestAsPositive()
    Dim c As New Calc
    Dim parsedSumResult As Integer = c.Parse("-1, -1000")
    Const SUM_WITH_IGNORED_FIRST_NUM As Integer = 1000
    Assert.AreEqual(SUM_WITH_IGNORED_FIRST_NUM, parsedSumResult)
End Sub

Isn't this much easier to understand? When the Assert message is gone, the best place to express intent is in the test name. If you use it wisely, you'll find you don't need to read the test code to understand what the code tests. In fact, you often won't need to write any comments at all because the code, as in this example, is self-documenting.

The name contains three parts: the name of the method under test (Parse), the state or rule under test (sending in a string with a first negative number), and the expected output or behavior (the sum of the rest of the numbers are returned as a positive). Notice that I removed the words Test and Calc from the name. I already know this is a test by the attribute so there's no need to repeat this info. I also know this is a test on the Calc class because test classes are usually written for one specific class (this class would probably have been called CalcTests).

The name is long, but who cares? It reads much like a sentence in standard English and makes it easy for a newcomer to understand the test. Better yet, when this test fails, I'll know what the problem is, perhaps without even debugging the code.

Notice that I've gone ahead and separated the actual act of parsing from the act of asserting on the result by creating a result variable on a different line. There are at least two reasons for this. First, you can assign a readable name to a variable that contains the result, which makes your Assert line very understandable and easy to read. Second, the invocation against the object under test may be very long and might make your Assert line stretch all the way beyond the edge of the screen, forcing the test reader to scroll to the right. Personally, that's one of the things I find most annoying.

I use a lot of constants in my tests to make sure my Asserts read like a book. In the previous example, you could read the Assert to say "make sure that the parsed sum is equal to the sum with the first number ignored." Good naming for your variables can sometimes make up for a badly named test.

Of course, sometimes an Assert message is the best way to convey intent in a unit test. A good Assert message should always explain either what should have happened or what happened and why it's wrong. For example, "Parse should have ignored the first number if it is a negative," "Parse did not ignore the first negative number," and "X called object Y even though flag was false" are all useful Assert messages that clearly describe the resulting situation.
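
Applied to the earlier Parse test, such a message might read:

Assert.AreEqual(SUM_WITH_IGNORED_FIRST_NUM, parsedSumResult, _
    "Parse should have ignored the first number because it is negative")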


Avoid Partially Relevant Code in Your Setup Method

A <TestInitialize()> method is a great place to instantiate member variables that will be used by your tests. All your tests. Avoid variables that are only used by some of the tests. Those can be local variables within the test itself. If you create partially relevant instances as class members simply to avoid duplication of creation in the tests, you should use factory methods as explained earlier in this article. Using partially relevant variables makes both your code and the setup method less readable. Only variables to be used in each and every test should be member variables and used in the <TestInitialize()> method.

Figure 5 shows a class that has two member variables for testing, but one of them (cxNum) is only partially used. Figure 6 shows how you might replace the code in the tests to make it more readable.
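
Those figures aren't reproduced in this repost either, but the change presumably looks something like this sketch (member and method names are assumed):

' Before (Figure 5, roughly): cxNum is a member, yet only some tests use it.
Private m_calc As Calc
Private cxNum As Integer

' After (Figure 6, roughly): only the universally used member remains, and
' cxNum becomes a local variable in the tests that actually need it.
' (m_calc is assumed to be created in <TestInitialize()>, as shown earlier.)
<TestMethod()> _
Public Sub Add_NumberToItself_ReturnsDouble()
    Dim cxNum As Integer = 5 ' relevant only to this test, so it lives here
    Assert.AreEqual(10, m_calc.Add(cxNum, cxNum))
End Sub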


Parting Words

As you can see, writing unit tests is not a trivial task. Approached correctly, unit tests can yield amazing results for developer productivity and quality of code. They can help you create applications with far fewer errors, while also giving other developers insight into your code. But it takes an upfront commitment to following some simple rules. Approached poorly, unit tests can achieve the opposite result, stealing valuable time and complicating the testing process.

snail2011 posted on 2006-1-4 11:15:55

Unit testing belongs to the developers in our company, so I'm poor at this.