TechEd 2008 notes: How not to write a unit test

Roy Osherove

All the things no one ever told you about unit testing.

Will have two parts: presentation-like (what really works), and interactive (questions, prioritized).

Early questions:

  • Data access
  • Legacy code (what not to do)
  • Duplication between unit tests and functional tests
  • Testing non-.NET code, e.g. ASP.NET
  • Testing other languages, e.g. F#, IronRuby
  • Unit tests and refactoring
  • Testing UI
  • How do you mock the world?
  • How important are tools? Mocking tools, refactoring, etc. Can you write unit tests with just what VS provides?
  • Did you bring your guitar? — No. Wanted as much time for information as possible.
  • Where did you get your T-shirt? — Was being given away at another conference.

A unit test is a test of a small functional piece of code

  • If a method returns a boolean, you probably want at least two tests
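A minimal sketch of that point (in Python rather than the talk's .NET; `is_positive` is a made-up example): a boolean has two observable outcomes, so a single test can never cover both.

```python
def is_positive(n):
    """Hypothetical production code: a method that returns a boolean."""
    return n > 0

# One test per outcome -- each test is a straight line with one assert.
def test_is_positive_returns_true_for_positive_number():
    assert is_positive(5) is True

def test_is_positive_returns_false_for_non_positive_number():
    assert is_positive(-3) is False
```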

Unit testing makes your developer lives easier

  • Easier to find bugs.
    • That’s the common line. But not necessarily — e.g. if your test has bugs, or if you’re testing the wrong things
    • If you can’t trust your tests to find bugs (and especially if you don’t know you can’t trust them), then the opposite may be true — you may be confident you don’t have bugs when you do.
    • If you don’t trust them, then you won’t run them, they’ll get stale, and your investment in writing them was wasted.
  • Easier to maintain.
    • But 1,000 tests = 1,000 tests to maintain
    • Change a constructor on a class with 50 tests — if you didn’t remove enough duplication in the tests, it will take longer than you think to maintain the tests
    • We will look at ways to make tests more maintainable
  • Easier to understand
    • Unit tests are (micro-level) use cases for a class. If they’re understandable and readable, you can use them as behavior documentation.
    • Most devs give really bad names to tests. That’s not on purpose.
    • Tests need to be understandable for this to be true.
  • Easier to develop
    • When even one of the above is not true, this one isn’t true.

Make tests trustworthy

  • Or people won’t run them
  • Or people will still debug for confidence

Test the right thing

  • Some people who are starting with test-driven development will write something like:
[Test]
public void Sum()
{
    int result = calculator.Sum(1, 2);
    Assert.AreEqual(4, result, "bad sum");
}
  • Maybe not the best way to start with a failing test
  • People don’t understand why you want to make the test fail
  • Test needs to test that something in the real world is true: should reflect the required reality
  • Good test fails when it should, passes when it should. Should pass without changing it later. The only way to make the test pass should be changing production code.
  • If you do TDD, do test reviews.
    • Test review won’t show you the fail-first. But you can ask, “So can you show me the test failing?”

Removing/Changing Tests

  • Don’t remove the test as soon as it starts passing
    • If it’s a requirement today, chances are it’ll still be a requirement tomorrow
    • Duplicate tests are OK to remove
    • Can refactor a test: better name, more maintainability
  • When can a test fail?
    • Production bug — right reason (don’t touch test)
    • Test bug (fix test, do something to production code to make the corrected test fail, watch it fail, fix production code and watch it pass)
      • Happens a lot with tests other people wrote (or with tests you don’t remember writing)
    • Semantics of using the class have changed (fix/refactor)
      • E.g., adding an Initialize method that you have to call before you use the class
      • Why did they make that change without refactoring the tests?
        • Make a shared method on the test class that instantiates and Initializes
    • Feature conflict
      • You wrote a new test that’s now passing, but the change made an old test fail
      • Go to the customer and say, “Which of these requirements do you want to keep?”
      • Remove whichever one is now obsolete

Assuring code coverage

  • Maybe unorthodox, but Roy doesn’t like to use code-coverage tools
    • 100% code coverage doesn’t mean anything. Finds the exceptions, but doesn’t prove the logic.
  • Better: change production code and see what happens
  • Make params into consts
  • Remove “if” checks, or make the condition constant (if (true)). Will a test fail? If not, you don’t have good coverage.
  • Do just enough of these kinds of tweaks to make sure the test is okay.
  • Test reviews are still valuable if you do pair programming, just maybe less often. Good to bring in someone else who didn’t write the code, with an objective eye.
  • Quick test review of yesterday’s code at end of stand-up meeting?
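A sketch of the "change production code and see what happens" idea (Python rather than .NET; `apply_discount` and its tests are hypothetical). The mutation to try is written as a comment; if no test fails after making it, the branch isn't genuinely covered.

```python
def apply_discount(price, is_member):
    """Hypothetical production code."""
    if is_member:           # coverage check: change this to `if True:` --
        return price * 0.9  # does any test fail?
    return price

def test_member_gets_ten_percent_discount():
    assert apply_discount(100.0, is_member=True) == 90.0

def test_non_member_pays_full_price():
    # This is the test that fails under the `if True:` mutation,
    # proving the condition is genuinely exercised by the suite.
    assert apply_discount(100.0, is_member=False) == 100.0
```

Undo each tweak once you've seen the expected failure; the point is a quick audit of the suite, not a permanent change.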

Avoid test logic

  • No ifs, switches or cases
    • Yes, there are always exceptions, but it should be very rare
    • Probably only in testing infrastructure
    • Most of the time, there are better ways to test it
    • Sometimes people write conditionals when they should be writing two tests
    • Don’t repeat the algorithm you’re testing in the test. That’s overspecifying the test.
  • Only create, configure, act and assert
  • No random numbers, no threads
  • Test logic == test bugs
  • Fail first also assures your test is correct

Make it easy to run

  • Integration vs. Unit tests
  • Configuration vs. ClickOnce
  • Laziness is key
  • Should be able to check out, run all the unit tests with one click, and have them pass.
  • Might need to do configuration for the integration tests, so separate them out.
  • Never check in with failing tests. If you do, you’re telling people it’s okay to have a failing test.
  • Don’t write a lot of tests to begin with, and have them all failing until you finish everything. If you do that, you can’t check in (see previous point). Write one test at a time, make it pass, check in, repeat.

Creating maintainable tests

  • Avoid testing private/protected members.
    • This makes your test less brittle. You’re more committed to public APIs than private APIs.
    • Testing only publics makes you think about the design and usability of a feature.
    • Publics are probably feature interactions, rather than helpers.
    • Testing privates is overspecification. You’re tying yourself to a specific implementation, so it’s brittle, and makes it hard to change the algorithm later.
    • Sometimes there’s no choice; be pragmatic.
  • Re-use test code (Create, Manipulate, Assert) — most powerful thing you can do to make tests more maintainable
  • Enforce test isolation
  • Avoid multiple asserts

Re-use test code

  • Most common types:
    • make_XX
      • MakeDefaultAnalyzer()
      • May have others: one already initialized, with specific parameters, etc.
    • init_XX
      • Once you’ve already created it, initialize it into a specific state
    • verify_XX
      • May invoke a method, then do an assert on the result. Pulling out common code.
  • Suggestion: by default, the word new should not appear in your test methods.
    • As soon as you have two or more tests that create the same object, you should refactor the new out into a make method.
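A sketch of the make_XX / init_XX pattern (Python instead of the talk's .NET; `Analyzer` is a made-up stand-in for whatever MakeDefaultAnalyzer would build):

```python
class Analyzer:
    """Hypothetical class under test."""
    def __init__(self, depth):
        self.depth = depth
        self.initialized = False

    def initialize(self):
        self.initialized = True

# make_XX: the only place the constructor is called. If the constructor
# changes, only this method changes -- not every test that needs one.
def make_default_analyzer(depth=3):
    return Analyzer(depth)

# init_XX: builds on make_XX to produce an object in a specific state.
def make_initialized_analyzer():
    analyzer = make_default_analyzer()
    analyzer.initialize()
    return analyzer

def test_default_analyzer_starts_uninitialized():
    assert make_default_analyzer().initialized is False

def test_initialized_analyzer_is_ready():
    assert make_initialized_analyzer().initialized is True
```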

Suggestion: don’t call the method directly from Assert.AreEqual(...). Introduce a temp variable instead. (This relates back to the 3A test pattern.)
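A small before/after of that suggestion (Python; `parse` is hypothetical). The temp variable makes the three A's (Arrange, Act, Assert) visible as separate steps and keeps the assert line uncluttered:

```python
def parse(text):
    """Hypothetical code under test."""
    return int(text)

# Cluttered: act and assert fused into one line.
def test_parse_cluttered():
    assert parse("42") == 42

# 3A style: each step on its own line.
def test_parse_three_a():
    text = "42"                 # Arrange
    parse_result = parse(text)  # Act
    assert parse_result == 42   # Assert
```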

Aside: Test structure

  • One possibility: each project, e.g. Demo.Logan, gets a matching test project, Demo.Logan.Tests. Benefit: they sit next to each other in Solution Explorer.
  • Test files: one file per tested class?
    • That’s a good way to do it. Aim for a convention like MyClassTests so it’s easy to find.
    • If you have multiple test classes for one tested class, make multiple classes.
    • Consider nested classes: making a MyClassTests, and putting nested classes, one per feature. Make the nested classes be the TestFixtures.
      • Be careful of readability, though.
      • Roy said his preference would be to keep one test class per source file, rather than using nested classes to put them all in one file.
      • Decide for yourself whether you’d prefer one class per file, or all tests for one class in one place.

Enforce test isolation

  • No dependency between tests!
  • If you run into an unintended dependency between tests, prepare for a long day or two to track it down
  • Don’t run a test from another test!
  • You should be able to run one test alone…
  • …or all of your tests together…
  • …in any order.
  • Otherwise, leads to nasty “what was that?” bugs
  • Almost like finding a multithreading problem
  • The technical solution (once you find the problem) is easy. That’s why God created SetUp and TearDown. Roll back any state that your test changed.
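A sketch of that rollback pattern with Python's unittest (the .NET equivalent being [SetUp]/[TearDown]; the shared `settings` dict is a made-up example of mutable state):

```python
import unittest

# Hypothetical shared state that a careless test could pollute.
settings = {"mode": "production"}

class SettingsTests(unittest.TestCase):
    def setUp(self):
        # Snapshot the state this test is about to change.
        self._saved = dict(settings)

    def tearDown(self):
        # Roll back, so test order can never matter.
        settings.clear()
        settings.update(self._saved)

    def test_can_switch_to_debug_mode(self):
        settings["mode"] = "debug"
        self.assertEqual(settings["mode"], "debug")

    def test_starts_in_production_mode(self):
        # Passes whether it runs before or after the test above.
        self.assertEqual(settings["mode"], "production")
```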

Avoid multiple asserts

  • Like having multiple tests
  • But the first assert that fails kills the rest of the test. If one test fails, the other tests still run; asserts aren’t built that way.
  • Exception: testing one big logical thing. Might have three or four asserts on the same object. That’s possible, and doesn’t necessarily hurt.
  • Consider replacing multiple asserts with comparing two objects (and also overriding ToString so you can see how they’re different when the test fails).
    • My experience: this doesn’t work well when the objects get really big and complicated. Could work well for small objects, though.
  • Hard to name
  • You won’t get the big picture (just some symptoms)
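A sketch of the compare-two-objects alternative (Python; `Address` is a made-up small value object, with `__repr__` playing the role of the overridden ToString):

```python
class Address:
    """Hypothetical small value object."""
    def __init__(self, street, city):
        self.street = street
        self.city = city

    def __eq__(self, other):
        return (self.street, self.city) == (other.street, other.city)

    def __repr__(self):
        # Like overriding ToString in .NET: when the single assert
        # fails, the message shows exactly how the objects differ.
        return f"Address(street={self.street!r}, city={self.city!r})"

def make_address():
    return Address("10 Main St", "Springfield")

def test_built_address_matches_expected():
    expected = Address("10 Main St", "Springfield")
    actual = make_address()
    # One assert covering the whole object, instead of one per field.
    assert actual == expected, f"{actual} != {expected}"
```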

Don’t over-specify

  • Interaction testing is risky
  • Stubs should be the rule; mocks the exception.
  • Mocks can make you over-specify.
  • “Should I be able to use a stub instead of a mock?”
    • A stub is something you never assert against.
  • There’s only one time when you have to use mocks: when A calls a void method on B, and you have no way to later observe what B was asked to do. Then you have to mock B and verify that it was called with the right parameter.
    • If the other class does return a value, then you can test what your class did with that result. You’re testing your class, after all, not that other object — that’s why you’re faking it.
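A sketch of that stub/mock distinction using Python's unittest.mock (rather than a .NET mocking tool; `OrderProcessor`, `tax_service`, and `logger` are made-up names):

```python
from unittest.mock import Mock

class OrderProcessor:
    """Hypothetical class under test: it calls a void method on a
    logger, and gets a value back from a tax service."""
    def __init__(self, tax_service, logger):
        self.tax_service = tax_service
        self.logger = logger

    def total(self, amount):
        tax = self.tax_service.tax_for(amount)          # returns a value
        self.logger.log(f"total computed for {amount}")  # void call
        return amount + tax

# Stub: never asserted against. It feeds a canned value in, and the
# assert is on what *our* class did with that value.
def test_total_includes_tax():
    stub_tax = Mock()
    stub_tax.tax_for.return_value = 5.0
    processor = OrderProcessor(stub_tax, Mock())
    assert processor.total(100.0) == 105.0

# Mock: the void log() call leaves nothing to observe afterward, so
# verifying the call itself is the only option.
def test_total_logs_the_computation():
    mock_logger = Mock()
    stub_tax = Mock()
    stub_tax.tax_for.return_value = 0.0
    processor = OrderProcessor(stub_tax, mock_logger)
    processor.total(100.0)
    mock_logger.log.assert_called_once_with("total computed for 100.0")
```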


  • If you do another test that tests basically the same thing, but with different parameters, he suggests appending “2” to the end of the test name. But that’s assuming you already have a really good naming convention for the base test name! (Remember the serial killer.)
  • Bad: Assert.AreEqual(1003, calc.Parse("-1"));
  • Better:
int parseResult = calc.Parse(NEGATIVE_ILLEGAL_NUMBER);
Assert.AreEqual(NEGATIVE_PARSE_RETURN_CODE, parseResult);
  • If you can send any kind of number, and the specific value you pass doesn’t matter, either use a constant, or use the simplest input that could possibly work (e.g. 1).

Separate Assert from Action

  • Previous example
  • Assert call is less cluttered
