We’re hiring

We’re looking for a good programmer to join our agile team in Omaha, work with great people in our awesome bullpen, and play with our acres of magnetic whiteboard.

(This job is already posted on the “Jobs” sidebar, but since I have a vested interest in this one — i.e., I’d be working with you — I figured I’d call it out a little more.)

Delphi and/or C# experience would help, but we’re really just interested in someone who’s a good programmer. If you can write solid, maintainable OO code, we’d like to hear from you. If you, furthermore, live and breathe refactoring, grok single-responsibility principle in fullness, have a keen nose for code smells (and test smells), and can’t imagine writing code without automated tests, we’d absolutely love to hear from you.

A few highlights:

  • You’d be working in our open bullpen (those pictures are a bit dated — we have flat-panel LCDs now). Just about the whole team works in the bullpen, so if you have a question — whether for another programmer, QA, or one of the customers — you can usually just lean back and yell.
  • Dual monitors on every development workstation.
  • Fast PCs (4 GB battery-backed RAMdisks, anyone?).
  • Subversion for revision control. Atomic commits kick ass.
  • Management has a clue (programmers help make the release plan, rather than being held to an unrealistic plan made by a non-programmer).
  • Development is done via pair programming (two programmers at one keyboard), so there’s always someone to brainstorm with, knowledge spreads quickly, and code reviews aren’t after-the-fact.
  • Between QA and our four automated-build machines running over 10,000 tests every time code is committed, feedback is usually pretty quick — it’s rare for a bug to go unnoticed for more than a few days or, more typically, a few minutes, so you don’t have to re-familiarize yourself with the code.

The job posting goes into a fair bit of detail about the position, and about our team. If you have further questions about the job, feel free to post ’em here, and I’ll answer what I can. If you want to apply, see the links at the bottom of the job post.

If it’s important enough to comment, it’s important enough to test

Several people disagreed when I said you should try to write code that doesn’t need comments. I’ll be addressing some of their thoughts — sometimes by saying they were right, but sometimes with constructive suggestions.

I’ll start with a couple of people who suggested that comments should try to explain code that was put in to fix a bug, or to explain the “tricky code” that you sometimes have to write.

Those are both scenarios where a comment isn’t enough. You need an automated test.

Tricky code

The best way to deal with tricky code is by not writing it. But I’ll assume you’re dealing with a case where some manner of tricky code is indeed the best way to get the job done.

Suppose you’re writing a Delphi constructor, and for some reason you need to put some code before the call to inherited Create. (That certainly qualifies as tricky.) Assuming there’s a reason why that’s necessary (and not just sloppiness), you should have an automated test that proves the resulting behavior is correct… something that will pass with the tricky code, and will fail without it. A test that would break if someone else (or you, for that matter) later misunderstood the importance of that ordering, and “fixed” it by moving the code after inherited Create. A test that would pass if someone found a less tricky way to accomplish the same thing.
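The same shape is easy to show in Ruby, where super can likewise be called partway through a constructor. This is an invented example, not the Delphi case itself, but the test plays the same role:

```ruby
# Parent's constructor fires a hook that the child's override relies on.
class Parent
  def initialize
    on_created
  end

  def on_created; end
end

class Child < Parent
  attr_reader :events

  def initialize
    @events = []  # tricky: must run BEFORE super, because
    super         # Parent#initialize calls on_created
  end

  def on_created
    @events << :created
  end
end

# The pinning test: it passes with the tricky ordering, and fails
# (NoMethodError on nil) if someone "fixes" it by moving the
# assignment after super.
raise "ordering broken" unless Child.new.events == [:created]
```

The assertion is the documentation: anyone who reorders those two lines finds out immediately why they were in that order.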

Bug fixes

If you’re fixing a bug, you should definitely be writing an automated test, if at all possible. That way, if someone later makes a change that breaks the same thing again, you’ll know. As Brian Marick said (and we had posted on our wall for a while), “No bug should be hard to find the second time.”
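A tiny Ruby illustration (the bug and all the names here are invented): the fix and its regression test land together, so the bug can’t quietly come back.

```ruby
# The fixed implementation; the (hypothetical) bug was a manual index
# loop that silently dropped the final partial page.
def paginate(items, per_page)
  items.each_slice(per_page).to_a
end

# The regression test pins the once-broken case: an odd item count
# must still yield the trailing short page.
raise "last page dropped again" unless
  paginate([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
```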

Then look again

Once you’ve got your automated test, take another look. Is the comment still necessary?

Depending on your audience, maybe it is. As someone else pointed out, if you’re a library vendor, then comments have a lot of value.

But I often find that, with the test in place, the comment can just go away.

Installing Subversion 1.4 as a Windows service

I just upgraded the command-line Subversion on my home laptop. The way you install Subversion as a service has changed since I wrote the Mere-Moments Guide to installing a Subversion server on Windows, so it’s time for me to blog the details (so I can find them if I need them again).

If you had an older SVN service

First, if you used to have Subversion pre-1.4 running as a service (either via the SVN 1-Click Setup installer, or by following the steps in the Mere-Moments Guide), you’ll need to uninstall it:

  • Open a command prompt.
  • Change into your Subversion bin directory (probably C:\Program Files\Subversion\bin).
  • Type svnservice -remove
  • It should report that it’s stopping (may take a while) and uninstalling the service.
  • Delete svnservice.exe.
  • Uninstall your old version of Subversion (may not be necessary, but I did).

Installing the new Subversion service

Starting with Subversion 1.4, there’s built-in support for running as a Windows service — but not for actually creating a Windows service.

I recommend downloading the Subversion installer directly from the Subversion downloads page on tigris.

Then, go read Configuring svnserve to Run as a Windows Service.
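That document boils down to a single sc command. For reference, it looks roughly like this (the binpath and repository root are examples; adjust them to your install):

```
sc create svnserve binpath= "\"C:\Program Files\Subversion\bin\svnserve.exe\" --service -r C:\repositories" displayname= "Subversion Server" depend= Tcpip start= auto
```

Two gotchas: sc requires the space after each option= name, and the escaped inner quotes matter because the path to svnserve.exe contains spaces.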

If you have Windows 2000

The above instructions should work on Windows XP and later. But if you have Windows 2000, you need to download sc.exe from Microsoft’s Resource Kit.

I did the research so you don’t have to. Here’s Microsoft’s download for sc.exe for Windows 2000.

Avoiding sentinel values

A “sentinel” is any value that means something special. For example, Math.SameValue takes an epsilon parameter, but zero means “pick an epsilon for me”. So a value of zero doesn’t actually mean zero.

Sentinel values, I have come to realize, are a code smell. They’re saying, “here’s a parameter that has more than one responsibility.”

Last night, I was working on epsilons in DUnitAssertions. TValue, my universal value type, can compare two numbers with an epsilon, and it follows the SameValue convention of “zero means ‘pick for me’”. But I wanted to add another option:

Specify.That(Foo, Should.Equal(45.0).Exactly); // no epsilon

Really, this should pass zero as its epsilon, but that’s already spoken for. So I picked another sentinel (NaN) to mean “exact comparison”, and started writing the tests and making them pass.

It actually took a few minutes before I realized how ridiculous this was. I mean, the body of the method had three completely different code paths, depending on the sentinel values and nothing else. Hello? Polymorphism!

So I’m replacing the epsilon with a comparer. I’ve made an IValueComparer interface with a CompareExtendeds method, and it’s going to have several different implementations. One is the default, which picks an epsilon for you (and never takes an epsilon parameter at all). One is the epsilon-based comparer, which takes an epsilon in its constructor. I probably don’t even need another class for the “exact” comparer, since an epsilon of (really and truly) zero will serve quite nicely. And the sentinel logic will all go away.
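Sketched in Ruby rather than the original Delphi, and with names that are mine rather than DUnitAssertions’, the refactoring looks something like this:

```ruby
# Each comparer owns exactly one policy; no parameter doubles as a flag.
class DefaultComparer
  def same?(a, b)
    epsilon = [a.abs, b.abs, 1.0].max * 1e-12  # pick an epsilon for the caller
    (a - b).abs <= epsilon
  end
end

class EpsilonComparer
  def initialize(epsilon)
    @epsilon = epsilon
  end

  def same?(a, b)
    (a - b).abs <= @epsilon
  end
end

# "Exact" needs no third class: an epsilon of literally zero already
# means an exact comparison.
EXACT = EpsilonComparer.new(0)

EpsilonComparer.new(0.01).same?(1.0, 1.005)  # => true
EXACT.same?(1.0, 1.005)                      # => false
```

The caller picks a comparer once; the comparison code never has to inspect a magic value to decide which of three code paths to take.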

I’ll even put another method on IValueComparer for comparing strings, and have a class that does case-insensitive comparisons. I’d been wondering how I was going to plumb the case-sensitivity option into the comparison, and now I know. (Since no single call to Specify.That will compare both a string and a float, this doesn’t pose any duplicate-inheritance-hierarchy problems.) And this will address that nagging doubt I’ve felt all along about passing a numeric epsilon even when I’m comparing strings. That’ll go away entirely; I’ll just be passing a comparer.

Now that I think about it, this is exactly the Strategy pattern. I’m refactoring to patterns and not even realizing it. Cool!

One thing that is nice about sentinel values — as opposed to, say, making several well-named methods — is that sentinel values are easy to plumb through several layers of code. But a strategy object has the same benefit, and it’s more expressive.

So when you see a sentinel value, ask yourself whether a strategy would be better. You might be really pleased with the result.

Footnote: Almost five years ago, I wrote a huge YAGNI at work. It was a function that divided one number by another. That’s it, really. But it took three, count ’em, three optional parameters (that’s five total parameters to divide two numbers), and those three optional parameters were overloaded to bursting with sentinel values. I pulled out all the stops: not just positive and negative infinity, but NaN as well, all had special meanings. Meanings that you could kind of puzzle out, sure, but they were sentinels nonetheless. But it was worth it (so I thought at the time) because this thing was the ultimate in flexibility. It could clamp its results to a range, it could throw exceptions or not, it could return special values when you divided by zero, it even made Julienne fries.

How many lines of code does it take to divide two numbers? Forty-nine. All but nine of those lines were just there to check for sentinels; the other nine set the function’s return value.

I looked today. Total number of places that actually called this function? One. Another utility function in the same unit, which I had made less readable when I introduced my YAGNI to it five years ago.

So I did some spring cleaning. And that unit is shorter now.

DUnitAssertions goes behavioral

The more I look at dSpec, the more I like its syntax. Should.Equal and Should.Be.AtLeast are much better than my Tis.EqualTo and Tis.GreaterThanOrEqualTo.

And I like the behavior-driven bent: encouraging the coder to think in terms of specifications (“specify that it should do this”) rather than tests (“test to see if it does this”). I was actually leaning a bit in that direction already — that’s why I picked Expect instead of Check or Assert — but I like Specify better yet.

So I’m stealing it.

None of these are implemented yet (I still need to rip out the operator overloads and do a bit more refactoring before I can move on to this), but here’s how I see the new syntax for DUnitAssertions*:

// All specifications support messages:
Specify.That(Foo, Should.Equal(45), 'Frequency');

// All specifications support Should.Not:
Specify.That(Foo, Should.Not.Equal(45));

// Floating-point comparisons support "ToWithin":
Specify.That(Foo, Should.Equal(45.0)); // default epsilon
Specify.That(Foo, Should.Equal(45.0).ToWithin(0.00001));
Specify.That(Foo, Should.Equal(45.0).Exactly); // no epsilon

// Equality and inequality:
Specify.That(Foo, Should.Equal(45));
Specify.That(Foo, Should.Be.AtLeast(45));
Specify.That(Foo, Should.Be.AtMost(45));
Specify.That(Foo, Should.Be.From(45).To(48).Inclusive);
Specify.That(Foo, Should.Be.From(45).To(48).Exclusive);
Specify.That(Foo, Should.Be.GreaterThan(45));
Specify.That(Foo, Should.Be.LessThan(45));

// Type:
Specify.That(Foo, Should.Be.OfType(TComponent));
Specify.That(Foo, Should.DescendFrom(TComponent));
Specify.That(Foo, Should.Implement(IBar));

// String:
Specify.That(Foo, Should.Contain('Bar'));
Specify.That(Foo, Should.StartWith('Bar'));
Specify.That(Foo, Should.EndWith('Bar'));

// All string comparisons support IgnoringCase:
Specify.That(Foo, Should.Equal('Bar').IgnoringCase);
Specify.That(Foo, Should.Contain('Bar').IgnoringCase);
Specify.That(Foo, Should.StartWith('Bar').IgnoringCase);
Specify.That(Foo, Should.EndWith('Bar').IgnoringCase);

// Numeric:
Specify.That(Foo, Should.Be.Composite);
Specify.That(Foo, Should.Be.Even);
Specify.That(Foo, Should.Be.Negative);
Specify.That(Foo, Should.Be.Odd);
Specify.That(Foo, Should.Be.Positive);
Specify.That(Foo, Should.Be.Prime);

// Specific values:
Specify.That(Foo, Should.Be.False);
Specify.That(Foo, Should.Be.Nil);
Specify.That(Foo, Should.Be.True);
Specify.That(Foo, Should.Be.Zero);

* If it’s going to be for behavior-driven development instead of test-driven development, then it really needs a new name now…

NUnitLite: Embedded, readable tests

I’m about to write some unit tests for my Lua-based game engine, and decided it was time to take a look at NUnitLite. Mainly because I want to play with its self-descriptive and, above all, forwards assertions:

Assert.That(actual, Is.EqualTo(expected));

// Output:
1) TestFoo (Tyler.Tests.TestTest.TestFoo)
Expected: 5
But was:  4

(I’ve ranted about xUnit’s backwards assertions before, so I won’t bore you with that again. Suffice to say, this is much, much cooler than NUnit.)

So I downloaded it and tried it out, and found that there are some gotchas. Nothing show-stopping yet, but some things that it would’ve been really nice to know about before I started. So I’ll share.

Getting NUnitLite

As of today, NUnitLite has not shipped any releases. So you’ll have to download the latest code from the NUnitLite source repository. You can download any changeset, but you probably want the latest (the topmost row in the grid — changeset 11989 when I downloaded). Click the disk icon in the left column to download a ZIP.

There’s a solution file in NUnitLite\src, but don’t bother opening it. It contains tests for NUnitLite itself, and those tests are NUnit tests — so they won’t even compile unless NUnit is wherever they expect it to be.

What you want is the NUnitLite\src\NUnitLite directory and project. Copy that into the solution directory where you want to start using NUnitLite, and add it to your solution. (NUnitLite is meant to be embedded, so you compile it in your solution just like all your other projects.)

Making NUnitLite return an exit code

Out of the box, NUnitLite doesn’t return a nonzero exit code when tests fail. This makes it completely worthless for any kind of automation. What good are unit tests if they don’t fail the automated build?

Fortunately, this is pretty easy to get around. You can do either of two things: modify the NUnitLite source code, or run the test-runner object yourself. I chose the latter, for now.

See below for some code you can put in your program’s Main() method to run NUnitLite properly, including return of a nonzero exit code when tests fail.

Quick overview of NUnitLite

As with most test frameworks, you define “test fixture” classes and put test methods on them. Setup and teardown are supported.

You can choose to make your test-fixture classes descend from the NUnitLite.Framework.TestCase class. This doesn’t seem to gain you much, though, and TestCase doesn’t have a default constructor, so you’d have to add an explicit constructor to your test fixture. Personally, I don’t bother descending from TestCase.

Test fixtures: Your test fixture must have the [TestFixture] attribute, whether or not it descends from TestCase. There’s no restriction on the class’s visibility, although I always make them public out of habit.

SetUp and TearDown: If you’re descending from TestCase, you can just override the SetUp and TearDown methods. Otherwise, make a method and tag it with the [SetUp] or [TearDown] attribute.

Test methods: Like DUnit, NUnitLite recognizes tests by name. Any public method whose name starts with Test (case-insensitive) is considered to be a test method. No more cluttering your code with all those stupid [Test] attributes! Yay! (Pity they didn’t do the same thing for SetUp and TearDown, but, oh well.)

Assertions: In addition to the uber-cool Assert.That() assertions, NUnitLite also supports the old NUnit-style Assert.AreEqual() assertions.

Setting up an NUnitLite test project

Create a new console application. Replace its Main() method with either of the following blocks of code.

Make sure that you’ve added the NUnitLite project to your solution. Then in your new console application, add a reference to NUnitLite (using the Projects tab of the Add Reference dialog). Also add a reference to the project containing the code you want to test.

Then you need to write the Main() method that starts up your app and runs the tests. I’ll present two different code blocks that both accomplish this; choose the one that suits your needs.

Non-automatable. If you will always rely on visual inspection to see if the tests passed or failed (i.e., you’re always running the tests manually, always looking at the output, and never automating anything), you can use this simple Main() routine:

using NUnitLite.Runner;

namespace <your namespace here>
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // Minimal runner call: run all the tests in this assembly
            // and ignore the result.
            new ConsoleUI(new CommandLineOptions(), System.Console.Out)
                .Run(System.Reflection.Assembly.GetExecutingAssembly());
        }
    }
}

Automatable. If you actually care about making your tests automatable, you should use this Main() routine instead. It will return an exit code of 1 if any of the tests fail (or if any other type of exception occurs), so you can easily plug it into any style of automated or continuous build, and it will fail the build when the tests fail. (See below for a way to make it fail the compile right in Visual Studio.)

using System;
using System.Reflection;
using NUnitLite.Framework;
using NUnitLite.Runner;

namespace <your namespace here>
{
    public class Program
    {
        public static int Main()
        {
            try
            {
                CommandLineOptions options = new CommandLineOptions();
                ConsoleUI runner = new ConsoleUI(options, Console.Out);
                TestResult result = runner.Run(Assembly.GetExecutingAssembly());
                if (result.IsFailure || result.IsError)
                    return 1;
                return 0;
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine(ex);
                return 1;
            }
        }
    }
}

A sample NUnitLite test fixture

using NUnitLite.Framework;

namespace Tyler.Tests
{
    [TestFixture]
    public class AdderTests
    {
        private Adder _adder;

        [SetUp]
        public void SetUp()
        {
            _adder = new Adder();
        }

        // No [Test] attribute needed: NUnitLite finds these by name.
        public void TestZeroes()
        {
            Assert.That(_adder.Add(0, 0), Is.EqualTo(0));
        }

        public void TestTwoPlusTwo()
        {
            Assert.That(_adder.Add(2, 2), Is.EqualTo(4));
        }
    }
}

Failing the compile when the tests fail

If you want, you can set up your tests to run automatically every time you compile in Visual Studio, and to fail the compile if tests fail. Here’s how:

  • Make sure you used the “Automatable” Main() method (see above).

  • In Solution Explorer, right-click on your project, and go to Properties.

  • Go to the “Build Events” tab. In the “Post-build event command line” box, type the name of your test EXE. In my case, my project is called Tyler.Tests.csproj, so I just type Tyler.Tests for my post-build event.

Save everything and build. Now, every time you build your test project, your test project will automatically run, and if it returns a nonzero exit code, Visual Studio will show “The command ‘Tyler.Tests’ exited with code 1” in the Error List window. Then you can press Ctrl+Alt+O to view the Output window, where you can see the test output, and see which tests failed and with what messages.

Amusing side note: When I first tried hooking this in as a post-build event, I actually put it in as a pre-build event by mistake. And then I just couldn’t figure out why the test kept failing even after I fixed the code…

Writing a video game

I’m spending my spare time writing a video game. Not for the first time, although more successfully this time than in the past, because I’m writing features instead of frameworks.

The features I’ve worked on so far are all features that I can see at work. Things like:

  • Make a little guy in the middle of the screen.
  • Make a room for him to stand in.
  • Make him walk around, a square at a time (no smooth animation).
  • Add more rooms, and portals for moving between them.
  • Make the hero move smoothly from one square to the next.
  • Make him move his feet as he walks.
  • Add a wandering NPC to one of the rooms.

But some of the features I’ve written, while visible, don’t contribute to gameplay. I have an NPC wandering around one of the rooms, but I don’t know why he’s there. He doesn’t contribute anything to the game yet. So I’ve shifted my focus. My current list of features (several pages long) is basically divided into two categories: plot-driven and geek-driven.

The plot-driven features are those that are doable in a reasonable amount of time, and that add visible value to the project (end-users could see the program doing something it didn’t used to do). In other words, user stories:

  • Make a throne room with a king.
  • Make a world map with a castle and a cave.
  • Make a cave with a bad guy and a princess.
  • Have the king ask you to rescue his daughter.
  • Show a fight screen when you reach the bad guy.

The geek-driven features are the shiny toys. They’re the intriguing challenges for me as a coder. But I have no idea when, or whether, some of them would actually become visible in the app, which means they’re sucky user stories:

  • Support bridges you can either walk across, or walk under (i.e., different walkable “levels” within the map).
  • Make rooms that you can’t see into until you walk into them (as seen in Final Fantasy I).
  • Load and save maps to/from XML.
  • Make a map viewer/editor app.
  • Implement coroutines for the hero, NPCs, and cutscenes.

What I haven’t figured out is how to balance my time between the plot-driven and the geek-driven features. The plot-driven features are essential; if I don’t keep making visible progress, I’m going to lose interest. But the geek-driven features are… well, shiny. If I don’t spend at least some time working on them, I’m going to lose interest!

My tentative plan is to spend about half my time on plot, and half my time on geek. We’ll see how it goes.

On using Rake.run_tests

Okay, an update to my previous Rake post. Rake does have some nice facilities for running tests, but the code I posted isn’t quite the right way to do it.

When you use rake/runtest, you should put the require inside your test task (inside each test task, if you have more than one). Like so:

# RIGHT: require inside the task
task :test do
  require 'rake/runtest'
  Rake.run_tests "**/*tests.rb"
end

# WRONG: require at the top level
require 'rake/runtest'
task :test do
  Rake.run_tests "**/*tests.rb"
end

The reason: the WRONG version will always (indirectly) require ‘test/unit’, which will always run your tests, even if you go through a code path that doesn’t register any.

Here’s what it looks like if you have the WRONG version and run rake -T (list all the interesting targets to the console, but don’t run any of them). Note that it’s running the tests, even though there aren’t any to run.

C:\svn\dgrok-r>rake -T
(in C:/svn/dgrok-r)
rake all  # Runs everything
Loaded suite c:/Program Files/ruby/bin/rake

Finished in 0.0 seconds.

0 tests, 0 assertions, 0 failures, 0 errors


Here’s what it looks like if you use the RIGHT code above. It still runs the tests when you execute the “test” target, but when you’re not doing anything that involves test running, it doesn’t give you all that nonsense about how it ran 0 tests in 0.0 seconds:

C:\svn\dgrok-r>rake -T
(in C:/svn/dgrok-r)
rake all  # Runs everything


The coolness that is Rake

Rake is cool enough as a build tool, but wait until you check out its library.

First off, if you use make or ant, you owe it to yourself to look into Rake. The syntax is so much more readable than either, and you have a full-fledged programming language right there in your build script. The docs have a concise introduction to Rakefiles; Martin Fowler has a longer article called “Using the Rake Build Language”.

But wait! There’s more!

So Rake’s documentation is pretty sketchy. Mention is made of some cool libraries that ship with it, but no details. So I decided to RTFS, and I found much of coolness.

Coolness #1: All the coolness comes right out of the box. You don’t even need to require anything from your rakefile (unless you’re doing really fancy stuff). Just write a rakefile containing some tasks, run rake, and you’re off and running.

Coolness #2: Rake adds an ext() method to the String class. It treats the string as a filename and returns a new filename with the extension changed to whatever you specify: the equivalent of Delphi’s ChangeFileExt() or .NET’s Path.ChangeExtension, and something that’s sorely lacking from Ruby out of the box.
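A quick taste (requiring rake is what patches String):

```ruby
require 'rake'  # adds String#ext, among other things

'lib/parser.y'.ext('.rb')  # => "lib/parser.rb"
'README'.ext('txt')        # no extension? it appends one => "README.txt"
```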

Coolness #3: A built-in command to shell out to Ruby. They have a method to shell out, and another one specifically to shell out to another Ruby process.

When you want to shell out (e.g. to run your compiler), you should use the sh() and ruby() methods, which will automatically fail the task if the process returns a nonzero exit code. (By default, the value returned by a task block is ignored; if you want to fail the task, you need to do it explicitly with fail() or raise or some such, or call something like sh() that does it for you.)

rule '.rb' => ['.y'] do |t|
  sh "racc", t.source, "-o", t.name
end

task :test do
  ruby "test_all.rb"
end

Coolness #4: Built-in support for running unit tests. I already wrote a test_all.rb, but if I’d looked at Rake, I wouldn’t have had to. This works just as well:

[Note: It turns out that this sample code isn’t the right way to do it; see this post for corrected code. Still the same number of lines, just better-behaved.]

require 'rake/runtest'
task :test do
  Rake.run_tests "**/*tests.rb"
end

If you follow their convention of having your tests in test/test*.rb, you don’t need to pass an argument to Rake.run_tests at all. I like my tests to be next to the files they’re testing, so the above example is what I prefer.

There’s also a TestTask if you want fancier options, like modifying your $LOAD_PATH before running the test. (TestTask actually launches a new Ruby process via the ruby() method; Rake.run_tests runs them in-process.)
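A minimal TestTask using the same glob convention as above (a sketch; see Rake’s docs for the full option list):

```ruby
require 'rake/testtask'

# Defines a "test" task that launches the matching test files in a
# fresh Ruby process.
Rake::TestTask.new(:test) do |t|
  t.pattern = "**/*tests.rb"  # tests next to the code, as above
  t.verbose = true
end
```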

Coolness #5: FileList.

FileList is a list of files, much like an array. But instead of giving it individual filenames, you can give it patterns, like ‘lib/**/*’, and it comes back as an array of matching filenames. (It can take both include and exclude patterns.) It lazy-initializes its contents, so if you create a FileList but then don’t run the task that uses it, it’s basically free. By default, it ignores CVS and .svn directories.

But coolest of all, FileList has shorthand methods: sub(), gsub(), ext(), egrep(). Want to return a new copy of a FileList with all the extensions changed? No sweat. Want to grep through every file in a FileList, and do something with each file whose contents match a given regex? egrep() to the rescue. I may never again have to write six whole lines of code to recursively grep through a directory tree of source files.
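A small taste, using explicit names instead of glob patterns so it works even without files on disk (FileList keeps non-pattern entries as-is):

```ruby
require 'rake'

sources = FileList['src/main.c', 'src/util.c']
objects = sources.ext('.o')  # a new FileList with extensions changed
objects.to_a                 # => ["src/main.o", "src/util.o"]
```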

Assertions the right way around (in Ruby, anyway)

I’ve always hated the common xUnit convention of writing things like AssertEquals(expectedValue, actualValue). Who came up with the idea of doing things backwards?

Your tests have to say things like “assert that 5 is what you get when you add 2 and 3”. It’s stilted and awkward (unless, perhaps, you’re trying to write test poetry). And it puts the effects before the causes. “I should get this. Oh, by the way, here’s what I’m doing.” Answer first, then question. I mean, what is this, Jeopardy?

This isn’t how things are done in real life. If I put the pizza into the oven for fifteen minutes, take it out, and then preheat the oven, my dinner is not going to taste very good (unless I have something else for dinner). And if I tried to take that pizza home before I paid for it, Wal-Mart would be well within its rights to take a dim view of the situation, even if I did eventually come back and pay for it.

Why not just do it the other way around? “Assert that 2 plus 3 equals 5.” Cause first, then effect: “I should do this calculation, and here’s the result I should get.” Isn’t that nicer?

But most xUnit frameworks put the effect first, so that’s what people are used to, for good or ill. I want to write my own custom assertions that put the cause before the effect, but that would just confuse people (myself among them) who have been putting it the other way around for years.

Well… I was just tinkering in Ruby, and I found what I think is a nice way to flip the parameters, and leave a nice readable clue that that’s how they’re supposed to be:

assert_lexer "*" => [:times, "*"]

Ruby’s => syntax lets me show how things are flowing: the “*” should turn into the array [:times, “*”]. Input on the left, expected output on the right, and visibly so. Neat, huh?

For anyone not familiar with =>, it lets you lump several name => value pairs together at the end of a parameter list. They all get scooped up into a single Hash, which is passed as the method’s final parameter. So the above is equivalent to (but much prettier than):

hash = Hash.new
hash["*"] = [:times, "*"]
assert_lexer(hash)
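Since the post doesn’t show assert_lexer’s body, here’s a hedged sketch of the same pattern, with an invented stand-in transform in place of the real lexer:

```ruby
# A flipped, hash-driven assertion: input on the left of =>,
# expected output on the right.
def assert_transforms(pairs)
  pairs.each do |input, expected|
    actual = input.upcase  # stand-in for the real lexer
    unless actual == expected
      raise "#{input.inspect} gave #{actual.inspect}, expected #{expected.inspect}"
    end
  end
end

# Reads cause-first: "abc" should become "ABC".
assert_transforms "abc" => "ABC", "x" => "X"
```

Each key/value pair reads left to right, cause before effect, and you can stack several cases into one call.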