NUnit and Silverlight

Unit testing in Silverlight is a persnickety business. The NUnit.Framework binary is built for full .NET, so you can’t easily use it to test Silverlight assemblies. I tried a few different things, but kept running into walls.

Fortunately, smarter people have already figured it all out. Jamie Cansdale made a Silverlight NUnit project template that gets you started right. It’s intended for TestDriven.NET, but it works great with ReSharper’s test runner too. Just download and open his template, and it’ll add itself to Visual Studio. Then the next time you do New Project, there’s an extra “Silverlight NUnit Project” option available under the Visual C# > Silverlight project type. Very cool.

However, the nunit.framework assembly in Jamie’s template is from some unidentified, but old, version of NUnit. There’s no version info in the DLL, but it has to be 2.4.x or earlier, because its Is class (from the fluent assertions — Assert.That(2 + 2, Is.EqualTo(4));) is in a different namespace, and 2.5 moved Is into the main NUnit.Framework namespace.

Since I use the fluent assertions all the time, and since I just don’t want to use an old version, I went hunting again, and found Wes McClure’s NUnit.Framework 2.5.1 for Silverlight 3. It’s only a little old — right now the latest version is 2.5.2 — and his binaries are working out quite nicely so far.

So I use Jamie’s template to create a new project, which includes a lib directory with the old version of nunit.framework.dll; then I grab Wes’s nunit.framework.dll and drop it into the lib directory, replacing the older version. And I’m good to go.

Now, back to those fiddly trig calcs… (See, there was a reason I wanted to add a test project!)

Update, Oct 10 7:30am… Intellisense works great with Wes’s assembly. Building and running are a different story. Much inexplicable behavior from Visual Studio. Short version: I couldn’t get Wes’s assembly to work with the ReSharper test runner. But Jamie’s template is working fine so far.

Continuing the browser-testing journey: more automation and Vista thumbnails

I’ve been continuing my quest to easily run my unit tests in multiple browsers. Obviously, anything that takes more than 30 seconds is worth burning a couple of weeks trying to automate.

I originally thought I could do what I wanted with a Ruby script, using Watir (for IE), FireWatir (for Firefox), and ChromeWatir (for Google Chrome). I planned to run the script on demand, e.g. from my editor’s Tools menu, to (a) launch the browsers if they weren’t already running, (b) find an existing tab that was already open to the tests, or open a new tab if needed, and (c) (re)load the test page.

Worked great — for IE. (I posted sample code last time.) But FireWatir wasn’t able to re-use existing windows/tabs. And ChromeWatir is very pre-alpha and didn’t have anywhere near the feature set I needed.

Strike the *Watirs as a solution.

So let’s step back. What steps do I really want to automate?

  1. For each browser (Chrome, Firefox, and IE):
  2. If the browser is already running, Alt+Tab to it. Otherwise, launch it.
  3. If the tests are already open, Ctrl+Tab to the correct tab. If not, Ctrl+T to open a new tab, type in the URL, and hit Enter.
  4. Press Ctrl+R (or Ctrl+Shift+R in Firefox).
  5. Watch the screen while the tests run, to see whether they all pass.

Step 2 is automatable, and so is step 4. The others pose a bit more of a problem.

Then I stumbled across a solution for step 3. All three browsers, of course, support Ctrl+Tab and Ctrl+Shift+Tab to move between open tabs. It turns out they also all support Ctrl+<tab number> to jump to a tab by position. For example, Ctrl+1 to move to the first tab. Hey, I’m running these tests in a controlled environment — I can just say that I’ll keep the tests open in the first tab in each browser!

Okay, so that takes care of 3(a), of switching to the right existing tab. What about 3(b), opening a new tab? Ah, but why should I even need to do that? When I launch the browser, I can just pass the URL on the command line. Presto — my tests open in the first tab. As long as I’m smart enough not to close that tab, I’m set.
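The whole scheme so far boils down to “launch each browser with the test URL on the command line, then send it the right refresh keystroke.” A few lines of Python sketch the idea (this is a sketch, not the script I actually run; the browser paths and test URL are placeholders, and “^r” is just shorthand for Ctrl+R):

```python
import subprocess

# Placeholder paths and URL -- adjust for the machine in question.
TEST_URL = "http://localhost/tests/index.html"
BROWSERS = {
    "chrome": r"C:\Program Files\Google\Chrome\Application\chrome.exe",
    "firefox": r"C:\Program Files\Mozilla Firefox\firefox.exe",
    "ie32": r"C:\Program Files (x86)\Internet Explorer\iexplore.exe",
    "ie64": r"C:\Program Files\Internet Explorer\iexplore.exe",
}

def launch_command(exe_path, url):
    """Passing the URL on the command line opens the tests in tab 1,
    so there's no need to script Ctrl+T and type in the address."""
    return [exe_path, url]

def refresh_keys(browser):
    """Firefox wants Ctrl+Shift+R for a full reload; the others are
    happy with plain Ctrl+R."""
    return "^+r" if browser == "firefox" else "^r"

def launch_all():
    # Fire and forget; each browser comes up with the tests in its first tab.
    for name, exe in BROWSERS.items():
        subprocess.Popen(launch_command(exe, TEST_URL))

# launch_all() would kick all four off at once.
```

The refresh keystrokes still have to be delivered somehow, which is where the rest of this post picks up.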

This is looking better and better. But there’s still that pesky matter of waiting for the tests to run in each browser, before I move on to the next one.

Or is there?

See, Windows Vista has this feature where, when you hover the mouse over a taskbar button, it shows a little thumbnail of the window. And that thumbnail is live — as long as the window isn’t minimized, the thumbnail stays updated in real-time as the window repaints, even if the window is fully covered by another maximized window. And this feature has an API.

If you have a window handle, you can put a live thumbnail of that window anywhere in your application, and any size you like. You don’t even have to write timer code to keep it updated — it just happens. It’s maaaagic.

I wound up going one step better than window handles and FindWindow: I wrote code that takes the fully-qualified path to the browser EXE, and automatically finds the main window for the running process. That way, I can have one thumbnail for the 32-bit IE, and another for the 64-bit IE, even though they both use the same window class. It’s pretty slick.

I wrapped it all up into a WinForms control called WindowThumbnail. Here’s a screenshot of my app with four thumbnails: Google Chrome, Firefox, 32-bit IE, and 64-bit IE:

WindowThumbnails action shot

The code isn’t polished enough to release yet, but if you drop me a note in the contact form I can e-mail you the rough code as it stands.

I was trying to enhance this app to send the Ctrl+R keystrokes to the browsers — ideally without focusing their windows — but haven’t had much luck yet (it turns out some apps don’t expect to get keystrokes when they’re not the foreground window). But I’ve realized I can simplify further: there are already tools that launch apps, change focus, and send keys, so I don’t need to write C# code for that. I’ve heard good things about AutoHotkey — it looks insanely scriptable, and it makes it trivial to bind a script to a shortcut key on the keyboard.

I might well be able to get this to the point where I press one key, and my computer automatically launches the browsers, Alt+Tabs through them, sends the tab-switch and refresh keystrokes, then switches to my dashboard app where I can watch the tests scroll by on all the browser windows at once.

Man, this is what being a geek is all about.

DUnit tricks: Getting stack traces for test failures

DUnit has an option to use JCL to generate stack traces. The idea is that, whenever there’s an exception or a test failure, DUnit will show a stack trace right along with the failure message. The only problem is, it doesn’t work.

There’s no problem getting stack traces for unexpected exceptions: you get the address, unit name, method name, and even the line number of the line that threw the exception, together with the call stack that led to that code getting called. Immensely useful.

The problem is, you don’t get the same thing for test failures — even though Fail and CheckEquals, and even DUnitLite’s Specify.That, operate by throwing exceptions (they’ve got their own exception class, ETestFailure). You should be able to get a stack trace that shows the exact line of code that contained the failing assertion. In fact, we’re using older versions of DUnit and JCL at work, and we get stack traces just fine.

Unfortunately, stack traces for test failures are broken by default in the latest versions of DUnit and JCL. But there’s hope — notice that I said “by default”. Below, I’ll tell you how to fix that default.

Enabling DUnit stack tracing

First of all, here’s how to get DUnit to show stack traces in the first place.

You’ll need to download the source code for both DUnit and the Jedi Code Library. (Recent versions of Delphi ship with an older version of DUnit, but I only tested this with the latest version.)

Add the DUnit, JCL, JCL\Common, and JCL\Windows directories to your project’s search path.

Then make the following changes in Project > Options:

  1. On the Directories/Conditionals page, set “Conditional defines” to: USE_JEDI_JCL
  2. On the Linker page, set “Map file” to “Detailed”.

Now write a test that throws an unexpected exception, compile, and run. Here’s a contrived example, but it’ll give you an idea of what it looks like:

Screenshot of the DUnit GUI showing an exception stack trace

Enabling stack tracing for test failures

We don’t get unexpected exceptions in our tests very often. More often, it’s test failures. And when we have more than one assertion in the same test (not the ideal, but it happens a lot), sometimes it’s hard to know which assertion failed. Or rather, it’s always easy to know, if you have stack traces.

The problem is with JCL’s exclusion list. The latest version of JCL keeps a configurable list of exception types that it shouldn’t generate stack traces for. Seems like a reasonable feature. But the JCL and DUnit teams made three important design decisions, at various points in time:

  1. JCL’s exclusion list, by default, contains one class: EAbort.
  2. JCL ignores not just the classes in the exclusion list, but any of their descendant classes as well.
  3. DUnit’s ETestFailure descends from… yep, you guessed it, EAbort.

Put all three together, and stuff doesn’t work.
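The interplay between those three decisions is easier to see as a sketch. Here it is restated in Python, purely for illustration; the class names mirror the Delphi ones (and it’s presumably why the older JCL we use at work traces failures just fine: no exclusion list yet):

```python
class EAbort(Exception): pass
class ETestFailure(EAbort): pass   # decision 3: descends from EAbort

# Decision 1: the default exclusion list contains exactly one class.
ignored_exceptions = [EAbort]

def should_generate_trace(exc):
    # Decision 2: the listed classes AND all their descendants are skipped.
    return not any(isinstance(exc, cls) for cls in ignored_exceptions)

# Net result: a test failure produces no stack trace.
should_generate_trace(ETestFailure("CheckEquals failed"))   # False

# The workaround is simply to take EAbort back off the list:
ignored_exceptions.remove(EAbort)
should_generate_trace(ETestFailure("CheckEquals failed"))   # True
```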

But, all that said, it’s easy to work around. Just add the following code to your project file, before you call one of the RunRegisteredTests routines:

    JclDebug.RemoveIgnoredException(EAbort);

And bask in the goodness that is stack traces.

We’re hiring

We’re looking for a good programmer, to join our agile team in Omaha, work with great people in our awesome bullpen, and play with our acres of magnetic whiteboard.

(This job is already posted on the “Jobs” sidebar, but since I have a vested interest in this one — i.e., I’d be working with you — I figured I’d call it out a little more.)

Delphi and/or C# experience would help, but we’re really just interested in someone who’s a good programmer. If you can write solid, maintainable OO code, we’d like to hear from you. If you, furthermore, live and breathe refactoring, grok single-responsibility principle in fullness, have a keen nose for code smells (and test smells), and can’t imagine writing code without automated tests, we’d absolutely love to hear from you.

A few highlights:

  • You’d be working in our open bullpen (those pictures are a bit dated — we have flat-panel LCDs now). Just about the whole team works in the bullpen, so if you have a question — whether for another programmer, QA, or one of the customers — you can usually just lean back and yell.
  • Dual monitors on every development workstation.
  • Fast PCs (4 GB battery-backed RAMdisks, anyone?).
  • Subversion for revision control. Atomic commits kick ass.
  • Management has a clue (programmers help make the release plan, rather than being held to an unrealistic plan made by a non-programmer).
  • Development is done via pair programming (two programmers at one keyboard), so there’s always someone to brainstorm with, knowledge spreads quickly, and code reviews aren’t after-the-fact.
  • Between QA, and our four automated-build machines running over 10,000 tests every time code is committed, feedback is usually pretty quick — it’s rare for a bug to go unnoticed for more than a few days (or, more typically, a few minutes), so you don’t have to re-familiarize yourself with the code to fix it.

The job posting goes into a fair bit of detail about the position, and about our team. If you have further questions about the job, feel free to post ’em here, and I’ll answer what I can. If you want to apply, see the links at the bottom of the job post.

If it’s important enough to comment, it’s important enough to test

Several people disagreed when I said you should try to write code that doesn’t need comments. I’ll be addressing some of their thoughts — sometimes by conceding they were right, sometimes with constructive suggestions.

I’ll start with a couple of people who suggested that comments should explain code that was added to fix a bug, or the “tricky code” that you sometimes have to write.

Those are both scenarios where a comment isn’t enough. You need an automated test.

Tricky code

The best way to deal with tricky code is by not writing it. But I’ll assume you’re dealing with a case where some manner of tricky code is indeed the best way to get the job done.

Suppose you’re writing a Delphi constructor, and for some reason you need to put some code before the call to inherited Create. (That certainly qualifies as tricky.) Assuming there’s a reason why that’s necessary (and not just sloppiness), you should have an automated test that proves the resulting behavior is correct… something that will pass with the tricky code, and will fail without it. A test that would break if someone else (or you, for that matter) later misunderstood the importance of that ordering, and “fixed” it by moving the code after inherited Create. A test that would pass if someone found a less tricky way to accomplish the same thing.
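Delphi’s inherited Create has no exact Python equivalent, but the shape of such a test translates to any language with constructor chaining. Here’s a minimal sketch (the class and method names are invented for illustration): the ordering matters because the base constructor calls a method the subclass overrides, and the override reads state that has to exist first.

```python
import unittest

class Widget:
    def __init__(self):
        # The base constructor calls a hook the subclass may override.
        self.label = self.default_label()

    def default_label(self):
        return "widget"

class FancyWidget(Widget):
    def __init__(self, prefix):
        # Tricky bit: this assignment must happen BEFORE the inherited
        # constructor runs, because default_label() reads it.
        self.prefix = prefix
        super().__init__()

    def default_label(self):
        return self.prefix + "-widget"

class FancyWidgetTests(unittest.TestCase):
    def test_label_uses_prefix_set_before_base_constructor(self):
        # Blows up with AttributeError if someone "fixes" the ordering by
        # moving super().__init__() above the prefix assignment.
        self.assertEqual(FancyWidget("shiny").label, "shiny-widget")
```

Swap the two lines in FancyWidget’s constructor and the test fails with an AttributeError — exactly the tripwire you want guarding the tricky ordering.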

Bug fixes

If you’re fixing a bug, you should definitely be writing an automated test, if at all possible. That way, if someone later makes a change that breaks the same thing again, you’ll know. As Brian Marick said (and we had posted on our wall for a while), “No bug should be hard to find the second time.”

Then look again

Once you’ve got your automated test, take another look. Is the comment still necessary?

Depending on your audience, maybe it is. As someone else pointed out, if you’re a library vendor, then comments have a lot of value.

But I often find that, with the test in place, the comment can just go away.