PowerShell unit testing: mocking constructors

I started using PowerShell about a year ago (my new team uses it for both build and deployment scripts), and I keep meaning to start blogging about it.

Today’s tip: mocking constructors.

In most languages, you can’t write a unit test that intercepts constructor calls. If you have a method like this:

public void OpenBrowser() {
    var driver = new FirefoxDriver();
    // ...
}

then your method is tightly coupled to the FirefoxDriver class, and (without refactoring) there’s no way for your tests to call that method without creating a new instance of FirefoxDriver, and now everything is fraught with peril. This kind of tight coupling has been a problem since forever, and it’s the reason why we have the Factory pattern.

(Yes, you can use something like Microsoft Shims or TypeMock Isolator to intercept constructor calls, but those work by going outside the language and sinking their hooks deep into the .NET runtime. And you still have to return a FirefoxDriver instance; you can’t substitute something else.)

But in PowerShell, as I recently discovered, you don’t need the Factory pattern. The way to create .NET objects is to call the New-Object cmdlet. And Pester, the PowerShell unit-testing framework, can mock any command. Including New-Object.

Now, in C# code, I’m all about refactoring the code to use a factory (or a Func<T>, or dependency injection). But PowerShell is a different beast with different idioms, and I’ve found cases where mocking New-Object makes really good sense.

Here’s the basic template I work off of:

Mock New-Object {
    [PSCustomObject] @{
        Property1 = "value1"
        Property2 = 42
    }
}

This tells Pester to replace the New-Object function with my own script block, which uses the [PSCustomObject] syntactic sugar to return a new object with two properties, Property1 and Property2.
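To put that in context, here’s roughly what a full Pester test using this mock might look like (the Get-BrowserInfo function under test, and its properties, are invented for illustration):

```powershell
Describe "Get-BrowserInfo" {
    It "uses the object that New-Object returns" {
        Mock New-Object {
            [PSCustomObject] @{
                Property1 = "value1"
                Property2 = 42
            }
        }

        # Get-BrowserInfo calls New-Object internally, and gets our fake instead.
        $result = Get-BrowserInfo

        $result.Property2 | Should Be 42
    }
}
```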

The script block that you pass to Mock can also access the $TypeName and $ArgumentList parameters that were passed to New-Object, so you can do things like returning fakes for some classes but falling back to the original New-Object cmdlet for all other classes:

Mock New-Object {
    $Type = Invoke-Expression "[$TypeName]"
    if ([OpenQA.Selenium.Remote.RemoteWebDriver].IsAssignableFrom($Type)) {
        [PSCustomObject] @{}
    } else {
        & (Get-Command New-Object -CommandType Cmdlet) @Args
    }
}

But I’d been using this technique for a while before I had to get that fancy. In PowerShell you actually don’t use New-Object all that often: arrays, hash tables, and [PSCustomObject]s have their own dedicated syntax and don’t need to go through New-Object (which is fine, since they don’t have any side effects you’d need to worry about mocking).
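For instance, all of these create objects without ever touching New-Object (the values are just for illustration):

```powershell
$array  = @(1, 2, 3)                        # array literal
$hash   = @{ Name = "Imp"; HitPoints = 10 } # hash table literal
$object = [PSCustomObject] @{ Id = 42 }     # custom object, no New-Object needed
```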

Microsoft.TeamFoundationServer.Client: getting started, and getting sprint dates

There’s a new REST API in TFS 2015. It’s completely undocumented at the moment, although Visual Studio Online’s REST API is well-documented and they probably overlap a lot.

There’s a NuGet package called Microsoft.TeamFoundationServer.Client that provides a .NET client library for the TFS REST API. Unfortunately, it’s also completely undocumented, and in this case there is no other documentation to refer to — which makes it hard to figure out where to start.

But you’re in luck: I’ve been tinkering with it a bit, and I’m going to start blogging what I’ve learned about how to use the darned thing.

Connecting to work item tracking

Unlike the old-school TFS API, there isn’t some central class you have to create and authenticate to and then query for services.

Instead, you instantiate the client for the particular service you want, and hand it an object with authentication information. Then you start calling methods. I’m guessing that each method corresponds to a single HTTP request, but I haven’t verified that.

Let’s say we want to do something with work item tracking — basically anything relating to work items, including things like saved queries, area paths and iterations, work-item edit history… all that stuff. For that, we need to instantiate WorkItemTrackingHttpClient.

using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.VisualStudio.Services.Common;

var client = new WorkItemTrackingHttpClient(
    new Uri("http://myserver:8080/tfs"),
    new VssCredentials(true));


VssCredentials tells the library how you’re going to authenticate. There are a bunch of constructor overloads, and I’m not clear on when you would use some of them. But here are the most obvious ways I see to authenticate:

Authenticate using my current Windows credentials

new VssCredentials(true)

This is probably simplest in most cases. Just pass true for the useDefaultCredentials constructor parameter.

Authenticate using a specific username and password

Maybe you’ve got a username and password in your web.config (stored in encrypted form, of course), and you want to use those to authenticate against TFS.

new VssCredentials(new WindowsCredential(new NetworkCredential(userName, password)))

It’s possible there’s a way to do that without chaining quite so many objects together. Or maybe not; you get used to things being unnecessarily complicated when you’re working with TFS.

Getting a list of sprints

Sometimes you need to query the work items for the current sprint. So let’s start by getting the list of sprints.

Sprints are just iterations (with some extra metadata that describes their start and finish dates), so we’ll query the iteration tree.

using Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models;

var rootIterationNode = await client.GetClassificationNodeAsync(
    teamProjectName,
    TreeStructureGroup.Iterations,
    depth: int.MaxValue);

Each team project has its own areas and iterations, so you have to pass in the name of your team project. I assume you already know your teamProjectName and have it ready to pass in. (You can also pass in the project’s Guid if you happen to have it.)

GetClassificationNodeAsync returns a WorkItemClassificationNode:

public class WorkItemClassificationNode : WorkItemTrackingResource
{
    public WorkItemClassificationNode();

    public IDictionary<string, object> Attributes { get; set; }
    public IEnumerable<WorkItemClassificationNode> Children { get; set; }
    public int Id { get; set; }
    public string Name { get; set; }
    public TreeNodeStructureType StructureType { get; set; }
}

The method returns the root node for the iteration tree; as far as I know, the root’s Name always matches the name of the team project. Then you’ve got its children, and you can recursively walk your way through the tree.

Be careful — if a node has no children, Children is null rather than an empty list. (Makes sense for the JSON, but the API really should have smoothed out that wrinkle.)

You’ll notice that this class doesn’t have properties for StartDate and FinishDate for sprints. (I think that’s because you can also ask this method for area paths, where start- and finish-date properties would make no sense at all.) The object hides the dates inside the Attributes dictionary, under the keys startDate and finishDate; the values are DateTimes. Once again, beware: if a node has no attributes (e.g., if it represents an iteration that isn’t a sprint), the Attributes dictionary is null.

Here’s some sample code to get the start and finish dates (if present) from an iteration. If they’re present, it’s a sprint.

if (iterationNode.Attributes != null) {
    object startDateValue;
    object finishDateValue;
    iterationNode.Attributes.TryGetValue("startDate", out startDateValue);
    iterationNode.Attributes.TryGetValue("finishDate", out finishDateValue);
    var startDate = startDateValue as DateTime?;
    var finishDate = finishDateValue as DateTime?;
    if (startDate.HasValue && finishDate.HasValue) {
        // do something with startDate.Value and finishDate.Value
    }
}

Or you could do like I do, and write a GetValueOrDefault extension method for Dictionary<TKey, TValue>, at which point it becomes:

if (iterationNode.Attributes != null) {
    var startDate = iterationNode.Attributes.GetValueOrDefault("startDate") as DateTime?;
    var finishDate = iterationNode.Attributes.GetValueOrDefault("finishDate") as DateTime?;
    if (startDate.HasValue && finishDate.HasValue) {
        // do something with startDate.Value and finishDate.Value
    }
}

Other notes

If you want to load both iterations and area paths, you can save yourself a round-trip by calling client.GetRootNodesAsync, which returns them both in a single call (it returns a list of WorkItemClassificationNode instead of a single node). Remember to specify the depth parameter, or it’ll only return the root iteration and the root area path, without their children.
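I haven’t exercised every overload, but the call appears to mirror GetClassificationNodeAsync (assuming the same teamProjectName variable as before):

```csharp
var rootNodes = await client.GetRootNodesAsync(
    teamProjectName,
    depth: int.MaxValue);
// rootNodes is a List<WorkItemClassificationNode> containing both the root
// area node and the root iteration node, with Children populated to the
// requested depth.
```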

Further reading

The Visual Studio Online API Overview has a good conceptual overview of some of the terms you’ll see; for example, it explains the difference between “revisions” and “updates” when you’re looking at work item history. Worth checking out.

If I blog more about Microsoft.TeamFoundationServer.Client, you’ll be able to find it under my microsoft.teamfoundationserver.client tag.

Bootstrap grid system explained

One of my co-workers ran across an article called “The Subtle Magic Behind Why the Bootstrap 3 Grid Works”. It’s worth mentioning here, partly so I have a good place to find the link again.

The article does a very good job of laying out two things: how Bootstrap’s grid system works (for those of you, like me, who are geeks and want to know how everything works), and — far more importantly — how to use Bootstrap’s grid system correctly (for those of you, like me, who wished the Bootstrap documentation did a better job on that score).

The “how to make it work” is vital. The first time I tried to use Bootstrap’s grid system, I tried to follow the documentation, finally gave up in disgust, and just did a View Source to see what classes the documentation page actually used, and dorked around with it until I figured out how many of those I really needed. If you read this article, you won’t have to do that. It lays out, very clearly, what classes you need to put where, and what you can safely nest inside what. The illustrations are also quite helpful.

Well worth the read.

Angular street cred

Today at work, a couple of my co-workers flagged me down to help them figure out a problem with an Angular app they were working on.

Now, my only experience with Angular has involved playing with it in my spare time (and trying to write a video game with it) — whereas these are people who have been doing Angular development as their day job for the past month or so.

Of course, I did volunteer to give the team a lunch-and-learn presentation, back when they started on this project, to help introduce them to Angular. So I guess I already had some Angular street cred. Plus, they’ve known me for years — enough to know how I like to dig into the details and figure out how things tick.

But still, I thought that was pretty cool — that even in my spare time, I’ve picked up enough Angular knowledge that they called me over to help.

(And yes, I was able to help get their code working.)

ng-rpg: Trying to scroll the world map with Angular animations

I’ve talked before about how I’m writing a Final Fantasy-style roleplaying videogame in AngularJS. As is my wont, I’m actually trying to write a reusable framework that other people could use to build their own console RPGs; my game will just be a proof-of-concept for the framework.

(I’m calling dibs on the name “ng-rpg”.)

One of the most basic things that a Final Fantasy-style RPG needs to do is let you walk around the map. Angular has animation support baked in, and I had some ideas on how I could implement “walk around the map” with animation and databinding, as crazy as that sounds. But I needed to do some testing to make sure I’d get good results.

Let’s say the user hits the left arrow on the keyboard. I always show the hero in the center of the screen, so for the hero to move left within the map, the map needs to slide to the right. I scale the tiles based on screen size, but let’s say the map needs to move 100 pixels. Once it’s moved 100 pixels, I need to check to see whether the left arrow is still held down; if so, I kick off the animation again, and the hero keeps moving smoothly westward through the (scrolling) map.

Angular can do a bunch of this. I could use databinding to add a CSS class to the “map” div, and as long as I’ve defined the animations in my CSS, Angular would automatically kick off a nice animation to move everything 100 pixels.

I also need to know when the animation is done, so I can’t just use the built-in ngClass binding — it has no way to give a “done” notification. Instead, I need to write my own directive, and have it use the $animate service to add the class and specify a “done” callback. Then in the callback, I can decide whether I need to restart the animation, or perhaps start a different animation, or just stop. So far so good.
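Here’s a rough sketch of what such a directive might look like against the Angular 1.2 $animate API (the module name, directive name, CSS class, and scope property are all invented for illustration):

```javascript
angular.module('rpg').directive('mapScroller', ['$animate', function($animate) {
  return function(scope, element) {
    function scrollStep() {
      // $animate.addClass kicks off the CSS transition for the class, and
      // invokes the third argument once the animation is done.
      $animate.addClass(element, 'slide-right', function() {
        // (A real implementation would also bump the map's base position
        // here before removing the class.)
        element.removeClass('slide-right');
        if (scope.leftArrowDown) {
          scrollStep();  // arrow key still held down: restart the animation
        }
      });
    }
    scope.$watch('leftArrowDown', function(down) {
      if (down) { scrollStep(); }
    });
  };
}]);
```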

The question is, if the callback restarts the animation, what will the user see? Will the transition be seamless and continuous, or will there be a tiny pause before the animation restarts?

To find out, I put together this proof-of-concept plunk. It just has a box that you can move left and right using a couple of buttons, and each time you tell it to move, it does two animations in a row, so you can see whether there’s a seam between them.

The results were disappointing. With Angular 1.2, Chrome’s animation is smooth and seamless; you can’t even tell it’s chaining two animations end-to-end. But in both Firefox and IE, there’s a noticeable pause before it starts the second animation, so the animation is choppy instead of continuous. And in the Angular 1.3 beta (plunk), I haven’t had any luck getting this technique to work at all — if I remove and re-add the class in the “done” event, the animation doesn’t restart.

So it looks like pure-Angular animations won’t really work for RPG maps (though I expect they’ll be pretty slick for battle animations). But that’s okay; my next map-animation proof of concept showed more promise. Stay tuned.

Using ImageMagick for palette-swapping

A common thing in console RPGs is palette-swapped monsters, where the same image is given a different set of colors, so an Imp becomes an Imp Captain, Dark Imp, etc. without the need for brand-new artwork. I used this technique to generate the various hair colors in my Ponytail and Plain hairstyles that I posted on OpenGameArt and contributed to the Universal LPC Spritesheet.

You can do palette swaps fairly straightforwardly with the ImageMagick command line. You pass something like the following to the convert command line:

-fill #FFFF00 -opaque #999999

This replaces all light gray pixels (#999999) with eye-searing yellow (#FFFF00). Note that it’s target color first, then source. You can repeat this pair of commands as often as you like.
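Putting it together, a complete invocation might look like this (the filenames are made up, and the tiny generated input just stands in for real monster art). Note the quotes around the colors: an unquoted # starts a comment in most shells.

```shell
# Create a 2x1 light-gray test image (stands in for real monster art).
convert -size 2x1 xc:"#999999" imp.png

# Replace light gray with yellow: new color after -fill, color to replace
# after -opaque.
convert imp.png -fill "#FFFF00" -opaque "#999999" imp-captain.png
```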

It gets a little odd when you get to semitransparent colors. Say your source image has some pixels that are #9C59919C (that’s ImageMagick’s oddball RGBA syntax, with the alpha at the end). If you try to replace #9C5991 with some other color, it won’t affect the semitransparent pixels — -opaque does an exact match, and #9C59919C is not an exact match for #9C5991[FF]. So you need to explicitly specify the source and target alpha with each semitransparent color:

-fill #FF00FF9C -opaque #9C59919C

Note for GIMP users: If you use ImageMagick to replace semitransparent colors, and then open the output file in GIMP, GIMP may not show the semitransparency. This seems to be a GIMP-specific bug with indexed-color images (i.e., those stored with a palette). Just switch GIMP to Image > Mode > RGB and it’ll display correctly.

Combine this with the trick we saw last time to remove the embedded create/modify dates and you get something that looks like this (though all on one line):

    -fill #FFFF00 -opaque #999999
    -fill #FF00FF9C -opaque #9C59919C
    +set date:create +set date:modify

Using ImageMagick to write PNGs without embedded create/modify dates

I’m writing some Rake tasks to generate a bunch of PNGs to check into version control, and ran into a problem: every time ImageMagick generated the files, they showed up as diffs in Git, even if the pixel content was identical. The files’ hashes were different each time I generated them.

I did some hunting, and found a discussion thread “create/modify-date written into file”, but its fix (adding +set modify-date +set create-date to the command line before the output filename) didn’t work for me — if I generated the same content twice, the two files were still different.

So I looked at the output of identify:

identify -verbose generated.png

Which spews out quite a lot of output, but in it was this:

    date:create: 2014-01-12T08:01:31-06:00
    date:modify: 2014-01-12T08:01:31-06:00

Maybe they just renamed the properties in a later version of ImageMagick? (The thread was from IM 6.4; I’m on 6.8.7.) So I tried adding +set date:create +set date:modify.

And that did it: I could generate the same file twice, and the two files were binary equal.

file 'generated.png' => ['infile.png', 'Rakefile'] do
  sh "convert infile.png[64x64+0+0] infile.png[64x64+128+0] +append " +
    "+set date:create +set date:modify generated.png"
end

Growl for angular-seed test-status notifications

Updated 5/4/2014 for Angular 1.2.16 and Karma 0.10.10.

When you run angular-seed’s test runner (formerly scripts\test.bat, now npm test), it runs all the tests once, and then enters watch mode. If you make a change to any of your source files (either production code or test code), it will automatically re-run the tests. This is pretty cool, but you need some way to see the test results.

If I were doing Angular development at work, I’d have dual monitors, and I could put my editor on one monitor and the test runner’s console window on the other monitor, so I’d get instant feedback. But then I’d go to look something up in the documentation, and now the console is covered up by a Web browser, so I’d have to juggle that. And in my case, Angular is just a hobby, and I don’t have dual monitors at home.

So I went looking for a way to show the build status via toast alerts. And it turned out to be pretty easy.


The Karma test runner has lots of extensions available, and one of them shows build status via Growl notifications.

Growl for Windows in action

Growl is a toast-alert platform for the Mac, and there’s also a Growl for Windows. I’ve included a screenshot of a couple of alerts so you can see what it looks like. Each alert sticks around for a few seconds, so I made my test fail, saved, quickly made it pass, saved, and took a screenshot.

So this is pretty cool. You’ll still have to look at the console window to see which tests failed and why, but if you expected it to pass and it did, you can just keep storming along.

Configuring angular-seed and Growl

  1. Download and install Growl for Windows. The installation is pretty simple.

  2. Go to a command prompt, cd to your project directory (the directory with the package.json in it), and type:

    npm install --save-dev karma-growl-reporter

    This will install the karma-growl-reporter Node.js module (and all its dependencies) into your project’s node_modules directory, where they need to be for Karma to find them.

    It will also (because of the --save-dev option) automatically modify your package.json file to say that you depend on the karma-growl-reporter package. This isn’t important now, but will be when you want to check out your code on another computer — you can rebuild your node_modules directory on that other computer by running npm install to install all the dependencies listed in your package.json.

  3. Edit your config/karma.conf.js and make the changes labeled “STEP 1” and “STEP 2”. (The below config file also includes the changes to make angular-seed run CoffeeScript tests.)

    module.exports = function(config){
        config.set({
            basePath : '../',

            files : [
                // ... angular-seed's existing file list (plus 'test/unit/**/*.coffee',
                // if you've made the CoffeeScript changes) ...
            ],

            autoWatch : true,

            frameworks: ['jasmine'],

            browsers : ['Chrome'],

            plugins : [
                    // ... angular-seed's existing plugin list ...
                    // STEP 1: Add 'karma-growl-reporter' to this list:
                    'karma-growl-reporter'
                    ],

            // STEP 2: Add the following section ("reporters") -- or if you already have
            // a "reporters" section, add 'growl' to the list:
            reporters: ['progress', 'growl'],

            junitReporter : {
              outputFile: 'test_out/unit.xml',
              suite: 'unit'
            }
        });
    };

Then run npm test, and you should see a toast alert pop up, telling you your tests are passing.

Writing CoffeeScript tests with angular-seed

Updated 5/4/2014 for Angular 1.2.16 and Karma 0.10.10.

The first thing I did, after snagging a copy of angular-seed, was try to write a test in CoffeeScript. It didn’t work out of the box, so I went hunting to figure out how to make it work.

(I also found a few GitHub projects that are prepackaged angular-seed with CoffeeScript, but I didn’t see any that have been kept up-to-date with new Angular versions like angular-seed has. Maybe that’ll change when Angular 1.2 ships. Nope, they’ve pretty much been abandoned.)

Why CoffeeScript rocks for Jasmine/Mocha tests

As awesome as nested describes are, they’re even more awesome in CoffeeScript.

Jasmine tests look like this:

describe('calculator', function() {
    it('adds', function() {
        expect(calculator.add(2, 2)).toBe(4);
    });
});

CoffeeScript has some nice advantages for code like this.

  1. CoffeeScript has a built-in lambda operator, ->, that replaces function() {. So that’s a little less typing, and a lot less of the line taken up by furniture.
  2. CoffeeScript doesn’t need curly braces. Instead, you indicate nested blocks by indenting them (and heck, you were doing that anyway).
  3. CoffeeScript lets you leave off the parentheses around a method’s arguments. (Unlike Ruby, you do still need the () if you’re not passing any arguments.)

Combine these, and you get code that looks like this:

describe 'calculator', ->
    it 'adds', ->
        expect(calculator.add 2, 2).toBe 4

Each line is shorter than before, so there’s less typing and a little better signal-to-noise. But you also don’t need those }); lines at the end, because you don’t need the close-curlies to end the lambdas, and you don’t need the close-parens to end the argument lists for describe and it — in both cases, the indentation suffices. Less ceremony, more essence.

Until you see it on your screen, it’s hard to appreciate just how much it improves your tests, not having all those }); lines. Many tests will consist of a single assertion, so by cutting that worthless extra line, you go from almost 33% noise-by-line-count to 0%. Plus, the entire test goes from three lines to two — so now you can fit 50% more tests on your screen. That’s a win.

The syntax is uncluttered enough that it can even be reasonable to put the whole test on one line:

describe 'calculator', ->
    it 'adds',      -> expect(calculator.add 2, 2).toBe 4
    it 'subtracts', -> expect(calculator.subtract 6, 2).toBe 4

When you’ve got a bunch of little tests that are all on the same theme, this can work really well. That would look a lot uglier if it was function() { instead of ->, and if you had to find a place for the }); as well.

And last but not least, it’s really easy to find a text editor that can do code folding based on indentation (and not as easy to find an editor that can collapse based on curly braces). I use Sublime Text, which is a bit pricey, but you can also do indentation-based code folding with the free SciTE if you put it in Python mode. So if you’ve got a whole nested describe for your calculator’s trig functions, and you want to collapse that because now you’re focusing on hex arithmetic, you just fold that whole section of your code up into a single line.

CoffeeScript tests with angular-seed

It takes some ritual to get angular-seed working with CoffeeScript, but the good news is, you only have to do it once (well, once per project).

Updated 5/4/2014: With Angular 1.2.16 and Karma 0.10.10, the “preprocessors” and “coffeePreprocessor” sections no longer need to be added, so I removed them from the instructions below. If you’re on a version where you do need them, you can copy them from the karma-coffee-preprocessor readme.

All the changes are in config/karma.conf.js. Look for the parts I tagged “STEP 1” and “STEP 2”.

module.exports = function(config){
    config.set({
        basePath : '../',

        files : [
            // ... angular-seed's existing file list ...
            // STEP 1: Add 'test/unit/**/*.coffee' to this list:
            'test/unit/**/*.coffee'
        ],

        autoWatch : true,

        frameworks: ['jasmine'],

        browsers : ['Chrome'],

        plugins : [
                // ... angular-seed's existing plugin list ...
                // STEP 2: Add 'karma-coffee-preprocessor' to this list:
                'karma-coffee-preprocessor'
                ],

        junitReporter : {
          outputFile: 'test_out/unit.xml',
          suite: 'unit'
        }
    });
};


Once that’s done, you’re off and running: you can drop a .coffee file in your project’s test/unit directory, and the test runner will pick up all the tests in it and run them.

Note that, at least with the current versions of angular-seed and Jasmine, you’ll have to stop and restart scripts\test.bat before it’ll pick up the new test file. (This is true any time you add a new test file; it’s not CoffeeScript-specific.) Watch mode apparently doesn’t extend to watching for new test files. Just Ctrl+C to stop it, say Yes (or hit Ctrl+C again) at the “Terminate batch job (Y/N)?” prompt, then run scripts\test.bat again, and you’re on your way. (Update: this is no longer an issue with Karma 0.10.10.)

Home page redesign

The excastle.com home page has been looking pretty drab for years now. Recently I decided to flex my creative muscles and make it look prettier, with the power of Bootstrap and OpenGameArt (props to extradave and pixel32).


ExCastle.com, pre-2013


ExCastle.com, circa 2013

This was a fun way to spend a Saturday, and I like how it turned out. I’m still not going to call myself a graphic designer, but at least the page won’t put you to sleep right away.