
Microsoft.TeamFoundationServer.Client: getting started, and getting sprint dates

Sunday, August 16th, 2015

There’s a new REST API in TFS 2015. It’s completely undocumented at the moment, although Visual Studio Online’s REST API is well-documented and they probably overlap a lot.

There’s a NuGet package called Microsoft.TeamFoundationServer.Client that provides a .NET client library for the TFS REST API. Unfortunately, it’s also completely undocumented, and in this case there is no other documentation to refer to — which makes it hard to figure out where to start.

But you’re in luck: I’ve been tinkering with it a bit, and I’m going to start blogging what I’ve learned about how to use the darned thing.

Connecting to work item tracking

Unlike the old-school TFS API, there isn’t some central class you have to create and authenticate to and then query for services.

Instead, you instantiate the client for the particular service you want, and hand it an object with authentication information. Then you start calling methods. I’m guessing that each method corresponds to a single HTTP request, but I haven’t verified that.

Let’s say we want to do something with work item tracking — basically anything relating to work items, including things like saved queries, area paths and iterations, work-item edit history… all that stuff. For that, we need to instantiate WorkItemTrackingHttpClient.

using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.VisualStudio.Services.Common;

var client = new WorkItemTrackingHttpClient(
    new Uri("http://myserver:8080/tfs"),
    new VssCredentials(true));


VssCredentials tells the library how you’re going to authenticate. There are a bunch of constructor overloads, and I’m not clear on when you would use some of them. But here are the most obvious ways I see to authenticate:

Authenticate using my current Windows credentials

new VssCredentials(true)

This is probably simplest in most cases. Just pass true for the useDefaultCredentials constructor parameter.

Authenticate using a specific username and password

Maybe you’ve got a username and password in your web.config (stored in encrypted form, of course), and you want to use those to authenticate against TFS.

new VssCredentials(new WindowsCredential(new NetworkCredential(userName, password)))

It’s possible there’s a way to do that without chaining quite so many objects together. Or maybe not; you get used to things being unnecessarily complicated when you’re working with TFS.

Getting a list of sprints

Sometimes you need to query the work items for the current sprint. So let’s start by getting the list of sprints.

Sprints are just iterations (with some extra metadata that describes their start and finish dates), so we’ll query the iteration tree.

using Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models;

var rootIterationNode = await client.GetClassificationNodeAsync(
    teamProjectName,
    TreeStructureGroup.Iterations,
    depth: int.MaxValue);

Each team project has its own areas and iterations, so you have to pass in the name of your team project. I assume you already know your teamProjectName and have it ready to pass in. (You can also pass in the project’s Guid if you happen to have it.) The TreeStructureGroup parameter is how you say whether you want the iteration tree or the area-path tree.

GetClassificationNodeAsync returns a WorkItemClassificationNode:

public class WorkItemClassificationNode : WorkItemTrackingResource
{
    public WorkItemClassificationNode();

    public IDictionary<string, object> Attributes { get; set; }
    public IEnumerable<WorkItemClassificationNode> Children { get; set; }
    public int Id { get; set; }
    public string Name { get; set; }
    public TreeNodeStructureType StructureType { get; set; }
}

The method returns the root node for the iteration tree; as far as I know, the root’s Name always matches the name of the team project. Then you’ve got its children, and you can recursively walk your way through the tree.

Be careful — if a node has no children, Children is null rather than an empty list. (Makes sense for the JSON, but the API really should have smoothed out that wrinkle.)
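For instance, here’s a minimal sketch of a recursive walk with that null check in place. The Visit method and the path-building are mine, invented for illustration; only WorkItemClassificationNode and its Name and Children properties come from the library.

void Visit(WorkItemClassificationNode node, string parentPath) {
    // Hypothetical helper: build an iteration path like "MyProject\Release 1\Sprint 3".
    var path = parentPath == null ? node.Name : parentPath + @"\" + node.Name;
    // ... do something with the node and its path ...
    if (node.Children != null) {  // null, not empty, when there are no children
        foreach (var child in node.Children)
            Visit(child, path);
    }
}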

You’ll notice that this class doesn’t have properties for StartDate and FinishDate for sprints. (I think that’s because you can also ask this method for area paths, where start- and finish-date properties would make no sense at all.) The object hides the dates inside the Attributes dictionary, under the keys startDate and finishDate; the values are DateTimes. Once again, beware: if a node has no attributes (e.g., if it represents an iteration that isn’t a sprint), the Attributes dictionary is null.

Here’s some sample code to get the start and finish dates (if present) from an iteration. If they’re present, it’s a sprint.

if (iterationNode.Attributes != null) {
    object startDateValue;
    object finishDateValue;
    iterationNode.Attributes.TryGetValue("startDate", out startDateValue);
    iterationNode.Attributes.TryGetValue("finishDate", out finishDateValue);
    var startDate = startDateValue as DateTime?;
    var finishDate = finishDateValue as DateTime?;
    if (startDate.HasValue && finishDate.HasValue) {
        // do something with startDate.Value and finishDate.Value
    }
}

Or you could do like I do, and write a GetValueOrDefault extension method for Dictionary<TKey, TValue>, at which point it becomes:

if (iterationNode.Attributes != null) {
    var startDate = iterationNode.Attributes.GetValueOrDefault("startDate") as DateTime?;
    var finishDate = iterationNode.Attributes.GetValueOrDefault("finishDate") as DateTime?;
    if (startDate.HasValue && finishDate.HasValue) {
        // do something with startDate.Value and finishDate.Value
    }
}
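If you want to roll your own, here’s a minimal sketch of such an extension method (the class name is mine; .NET 4.5 doesn’t ship one for IDictionary):

using System.Collections.Generic;

public static class DictionaryExtensions {
    public static TValue GetValueOrDefault<TKey, TValue>(
        this IDictionary<TKey, TValue> dictionary, TKey key) {
        TValue value;
        return dictionary.TryGetValue(key, out value) ? value : default(TValue);
    }
}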

Other notes

If you want to load both iterations and area paths, you can save yourself a round-trip by calling client.GetRootNodesAsync, which returns them both in a single call (it returns a list of WorkItemClassificationNode instead of a single node). Remember to specify the depth parameter, or it’ll only return the root iteration and the root area path, without their children.
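A quick sketch of what that call looks like, assuming the same teamProjectName as before:

// One round-trip for both trees. The result is a list of root nodes:
// one for iterations, one for area paths.
var rootNodes = await client.GetRootNodesAsync(teamProjectName, depth: int.MaxValue);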

Further reading

The Visual Studio Online API Overview has a good conceptual overview of some of the terms you’ll see; for example, it explains the difference between “revisions” and “updates” when you’re looking at work item history. Worth checking out.

If I blog more about Microsoft.TeamFoundationServer.Client, you’ll be able to find it under my microsoft.teamfoundationserver.client tag.

Bootstrap grid system explained

Thursday, July 24th, 2014

One of my co-workers ran across an article called “The Subtle Magic Behind Why the Bootstrap 3 Grid Works”. It’s worth mentioning here, partly so I have a good place to find the link again.

The article does a very good job of laying out two things: how Bootstrap’s grid system works (for those of you, like me, who are geeks and want to know how everything works), and — far more importantly — how to use Bootstrap’s grid system correctly (for those of you, like me, who wished the Bootstrap documentation did a better job on that score).

The “how to make it work” is vital. The first time I tried to use Bootstrap’s grid system, I tried to follow the documentation, finally gave up in disgust, and just did a View Source to see what classes the documentation page actually used, and dorked around with it until I figured out how many of those I really needed. If you read this article, you won’t have to do that. It lays out, very clearly, what classes you need to put where, and what you can safely nest inside what. The illustrations are also quite helpful.

Well worth the read.

Angular street cred

Tuesday, July 15th, 2014

Today at work, a couple of my co-workers flagged me down to help them figure out a problem with an Angular app they were working on.

Now, my only experience with Angular has involved playing with it in my spare time (and trying to write a video game with it) — whereas these are people who have been doing Angular development as their day job for the past month or so.

Of course, I did volunteer to give the team a lunch-and-learn presentation, back when they started on this project, to help introduce them to Angular. So I guess I already had some Angular street cred. Plus, they’ve known me for years — enough to know how I like to dig into the details and figure out how things tick.

But still, I thought that was pretty cool — that even in my spare time, I’ve picked up enough Angular knowledge that they called me over to help.

(And yes, I was able to help get their code working.)

ng-rpg: Trying to scroll the world map with Angular animations

Friday, July 4th, 2014

I’ve talked before about how I’m writing a Final Fantasy-style roleplaying videogame in AngularJS. As is my wont, I’m actually trying to write a reusable framework that other people could use to build their own console RPGs; my game will just be a proof-of-concept for the framework.

(I’m calling dibs on the name “ng-rpg”.)

One of the most basic things that a Final Fantasy-style RPG needs to do is let you walk around the map. Angular has animation support baked in, and I had some ideas on how I could implement “walk around the map” with animation and databinding, as crazy as that sounds. But I needed to do some testing to make sure I’d get good results.

Let’s say the user hits the left arrow on the keyboard. I always show the hero in the center of the screen, so for the hero to move left within the map, the map needs to slide to the right. I scale the tiles based on screen size, but let’s say the map needs to move 100 pixels. Once it’s moved 100 pixels, I need to check to see whether the left arrow is still held down; if so, I kick off the animation again, and the hero keeps moving smoothly westward through the (scrolling) map.

Angular can do a bunch of this. I could use databinding to add a CSS class to the “map” div, and as long as I’ve defined the animations in my CSS, Angular would automatically kick off a nice animation to move everything 100 pixels.

I also need to know when the animation is done, so I can’t just use the built-in ngClass binding — it has no way to give a “done” notification. Instead, I need to write my own directive, and have it use the $animate service to add the class and specify a “done” callback. Then in the callback, I can decide whether I need to restart the animation, or perhaps start a different animation, or just stop. So far so good.
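To make the idea concrete, here’s a rough sketch of such a directive. The directive name, CSS class, and key-state flag are all made up for illustration; the real piece is Angular 1.2’s $animate.addClass, which accepts a done callback as its last argument.

// Rough sketch (Angular 1.2 API). 'rpgMap', 'scroll-west', and scope.keyStillDown
// are hypothetical; $animate.addClass(element, className, done) is real.
angular.module('ngRpg').directive('rpgMap', ['$animate', function($animate) {
  return {
    link: function(scope, element) {
      function scrollWest() {
        $animate.addClass(element, 'scroll-west', function() {
          // Animation done: reset the class, then restart if the key is still down.
          element.removeClass('scroll-west');
          if (scope.keyStillDown) {
            scrollWest();
          }
        });
      }
      scope.$on('moveWest', scrollWest);
    }
  };
}]);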

The question is, if the callback restarts the animation, what will the user see? Will the transition be seamless and continuous, or will there be a tiny pause before the animation restarts?

To find out, I put together this proof-of-concept plunk. It just has a box that you can move left and right using a couple of buttons, and each time you tell it to move, it does two animations in a row, so you can see whether there’s a seam between them.

The results were disappointing. With Angular 1.2, Chrome’s animation is smooth and seamless; you can’t even tell it’s chaining two animations end-to-end. But in both Firefox and IE, there’s a noticeable pause before it starts the second animation, so the animation is choppy instead of continuous. And in the Angular 1.3 beta (plunk), I haven’t had any luck getting this technique to work at all — if I remove and re-add the class in the “done” event, the animation doesn’t restart.

So it looks like pure-Angular animations won’t really work for RPG maps (though I expect they’ll be pretty slick for battle animations). But that’s okay; my next map-animation proof of concept showed more promise. Stay tuned.

Using ImageMagick for palette-swapping

Monday, February 24th, 2014

A common thing in console RPGs is palette-swapped monsters, where the same image is given a different set of colors, so an Imp becomes an Imp Captain, Dark Imp, etc. without the need for brand-new artwork. I used this technique to generate the various hair colors in my Ponytail and Plain hairstyles that I posted on OpenGameArt and contributed to the Universal LPC Spritesheet.

You can do palette swaps fairly straightforwardly with the ImageMagick command line. You pass something like the following to the convert command line:

-fill #FFFF00 -opaque #999999

This replaces all light gray pixels (#999999) with eye-searing yellow (#FFFF00). Note that it’s target color first, then source. You can repeat this pair of commands as often as you like.

It gets a little odd when you get to semitransparent colors. Say your source image has some pixels that are #9C59919C (that’s ImageMagick’s oddball RGBA syntax, with the alpha at the end). If you try to replace #9C5991 with some other color, it won’t affect the semitransparent pixels — -opaque does an exact match, and #9C59919C is not an exact match for #9C5991[FF]. So you need to explicitly specify the source and target alpha with each semitransparent color:

-fill #FF00FF9C -opaque #9C59919C

Note for GIMP users: If you use ImageMagick to replace semitransparent colors, and then open the output file in GIMP, GIMP may not show the semitransparency. This seems to be a GIMP-specific bug with indexed-color images (i.e., those stored with a palette). Just switch GIMP to Image > Mode > RGB and it’ll display correctly.

Combine this with the trick we saw last time to remove the embedded create/modify dates and you get something that looks like this (though all on one line):

    -fill #FFFF00 -opaque #999999
    -fill #FF00FF9C -opaque #9C59919C
    +set date:create +set date:modify
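Dropped into a full command line, that might look something like the following (the file names are hypothetical, and the # signs are quoted so the shell doesn’t treat them as comments):

    convert imp.png -fill "#FFFF00" -opaque "#999999" -fill "#FF00FF9C" -opaque "#9C59919C" +set date:create +set date:modify imp-captain.png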

Using ImageMagick to write PNGs without embedded create/modify dates

Sunday, January 12th, 2014

I’m writing some Rake tasks to generate a bunch of PNGs to check into version control, and ran into a problem: every time ImageMagick generated the files, they showed up as diffs in Git, even if the pixel content was identical. The files’ hashes were different each time I generated them.

I did some hunting, and found a discussion thread “create/modify-date written into file”, but its fix (adding +set modify-date +set create-date to the command line before the output filename) didn’t work for me — if I generated the same content twice, the two files were still different.

So I looked at the output of identify:

identify -verbose generated.png

Which spews out quite a lot of output, but in it was this:

    date:create: 2014-01-12T08:01:31-06:00
    date:modify: 2014-01-12T08:01:31-06:00

Maybe they just renamed the properties in a later version of ImageMagick? (The thread was from IM 6.4; I’m on 6.8.7.) So I tried adding +set date:create +set date:modify.

And that did it: I could generate the same file twice, and the two files were binary equal.

file 'generated.png' => ['infile.png', 'Rakefile'] do
  sh "convert infile.png[64x64+0+0] infile.png[64x64+128+0] +append " +
    "+set date:create +set date:modify generated.png"
end

Growl for angular-seed test-status notifications

Sunday, October 20th, 2013

Updated 5/4/2014 for Angular 1.2.16 and Karma 0.10.10.

When you run angular-seed’s test runner (formerly scripts\test.bat, now npm test), it runs all the tests once, and then enters watch mode. If you make a change to any of your source files (either production code or test code), it will automatically re-run the tests. This is pretty cool, but you need some way to see the test results.

If I were doing Angular development at work, I’d have dual monitors, and I could put my editor on one monitor and the test runner’s console window on the other monitor, so I’d get instant feedback. But then I’d go to look something up in the documentation, and now the console is covered up by a Web browser, so I’d have to juggle that. And in my case, Angular is just a hobby, and I don’t have dual monitors at home.

So I went looking for a way to show the build status via toast alerts. And it turned out to be pretty easy.


The Karma test runner has lots of plugins available, and one of them shows test status via Growl notifications.

Growl for Windows in action

Growl is a toast-alert platform for the Mac, and there’s also a Growl for Windows. I’ve included a screenshot of a couple of alerts so you can see what it looks like. Each alert sticks around for a few seconds, so I made my test fail, saved, quickly made it pass, saved, and took a screenshot.

So this is pretty cool. You’ll still have to look at the console window to see which tests failed and why, but if you expected it to pass and it did, you can just keep storming along.

Configuring angular-seed and Growl

  1. Download and install Growl for Windows. The installation is pretty simple.

  2. Go to a command prompt, cd to your project directory (the directory with the package.json in it), and type:

    npm install --save-dev karma-growl-reporter

    This will install the karma-growl-reporter Node.js module (and all its dependencies) into your project’s node_modules directory, where they need to be for Karma to find them.

    It will also (because of the --save-dev option) automatically modify your package.json file to say that you depend on the karma-growl-reporter package. This isn’t important now, but will be when you want to check out your code on another computer — you can rebuild your node_modules directory on that other computer by running npm install to install all the dependencies listed in your package.json.

  3. Edit your config/karma.conf.js and make the changes labeled “STEP 1” and “STEP 2”. (The below config file also includes the changes to make angular-seed run CoffeeScript tests.)

    module.exports = function(config){
      config.set({
        basePath : '../',
        files : [
          // ... angular-seed's existing file patterns ...
          'test/unit/**/*.coffee'
        ],
        autoWatch : true,
        frameworks: ['jasmine'],
        browsers : ['Chrome'],
        plugins : [
                // ... angular-seed's existing plugins, including
                // 'karma-coffee-preprocessor' from the CoffeeScript setup ...
                // STEP 1: Add 'karma-growl-reporter' to this list
                'karma-growl-reporter'
                ],
        // STEP 2: Add the following section ("reporters") -- or if you already have
        // a "reporters" section, add 'growl' to the list:
        reporters: ['progress', 'growl'],
        junitReporter : {
          outputFile: 'test_out/unit.xml',
          suite: 'unit'
        }
      });
    };

Then run npm test, and you should see a toast alert pop up, telling you your tests are passing.

Writing CoffeeScript tests with angular-seed

Friday, October 18th, 2013

Updated 5/4/2014 for Angular 1.2.16 and Karma 0.10.10.

The first thing I did, after snagging a copy of angular-seed, was try to write a test in CoffeeScript. It didn’t work out of the box, so I went hunting to figure out how to make it work.

(I also found a few GitHub projects that package angular-seed with CoffeeScript already set up, but I didn’t see any that have been kept up-to-date with new Angular versions like angular-seed has. Maybe that’ll change when Angular 1.2 ships. Update: nope, they’ve pretty much been abandoned.)

Why CoffeeScript rocks for Jasmine/Mocha tests

As awesome as nested describes are, they’re even more awesome in CoffeeScript.

Jasmine tests look like this:

describe('calculator', function() {
    it('adds', function() {
        expect(calculator.add(2, 2)).toBe(4);
    });
});

CoffeeScript has some nice advantages for code like this.

  1. CoffeeScript has a built-in lambda operator, ->, that replaces function() {. So that’s a little less typing, and a lot less of the line taken up by furniture.
  2. CoffeeScript doesn’t need curly braces. Instead, you indicate nested blocks by indenting them (and heck, you were doing that anyway).
  3. CoffeeScript lets you leave off the parentheses around a method’s arguments. (Unlike Ruby, you do still need the () if you’re not passing any arguments.)

Combine these, and you get code that looks like this:

describe 'calculator', ->
    it 'adds', ->
        expect(calculator.add 2, 2).toBe 4

Each line is shorter than before, so there’s less typing and a little better signal-to-noise. But you also don’t need those }); lines at the end, because you don’t need the close-curlies to end the lambdas, and you don’t need the close-parens to end the argument lists for describe and it — in both cases, the indentation suffices. Less ceremony, more essence.

Until you see it on your screen, it’s hard to appreciate just how much it improves your tests, not having all those }); lines. Many tests will consist of a single assertion, so by cutting that worthless extra line, you go from almost 33% noise-by-line-count to 0%. Plus, the entire test goes from three lines to two — so now you can fit 50% more tests on your screen. That’s a win.

The syntax is uncluttered enough that it can even be reasonable to put the whole test on one line:

describe 'calculator', ->
    it 'adds',      -> expect(calculator.add 2, 2).toBe 4
    it 'subtracts', -> expect(calculator.subtract 6, 2).toBe 4

When you’ve got a bunch of little tests that are all on the same theme, this can work really well. That would look a lot uglier if it were function() { instead of ->, and if you had to find a place for the }); as well.

And last but not least, it’s really easy to find a text editor that can do code folding based on indentation (and not as easy to find an editor that can collapse based on curly braces). I use Sublime Text, which is a bit pricey, but you can also do indentation-based code folding with the free SciTE if you put it in Python mode. So if you’ve got a whole nested describe for your calculator’s trig functions, and you want to collapse that because now you’re focusing on hex arithmetic, you just fold that whole section of your code up into a single line.

CoffeeScript tests with angular-seed

It takes some ritual to get angular-seed working with CoffeeScript, but the good news is, you only have to do it once (well, once per project).

Updated 5/4/2014: With Angular 1.2.16 and Karma 0.10.10, the “preprocessors” and “coffeePreprocessor” sections no longer need to be added, so I removed them from the instructions below. If you’re on a version where you do need them, you can copy them from the karma-coffee-preprocessor readme.

All the changes are in config/karma.conf.js. Look for the parts I tagged “STEP 1” and “STEP 2”.

module.exports = function(config){
  config.set({
    basePath : '../',

    files : [
      // ... angular-seed's existing file patterns ...
      // STEP 1: Add 'test/unit/**/*.coffee' to this list:
      'test/unit/**/*.coffee'
    ],

    autoWatch : true,

    frameworks: ['jasmine'],

    browsers : ['Chrome'],

    plugins : [
            // ... angular-seed's existing plugins ...
            // STEP 2: Add 'karma-coffee-preprocessor' to this list:
            'karma-coffee-preprocessor'
            ],

    junitReporter : {
      outputFile: 'test_out/unit.xml',
      suite: 'unit'
    }
  });
};


Once that’s done, you’re off and running: you can drop a .coffee file in your project’s test/unit directory, and the test runner will pick up all the tests in it and run them.

Note that, at least with the current versions of angular-seed and Karma, you’ll have to stop and restart scripts\test.bat before it’ll pick up the new test file. (This is true any time you add a new test file; it’s not CoffeeScript-specific.) Watch mode apparently doesn’t extend to watching for new test files. Just Ctrl+C to stop it, say Yes (or hit Ctrl+C again) to the “Terminate batch job (Y/N)?” prompt, and then run scripts\test.bat again, and you’re on your way. Update: this is no longer an issue with Karma 0.10.10.

Latest project: a video game in AngularJS

Monday, October 14th, 2013

I’ve been playing with AngularJS (“Superheroic JavaScript MVW Framework”) for a while. And, as is my wont, I’m trying to learn it by writing a video game with it.

Here’s an introduction to what I’m doing. I’ll go into more details in the near future.

The video game

I’ve been writing and re-writing the same video game for probably at least ten years now, sometimes in Delphi (with or without DirectX), sometimes in WinForms or XNA or XAML, now in HTML5/JavaScript. (It’s a spare-time project, which means I don’t do it to finish something; I do it to learn something.)

The game I’m trying to write is a 2D console RPG, in the vein of the early Final Fantasy games. I haven’t decided whether I’ll target a Web browser, or a Windows Store app; possibly both. Gameplay will be much like Final Fantasy if it were written with a touchscreen in mind. Artwork will come largely from OpenGameArt, especially the Liberated Pixel Cup-style artwork (including some of my own) and the Universal LPC Spritesheet.

And yes, I’m going to try to do this using Angular. Hey, you don’t really know a tool until you’ve tested its limits, right?

A brief introduction to Angular

Some JavaScript frameworks, like jQuery and Handlebars and Knockout, try to solve one small problem and solve it well. Angular, on the other hand, tries to do it all — everything from templating to two-way databinding to extending HTML to dependency injection to testing. And it does it all with relatively few mechanisms — you’ve got services for shared code, directives for manipulating the DOM, and controllers/scopes, and that’s about it.

One nice perk is that Angular uses Jasmine for testing, which means you get those beautiful nested describes. God, I wish I could do that in .NET for my day job.

Angular has a fairly steep learning curve, so I’m not going to cover the mechanics here. The Angular web site has good documentation, including a tutorial. There’s also some good stuff on YouTube, and I picked up a copy of “Mastering Web Application Development with AngularJS”, which is pretty decent.


The angular-seed project looks like it’s probably the best starting point for writing a generic Angular app. Sure, you could start from zero and reference Angular via CDN, but angular-seed gives you a few niceties like a local Web server (node scripts/web-server.js), and it comes pre-configured for you to be able to run your tests (scripts\test.bat) — in watch mode, yet, so they automatically re-run whenever you save one of your source files. Pretty nice.

(There are other starting-point projects out there too, so if you want something with Bootstrap already baked in, or a Node.js/Express backend, there may already be something better than angular-seed for you. But I don’t want any of that stuff, so angular-seed is great for me.)

My only real complaint about angular-seed is that it doesn’t have any Grunt support built-in. It’d be nice to at least have a Grunt task to run the tests, especially since Grunt doesn’t look nearly as easy to get started with as something like Rake. But otherwise, angular-seed seems to be a great jumping-off point.
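For what it’s worth, a bare-bones test task is only a few lines with the grunt-karma plugin. Here’s a hedged sketch (none of this is part of angular-seed; grunt-karma and the config-file path are my assumptions):

// Hypothetical Gruntfile.js: wires "grunt test" up to the Karma config
// that angular-seed already ships with. Assumes grunt-karma is installed.
module.exports = function(grunt) {
  grunt.loadNpmTasks('grunt-karma');
  grunt.initConfig({
    karma: {
      unit: {
        configFile: 'config/karma.conf.js',
        singleRun: true
      }
    }
  });
  grunt.registerTask('test', ['karma:unit']);
};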

So… Angular for a video game? Really?

I don’t know whether it would be practical for the overland map, but I think Angular has some potential for the battle screen. There’s a lot of state management that would work well with an MV* framework, like showing a hero’s menu when it’s his turn. And the upcoming Angular 1.2 has some nice support for CSS3 animations — and there are a lot of little animations during a battle: hero takes a step forward, swings his sword, bad guy flinches from the hit, damage numbers appear, etc. I think I can make all those animations happen with databinding — which would mean it’d all be just as testable as I wanted it to be.

Of course, if you’re just using the built-in databinding, then Angular doesn’t tell you when the animation is done so you can start the next one. But Angular is extensible, and if you write your own directive, you can get those notifications easily. I’ve been working on an animation queue to coordinate all these animations across different scopes and elements, and it’ll be interesting to see if I can pull it off.

What’s next?

Angular-seed gives you a workable start. But to be honest, it’s not really a comfortable environment, at least not right out of the box. In particular, Jasmine tests are much nicer in CoffeeScript than they are in JavaScript. And you need a good way to see whether your tests are passing or failing. More on those next time.

Making DataContractSerializer play nice with UpdateControls

Friday, January 18th, 2013

I’m really psyched about UpdateControls. I haven’t used it in too many projects yet, but it’s already changing the way I think about INotifyPropertyChanged, and opening new horizons yet unexplored.

(If you’re not familiar with UpdateControls, here’s my intro. tl;dr: If you do MVVM, you want to learn about UpdateControls. And it’s free.)

But UpdateControls just takes care of what happens between your model and your viewmodel and your view. There are other concerns your app has to figure out too, like moving data between your model and some kind of persistent storage. I’m writing a game, so I don’t have a database back-end or a service tier; I just want to save state to a local file. Sounds like a job for DataContractSerializer.

Simple enough, right? I just slap a [DataContract] attribute on my model, [DataMember] on all the public properties, and write it out, no problem. Then I try to load it back in, and… my app promptly crashes with a NullReferenceException. What went wrong?

War of the Constructors

The crux of the problem is that an UpdateControls-based model has to do some initialization in its constructor before you can start using its properties, whereas DataContractSerializer doesn’t acknowledge that the constructor even exists.

Recall that, when you define a model object using UpdateControls, you wrap each of your data fields in an Independent<T> that handles the bottom half of the notification magic. Our model object looks something like this:

public class FooModel {
    private readonly Independent<bool> _active = new Independent<bool>();
    public bool Active {
        get { return _active; }
        set { _active.Value = value; }
    }
}

Notice that the _active field is initialized to a new instance of Independent<bool>. When this code is compiled, that assignment actually gets inserted at the beginning of the IL for FooModel’s constructor(s).

DataContractSerializer, when it deserializes a stream back into an object, doesn’t call any constructors; it just grabs an empty hunk of memory and calls it an object. I don’t know why it skips the constructors; it makes things awfully weird — as well as making it hard to delegate the property’s storage to another object, like you do with UpdateControls. Since FooModel’s constructor never ran, _active is null; so when the deserializer tries to set the Active property, the _active.Value assignment causes a NullReferenceException.

You can get different behavior by removing the [DataContract] attribute and serializing the object as a Plain Old CLR Object (POCO). If you’re deserializing a POCO, DataContractSerializer actually will call the constructor. But, it won’t serialize sub-objects — for POCOs, it looks like it only operates on properties with primitive types. No good for me — I don’t want to cram my entire application’s state into a single object.

Classic solution: Persistence objects

If I was using Entity Framework, I wouldn’t even be thinking about saving my model objects directly; I’d already have a separate set of entity classes that represent tables in the database. These classes wouldn’t depend on UpdateControls, and I’d have to manually copy the data between my model and my entity objects. Similarly, if I was saving to a service tier in a multi-tier application, I’d have the same situation with data transfer objects: I’d have to copy my model object’s contents to a DTO and back.

I could do the same thing with DataContractSerializer. I could define some persistence classes — dumb data contracts that are only used to save and load — and then copy data between those and my models.

The thing is, that “copy data back and forth” step is (a) hard, (b) boring, and (c) easy to screw up. I strive to be constructively lazy, and hard/boring/easy-to-screw-up is high on my list of things to avoid.

At work, we automate the hard/boring parts (and unit-test the easy-to-screw-up parts) with AutoMapper, which works really well. I think it would play pretty nicely with UpdateControls. But there isn’t (yet?) a WinRT version of AutoMapper, so that won’t help me with my WinRT app.

And even if I could use AutoMapper, it feels redundant to make a whole separate set of classes, with all the same properties as my model objects, unless it’s absolutely required by some persistence framework. Even if AutoMapper was there to remove the grunt work (and warn me when I forget to add a property to my persistence object), creating all those extra classes still feels unnecessary. DataContractSerializer should be able to serialize objects; that’s its job. The only problem is that it doesn’t run any of our initialization code. If that was solvable, we’d be golden.


If you poke around the System.Runtime.Serialization namespace, you’ll find the OnDeserializingAttribute.

The documentation for this attribute is pretty vague: it just says that it lets you designate a method to be “called during deserialization of an object”. When during deserialization? Before any fields are deserialized? After all the fields?

But since there’s also an OnDeserializedAttribute, I think I’m fairly safe in guessing that these follow the usual pattern: first the -ing method is called, then some other work (deserializing the object’s properties) is done, then finally the -ed method is called. Assuming that’s true (and I think it is, since my tests are passing), then you can use it to make a constructor stand-in for deserialization.

So you can make these changes:

  1. Add a new method called InitFields.
  2. Remove all the initialization expressions from the field declarations, and move those assignments into InitFields.
  3. Remove the readonly from the fields, since they’re now being assigned in a method, not in the constructor.
  4. Add an OnDeserializing method and tag it with [OnDeserializing], and give it a parameter of type StreamingContext. (I don’t know what the parameter is for, but you get a runtime error if it’s not there.)
  5. Call InitFields from both your constructor and your OnDeserializing method.

If you’re using ReSharper, then you’ll also want to add the JetBrains.Annotations NuGet package, and mark the OnDeserializing method as [UsedImplicitly] so ReSharper doesn’t warn you about the unused parameter.

Et voilà — you now have an UpdateControls-based model that you can serialize and deserialize successfully with DataContractSerializer! Here’s what it looks like:

public class FooModel {
    private Independent<bool> _active;
    private void InitFields() {
        _active = new Independent<bool>();
    }
    public FooModel() {
        InitFields();
    }
    [OnDeserializing, UsedImplicitly]
    private void OnDeserializing(StreamingContext context) {
        InitFields();
    }
    public bool Active { ... }
}

This works, but it feels a bit clumsy, what with the extra two methods, and the assignments being separated from the field declarations. Just for fun, I decided to see if I could go one better.

Automating the process

Why not make a base class that uses Reflection to find all of our Independent<T> fields, and instantiate them automatically as needed?

My first thought was to plug this logic into both the constructor and OnDeserializing. Here’s what a model would look like in that case:

// Note: this example isn't compatible with the final version of SerializableModel
public class FooModel : SerializableModel {
    [UsedImplicitly] private readonly Independent<bool> _active;
    public bool Active { ... }
}

If the base constructor instantiates the Independent<T>s for us, then _active doesn’t need an initializer anymore. But then ReSharper’s static analysis warns us that the field is never assigned, and we need to suppress the warning by adding an attribute to tell it that the field is used implicitly (i.e., via Reflection).

I thought I was being clever by removing all the “duplicate code” of Independent<T> instantiation, but ReSharper’s warning was my first hint that no, this really wasn’t technical elegance — it was a code smell. And after playing around with it for a couple of days, I had to agree.

It comes down to cognitive load. Our brains are only so big. When the code does what it says, you don’t have to waste as much of your brain capacity on technical minutiae; you have more brainpower available to actually solve the problems at hand. It’s not worth “removing duplication” if it means the code no longer does what it says. Any time you have to stop and think about where that field is being instantiated, it derails your train of thought.

As an added bonus, doing things this way also means that you can make an existing UpdateControls-based model into a serializable model just by changing its base class, and nothing else:

public class FooModel : SerializableModel {
    private readonly Independent<bool> _active = new Independent<bool>();
    public bool Active { ... }
}

I like the way this code reads. All the UpdateControls-based stuff looks exactly like you’d expect. The only unusual thing is that you’re descending from SerializableModel, and that’s almost as declarative as the [DataContract] attribute.

One big caution: if you pass a default value to the Independent<T> constructor, that default won’t be used for deserialization. This isn’t a problem in the usual case, where you’re about to load that property’s value from storage. But if you’re reading an older stream that doesn’t contain that property, you might have to figure something out.

Without further ado, here’s the SerializableModel class. This was written for WinRT, but should also work in .NET 4.5. (If you’re using an older version of .NET, the Reflection APIs are totally different.)

public class SerializableModel
{
    private IEnumerable<TypeInfo> Ancestry
    {
        get
        {
            for (var type = GetType(); type != null; type = type.GetTypeInfo().BaseType)
                yield return type.GetTypeInfo();
        }
    }

    private void CreateIndependentFields()
    {
        var independentFields =
            from type in Ancestry
            from field in type.DeclaredFields
            let fieldType = field.FieldType.GetTypeInfo()
            where fieldType.IsGenericType &&
                  fieldType.GetGenericTypeDefinition() == typeof(Independent<>)
            select field;

        foreach (var field in independentFields)
        {
            var instance = Activator.CreateInstance(field.FieldType);
            field.SetValue(this, instance);
        }
    }

    [OnDeserializing, UsedImplicitly]
    private void OnDeserializing(StreamingContext context)
    {
        CreateIndependentFields();
    }
}
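And here’s a hedged round-trip sketch to show the payoff. It assumes the FooModel above is marked [DataContract] with [DataMember] on Active, as described at the start of this post (and it needs System.IO and System.Runtime.Serialization):

var serializer = new DataContractSerializer(typeof(FooModel));
using (var stream = new MemoryStream()) {
    serializer.WriteObject(stream, new FooModel { Active = true });
    stream.Position = 0;
    // Deserialization skips the constructor, but OnDeserializing re-creates
    // the Independent<T> fields, so this no longer throws.
    var copy = (FooModel)serializer.ReadObject(stream);
    // copy.Active is now true.
}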

I consider this code to be more of a useful technique than actual copyrighted intellectual property, so feel free to use the above code in any context you wish (no credit necessary). But I’d appreciate hearing about it if you find this useful.
