Stop writing INotifyPropertyChanged, start using UpdateControls

If you use MVVM, you know all about INotifyPropertyChanged, and mostly you know that it’s a pain. Simple properties take a lot of code but are otherwise not too bad; once you start having derived properties, though, it’s hard to make sure you fire all the right change events at the right time (and even harder to keep them right as you refactor).

Can’t we do things the way Knockout.JS does, with a framework that automatically discovers the dependencies for you?

Why yes, yes we can. Last weekend I discovered UpdateControls.

Introduction to UpdateControls

UpdateControls is free and open-source (MIT license), is available via NuGet, and works pretty much everywhere: WinRT, WPF, Silverlight, Windows Phone 7 — even WinForms! Update: WP8 is supported too.

The idea is:

  • Don’t implement INotifyPropertyChanged on your viewmodel.
  • Wrap each of your independent variables inside an Independent<T> instance. Expose the underlying value through properties.
  • For computed properties, just write the code to compute the values.
  • Before you assign your top-level viewmodel to a DataContext, wrap it with a call to ForView.Wrap.

And that’s it. You don’t need to fire notifications manually; UpdateControls does it for you.

An example:

public class MyViewModel {
    private Independent<string> _firstName = new Independent<string>("");
    private Independent<string> _lastName = new Independent<string>("");
    private Independent<bool> _lastNameFirst = new Independent<bool>();

    // Expose independent values through simple wrapper properties:
    public string FirstName {
        get { return _firstName; }
        set { _firstName.Value = value; }
    }
    public string LastName {
        get { return _lastName; }
        set { _lastName.Value = value; }
    }
    public bool LastNameFirst {
        get { return _lastNameFirst; }
        set { _lastNameFirst.Value = value; }
    }

    // Dependent (computed) properties are just code:
    public string Name {
        get {
            if (LastNameFirst)
                return string.Format("{0}, {1}", LastName, FirstName);
            else
                return string.Format("{0} {1}", FirstName, LastName);
        }
    }
}

// And in the view:
DataContext = ForView.Wrap(new MyViewModel());

And like magic, all your properties and all your logic will start notifying the UI of changes. Between the ForView.Wrap wrapper at the top and the Independent&lt;T&gt; at the bottom, UpdateControls automatically tracks all the dependencies, and triggers change events for computed properties whenever their ingredients change. This works even if a property changes which ingredients it uses.

ForView.Wrap wraps your top-level viewmodel in an internal class called DependentObject&lt;T&gt;. The first time a databinding asks for a property, the DependentObject&lt;T&gt; switches into recording mode, and then invokes your viewmodel property. Until that property returns, the DependentObject&lt;T&gt; takes note of every time you read the value from any Independent&lt;T&gt; anywhere; and whenever you do, it hooks that Independent&lt;T&gt;'s change event.

Then the DependentObject<T> caches the property value and returns. Next time XAML asks for the value, it just returns it. (Its ingredients haven’t changed, so why not?) But as soon as any of the ingredients change, it clears its cache and fires PropertyChanged. There’s a lot of magic under the covers, but using it is dead simple. In principle.
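If it helps to see the shape of the trick, here's a toy sketch of the recording idea. This is my own illustration, not UpdateControls' actual internals — the names and structure are invented:

```csharp
using System;
using System.Collections.Generic;

// A toy "independent" cell: reading it registers the current recording's
// callback; writing it notifies everyone who read it during a recording.
class Cell<T> {
    private T _value;
    private readonly List<Action> _subscribers = new List<Action>();

    public T Value {
        get { Recorder.NoteRead(_subscribers); return _value; }
        set { _value = value; foreach (var s in _subscribers.ToArray()) s(); }
    }
}

static class Recorder {
    [ThreadStatic] private static Action _onChanged;

    // Run a computation; every Cell read inside it hooks up onChanged.
    public static T Record<T>(Func<T> compute, Action onChanged) {
        var previous = _onChanged;
        _onChanged = onChanged;
        try { return compute(); }
        finally { _onChanged = previous; }
    }

    public static void NoteRead(List<Action> subscribers) {
        if (_onChanged != null && !subscribers.Contains(_onChanged))
            subscribers.Add(_onChanged);
    }
}
```

A wrapper built on this could call `Recorder.Record(() => viewModel.Name, Invalidate)`, cache the result, and re-record the next time the view asks after an invalidation — which is, in spirit, what the caching described below does.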

As with any magic, there are some subtleties and gotchas, which I go into below. But first, some important notes about the documentation.

API drift, or, an UpdateControls documentation survival guide

UpdateControls has a fair bit of documentation, including several good videos. Unfortunately, the API has changed in a couple of significant ways over time, and a lot of the documentation still refers to the old APIs. So some places you see it one way, and some places you see it the other way, which is hella confusing.

The old APIs do still work (though not on every platform and not with every edition of Visual Studio), but there are easier — and more universal — ways to do things now.

Here are the two things you really need to know before you start reading the UpdateControls documentation:

Codegen vs. Independent<T>

Independent properties used to take more code than what I showed above. It was so tedious that the author even wrote a Visual Studio add-in to do the code generation for you. But now we have Independent&lt;T&gt;, which is simple enough that you no longer need codegen. The old way still works (as long as you don’t have Visual Studio Express, which doesn’t support add-ins), but Independent&lt;T&gt; does a better job of encapsulating the boilerplate code, so IMO it’s the way to go.

Unfortunately, most of the code samples and screencasts still use the old way. You’ll see (and hear, in the videos) lots of references to Ctrl+D G, which tells the VS add-in to do its code generation. And you’ll see a lot of code like this:

private string _firstName;

#region Independent properties
// Generated by Update Controls --------------------------------
private Independent _indFirstName = new Independent();

public string FirstName
{
    get { _indFirstName.OnGet(); return _firstName; }
    set { _indFirstName.OnSet(); _firstName = value; }
}
// End generated code --------------------------------
#endregion

My advice: Don’t install the MSI (in these days of NuGet, the MSI is really just there to install the VS add-in). Ignore the VS add-in entirely. Tune out whenever the videos talk about Ctrl+D G.

Just add the NuGet package to your project, and then use Independent<T>. The above code becomes this, which is simple enough to write and maintain by hand:

private Independent<string> _firstName = new Independent<string>();

public string FirstName {
    get { return _firstName; }
    set { _firstName.Value = value; }
}

By the way, that’s not a typo in the getter. You can return _firstName.Value if you like, but Independent<T> also has an implicit conversion to T, so as long as it’s not Independent<object> you can indeed just return _firstName;.
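Presumably that boils down to a conversion operator along these lines — my guess at the shape, not the library's verbatim source:

```csharp
// Simplified sketch of why "return _firstName;" compiles. The real class
// also does the OnGet/OnSet dependency bookkeeping shown earlier.
public class Independent<T> {
    public T Value { get; set; }

    // Lets an Independent<T> be returned directly from a property of type T.
    public static implicit operator T(Independent<T> independent) {
        return independent.Value;
    }
}
```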

{u:Update} vs. plain old bindings

Originally, UpdateControls had a XAML markup extension ({u:Update}) that you were supposed to use in place of {Binding}. It took care of turning on dependency-recording mode whenever values were bound to the UI, so that it knew how to watch for changes and update the UI when ingredients change later.

Unfortunately, not all the XAML platforms support markup extensions. Silverlight didn’t get them until v5. Windows Phone 7 doesn’t have them; not sure about 8. WinRT doesn’t support them. {u:Update} isn’t going to cut it for those platforms. I suspect that {u:Update} also wouldn’t play well with visual designers like Blend; they’re all about the {Binding}.

So UpdateControls added ForView.Wrap(), which you can use in combination with plain old {Binding}. If the objects you expose to your XAML and your DataContexts and your Bindings are wrapped for view, you get the same magic, but without needing markup extensions.

Any time you see {u:Update} in the documentation, just mentally substitute {Binding}, and make sure the viewmodel gets wrapped with ForView.Wrap before the view sees it. For example:

// Using the MyViewModel class from the first code sample, above:
var viewModel = new MyViewModel();
view.DataContext = ForView.Wrap(viewModel);

<!-- This will automatically update whenever ingredients (FirstName, LastName) change. -->
<TextBlock Text="{Binding Name}"/>

Gotchas

If you’re starting with UpdateControls on a new project, I would imagine it’s not too hard to follow the rules and have things work well. If you’re integrating it into an existing code base, you’re more likely to run into gotchas. Here are some of the problems I ran into when trying to refactor my project to use UpdateControls.

Don’t use INotifyPropertyChanged

If your viewmodel implements INotifyPropertyChanged, UpdateControls assumes you know what you’re doing, and ForView.Wrap won’t wrap that class; instead it presents your class as-is to the view. The consequence is that the computed-property notification magic won’t work for your INPC class or anything inside it. If you implement INotifyPropertyChanged, it’s all on you.

Which means that, if you’re trying to refactor from INPC to UpdateControls, you can’t do it one property at a time. To use Independent<T>, you can’t have INotifyPropertyChanged. You’ll have to change your entire viewmodel class at once: remove INotifyPropertyChanged, add Independent<T>s for everything, all in one fell swoop.

If you do viewmodel composition — where one viewmodel exposes other viewmodels as properties (think nested panels inside a larger screen) — then you probably want to start at the top level (the whole screen), and work toward the leaves (nested panels).

Corollary: Don’t use ObservableCollection<T>

I had viewmodels that exposed collections, and nothing was happening when the collection items’ properties changed. It turns out that ObservableCollection<T> implements INotifyPropertyChanged (makes sense when you think about it; you might want to bind to its Count) — so UpdateControls just exposed the collection instance directly to the UI, without attaching any of the dependency-recording logic: not for the collection itself, and not for the items inside it, either.

So don’t use ObservableCollection<T> with UpdateControls. Use IndependentList<T> instead (it’s the UpdateControls equivalent of an observable collection). Or just a plain old List<T> if the contents aren’t going to change. Or even a LINQ query (great for filtering — as long as the filter conditions boil down to Independent<T>, changing them will automatically re-run the LINQ query). But if you start using UpdateControls, you should get rid of all your references to ObservableCollection<T>.
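For example, here's the filtering pattern. This is a sketch that assumes IndependentList&lt;T&gt; is used like an ordinary IList&lt;T&gt; (which is how the docs present it); the class and property names are stand-ins:

```csharp
using System.Collections.Generic;
using System.Linq;

public class RosterViewModel {
    private readonly IndependentList<string> _names = new IndependentList<string>();
    private readonly Independent<string> _filter = new Independent<string>("");

    public string Filter {
        get { return _filter; }
        set { _filter.Value = value; }
    }

    // Both _names and _filter are read inside this getter, so the view
    // re-queries it whenever either the list or the filter text changes.
    public IEnumerable<string> FilteredNames {
        get { return _names.Where(n => n.StartsWith(Filter)).ToList(); }
    }
}
```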

ForView.Wrap and dependency properties

This one may be peculiar to my code, but I had some custom dependency properties whose value was a custom class. Once I started using UpdateControls, their value wasn’t that custom class anymore; the view layer (where dependency properties live) operates on the wrapper objects, not the actual viewmodels. Since my DependencyProperty.Register call specified that the property value should be of type MyCustomClass, but the actual instance was a DependentObject&lt;MyCustomClass&gt;, the bindings saw a type mismatch and never set my property.

To get around this, I had to change my DependencyProperty.Register call to specify object as the property’s value type. (And of course my property’s change event had to use ForView.Unwrap&lt;T&gt; instead of casting directly to the class type.) This probably won’t affect many people, but it’s worth noting since I burned some time trying to figure it out.
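Concretely, the workaround looked something like this — MyCustomClass is from my description above, and "Person"/MyControl are hypothetical names for illustration:

```csharp
public static readonly DependencyProperty PersonProperty =
    DependencyProperty.Register(
        "Person",
        typeof(object),    // was typeof(MyCustomClass); the binding actually
        typeof(MyControl), // delivers a DependentObject<MyCustomClass>
        new PropertyMetadata(null, OnPersonChanged));

private static void OnPersonChanged(
    DependencyObject d, DependencyPropertyChangedEventArgs e) {
    // Unwrap instead of casting: e.NewValue is the view-layer wrapper.
    MyCustomClass person = ForView.Unwrap<MyCustomClass>(e.NewValue);
    // ... respond to the new value ...
}
```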

Debugging wrapper objects

If you’re in the debugger, and you’re looking at a wrapper object (from ForView.Wrap), the tooltip will show the ToString() from the wrapped class — which is often the name of the wrapped class. It can be hard to even realize that the object you’re looking at is a wrapper, and not the viewmodel itself.

It’s not hard if you know how, though. If you expand the tooltip, wrappers will have a “HasErrors” property, and not much else. That should be easy to tell apart from your actual viewmodel (which will typically have lots of properties).

If you then want to drill down to see the properties of the wrapped instance, you can expand “Non-Public Members”, then “_wrappedObject”.

Debugger tooltip with UpdateControls

Integrating with Caliburn.Micro

I tried combining both UpdateControls and Caliburn.Micro in the same project, mostly because I was already using Caliburn.Micro’s IViewAware so my viewmodels could fire animations on the view and wait for them to complete. Here’s what I learned:

  • Don’t use Caliburn.Micro’s base viewmodel classes, because they implement INotifyPropertyChanged (see above).
  • Caliburn.Micro doesn’t know about UpdateControls’ wrapper classes. If you’re doing viewmodel-first (where Caliburn.Micro automatically discovers and instantiates the correct view type for you), and your DataContext is a DependentObject<PersonViewModel>, Caliburn.Micro’s conventions will probably look for a DependentObjectView or something — it doesn’t know that it’s supposed to look for a PersonView. This isn’t hard to deal with, though; Caliburn.Micro has good extensibility points for this sort of thing.
  • The IViewAware stuff won’t work out of the box either, because Caliburn.Micro takes the DataContext and says “does it implement IViewAware?” and of course it doesn’t, because it’s got the wrapper object, not your viewmodel. None of the lifecycle stuff (IGuardClose and the like) would work either, because it’s also based on interfaces. Again, you could hook CM’s extensibility points to make this work.
  • No idea how Caliburn.Micro’s {x:Name}-based binding conventions would play with UpdateControls. I didn’t get that far.

In the end, I decided not to use Caliburn.Micro for this app. I had a lot of custom UI, so the conventions weren’t much use to me; I was really only using CM for IViewAware, which would require some coding to integrate with UpdateControls. Easier to roll my own at that point.

But I’m confident you could use the two together. If anyone digs into this further and wants to share their code, drop me a line and I’ll either post it or link to you.

Conclusion

I’m pretty excited about UpdateControls. Yeah, I spent a lot of time here talking about the downsides and gotchas. But I wouldn’t have bothered if it didn’t show so much promise.

If you’ve used Knockout, or if you’re tired of keeping track of which calculated properties to fire PropertyChanged events for whenever something changes, you owe it to yourself to take a look at UpdateControls.

Here’s where to start:

  • First, look through the notes above about API drift. You’ll want to be able to mentally translate the out-of-date code samples as you see them.
  • Then head over to the UpdateControls documentation and read through all half-dozen or so pages; they’re short. Also take a quick look through the other sections in the sidebar (“Examples”, “Tips”, “Advanced Topics”).
  • Then check out the videos, which cover different material than the docs, and which dig a lot deeper. They’ll really give you an idea of what UpdateControls can do.

My current project: Stupid Quest

I haven’t blogged much in a while, and I need to. I’ve been writing a video game (aren’t I always?), and there’s stuff to blog about.

But for now, I’ll just post an early screenshot. (The real thing will run fullscreen, as a Windows 8 app.)

Screenshot of the Stupid Quest battle screen

Attacking isn’t working yet, so the monsters just taunt you.

Hmm. It looks a lot better with animation. Oh well.

User group presentation on AvalonEdit

This month, the Omaha .NET user group did lightning rounds. I did a presentation on AvalonEdit.

If anyone’s interested, here’s the video: 2012 December Omaha .NET User’s Group. My part goes from 1:02 to 1:20. The slides are pretty readable in the video, the code less so. The audio isn’t great, but it’s mostly intelligible.

I haven’t made my code available for download, but I keep intending to. If anyone’s interested in the code, drop me a note.

Also presenting were:

  • Volker Schulz, “Windows Azure – Build, Debug and Deploy” (0:00 to ~0:35)
  • Naveen Akode, “MSBuild – Basics and Usage” (~0:35 to ~1:02)
  • Brian Olson, “WebAPI and Moustache” (~1:20 to ~1:35)

XamlParseException in WinRT

I’m writing a video game (as always), this time as a Windows 8 app. Currently I’m still developing on a VM running the Release Preview of Windows 8 and Visual Studio 2012, but I just got a chance to build and run on a machine running the final RTM versions of both.

When I ran my app, it showed the splash screen for a second or so, then dumped me back to the Start screen. So I tried running under the debugger, and got a XamlParseException: “XAML parsing failed.”

<rant>

I’ve commented before that Microsoft obviously put their most junior programmers to work on writing the WinForms ToolStrip in .NET 2.0. (If you haven’t had to deal with the flaky design, and bugs, of ToolStrip, be glad.) They had to give ToolStrip to their junior devs, because all their experienced devs were already busy working on WPF.

With WinRT, it’s pretty clear that all their junior people were working on error handling. The number of things that fail with absolutely meaningless errors (e.g. “A COM error occurred”) is staggering. Makes me wonder what they’ve got the senior devs busy on, if it’s not WinRT.

</rant>

Anyway, it turns out the problem is a codegen bug if your assembly has a dot in its name. There’s a simple workaround: go into Project Properties > Application tab > Assembly name, and change it to something with no dots in it. (I’ll probably also rename the .csproj and the folder to match, but that’s not necessary.)

The knowledgebase article says this is a problem with XAML-based controls, but I haven’t seen a problem with our WPF4 code at work. So either it’s only WinRT (rather than all XAML as the KB article suggests), or it’s only a problem with .NET 4.5.

There’s allegedly a hotfix that will fix the codegen bug properly, but there’s no actual way to download the hotfix. Easiest thing is to rename the assemblies, or wait for the service pack.

System colors in WinRT/XAML

Windows apps have always adapted to the Windows color settings. But over time, Microsoft has made this progressively harder.

In WinForms, you had SystemColors and SystemBrushes, which were well-documented and easy to use. In WPF, you had a new flavor of SystemColors, which were fairly well-documented, but using them required a lengthy incantation involving DynamicResource. In HTML Metro (WinJS), you can expand out the references in Solution Explorer and read the theme CSS files, which is less discoverable than anything before, but is still a little like documentation.
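For reference, the WPF "lengthy incantation" looks like this — a DynamicResource keyed off the SystemColors resource keys:

```xml
<!-- WPF: bind to system brushes via DynamicResource, so the control
     updates if the user changes their Windows color settings. -->
<TextBlock Text="Hello"
           Foreground="{DynamicResource {x:Static SystemColors.ControlTextBrushKey}}"
           Background="{DynamicResource {x:Static SystemColors.ControlBrushKey}}" />
```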

But in XAML Metro (WinRT/XAML), the system colors are buried in theme resources that you can’t even see. If you want to use one of the system colors, you can find them in the visual designer’s Properties page, but you can’t find them in the documentation. This bugs me, since I’m a coder and tend not to bother much with the designer.

So I went ahead and wrote the documentation myself. I now have a metro.excastle.com subdomain, which contains documentation for the system brush resources in WinRT/XAML. I even show color swatches for both the Dark and Light themes. (I recommend you use a modern browser that understands CSS alpha transparency.)

(BTW, if anyone can suggest a better algorithm for deciding whether a semi-transparent color’s RGB code should be shown in black or white for best contrast, I’m all ears. I admit I didn’t spend a lot of time on that part.)
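For comparison, the standard approach I'd start from is: composite the RGBA swatch over the opaque theme background, then check WCAG relative luminance against the black/white break-even point. This is a general-purpose sketch, not what the site currently does:

```csharp
using System;

static class Contrast {
    // sRGB channel (0-255) to linear-light value.
    static double Linearize(double channel) {
        double c = channel / 255.0;
        return c <= 0.03928 ? c / 12.92 : Math.Pow((c + 0.055) / 1.055, 2.4);
    }

    // True if black text reads better over the given RGBA color
    // composited onto an opaque theme background.
    public static bool UseBlackText(byte r, byte g, byte b, byte a,
                                    byte bgR, byte bgG, byte bgB) {
        double alpha = a / 255.0;
        double cr = r * alpha + bgR * (1 - alpha);
        double cg = g * alpha + bgG * (1 - alpha);
        double cb = b * alpha + bgB * (1 - alpha);
        double luminance = 0.2126 * Linearize(cr)
                         + 0.7152 * Linearize(cg)
                         + 0.0722 * Linearize(cb);
        // ~0.179 is roughly where contrast against black and
        // contrast against white break even.
        return luminance > 0.179;
    }
}
```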

If you have suggestions for other content or links I could add to metro.excastle.com, leave a comment here or drop me a line.

Running Windows 8 Release Preview in a virtual machine

I just spent the whole weekend trying to get the Windows 8 Release Preview and Visual Studio 2012 Release Preview running in a VM on my personal laptop. I think I may have it working now, though I’ve been burned by enough false starts that I’m not willing to call it yet. But I’ve certainly learned enough to be worth sharing.

If you’re more interested in the destination than the journey, skip ahead to the tl;dr.

Reasons for a VM

I didn’t want to install Windows 8 as the primary OS on my laptop, for two main reasons:

  • I’m not impressed with its stability. The week I was at TechEd with the company laptop, the video driver crashed an average of twice a day. Sometimes it recovered, but sometimes (usually when waking from sleep) the machine bluescreened. (Granted, that’s just the video driver; but the Windows 7 video driver never had any problems.)
  • My laptop came with some software called Beats Audio that seriously improves the sound quality. Because it was shovelware, I don’t have a separate installer for it, so I would have no way to install it on Windows 8, and watching Netflix would suck a lot without the good audio.

The second reason was really the biggie. What can I say? I bought this laptop for audio first, development second.

So my choices were between a VM and dual-boot. And a VM would make it a whole lot easier to switch back to Windows 7 to watch Netflix, so I was leaning strongly toward a VM if I could make it work.

Requirement: Run fullscreen

My laptop’s screen is 1366×768. Windows 8 disables some features, like Snap, if your screen is any smaller than 1366×768. Since I want to develop Metro apps, and test them with things like snapped mode, I have to be able to run my VM fullscreen. I assumed this wouldn’t be a big deal.

What a fool I was…

First failure: VirtualBox

I started with a c|net article called Install Windows 8 virtually with free software. It covered both VirtualBox and VMWare Player. I’ve used VMWare before (Workstation though, not Player), so I figured I’d try something new, and try VirtualBox first.

The version of VirtualBox I tested is 4.1.18. The UI is pretty clean and usable. I created the virtual machine, installed Windows 8, installed the Guest Additions, switched to fullscreen mode, and…

…it didn’t go fullscreen.

My laptop’s screen is 1366×768. VirtualBox, in fullscreen mode, only ran the guest at 1360×768. So there was a three-pixel black bar at the left and right sides of the screen.

I tried using Snap just on the off-chance that Microsoft had left a few pixels leeway on that 1366×768 requirement. They had not.

Six pixels. So close, and yet so far.

I did some research, and found that VirtualBox used to have a bug where it would round the width down to the nearest multiple of 8. (Which doesn’t make the slightest bit of sense. It’s not like it’s a memory-alignment issue; each individual pixel is already 32 bits, so everything is already 32-bit aligned.)

But they claimed to have fixed this bug 19 months ago. So either they broke it again, or they lied about having fixed it.

I found a StackExchange question from someone else with the same problem, and they claimed to have fixed it by going into the VirtualBox settings and bumping up the guest’s video memory to 128MB. But my guest was already set to 128MB of video memory; that had been the default. I tried increasing it to 256MB, but no good: the screen was still that fatal 6 pixels too narrow.

It was about this time that I decided to post a question on SuperUser and then try plan B. Alas, the question didn’t get an answer until I knew enough to go back and answer it myself. But that’s later.

Second failure: VMWare Player

I soon lost count of how many hoops I had to jump through just to download VMWare Player. I had to register, but I couldn’t register because they already had a registration under my e-mail address, but they wouldn’t accept my best guess at a password, so I had to do the “Forgot Password” and reset my password. Then I went back to the download page, and they prompted me to log in again. (Earth to VMWare: I just entered my new password when I reset it!) So I logged in again, and then they forgot what it was I had wanted to download. I used Google to find the download page again, and then they’d let me select the software to download, but they asked me to accept the license agreement first, and when I did, they promptly forgot what I was downloading again.

Downloading VMWare Player was almost as maddening as it had been trying to get VirtualBox to work.

But I finally did manage to get it downloaded. The version of VMWare Player I used is 4.0.4. I can’t say I care for the puffy blue window, but it seemed functional enough. Followed the instructions to get Windows 8 installed (apparently VMWare is pretty touchy about Windows 8 unless you perform all the right song and dance), and got it up and running. Great!

I installed the VMWare Tools in the guest, installed the Visual Studio 2012 release preview, created a new Metro grid application, ran it, and…

…FAIL. The app started up fine, the first screen animated in — and the instant the animation stopped, all the images disappeared.

The “Grid application” project template comes with some sample images that it displays on the grid tiles. They’re all various shades of solid gray, but they’re still images. And apparently the video driver in VMWare Tools has a problem with the way Metro apps display images. So as soon as the screen is done animating in, all the tiles vanish, and all that’s left is the text.

Older versions of Windows had someplace in Display Options where you could ratchet down the video acceleration if things weren’t displaying properly, but either Windows 8 doesn’t have those options anymore, or they were disabled for the virtual video driver inside VMWare. I looked inside the VMWare Tools options, but there was nothing about graphics acceleration. And unlike VirtualBox, VMWare doesn’t seem to have video acceleration options in the host app. Nowhere could I find even a single knob to try to fix this problem.

And there’s no way in hell I can develop a Metro app in a VM that can’t show half the app’s UI.

I spent a while longer trying to make this work. I tried a refresh of Windows, in case it was some Windows setting that had gotten out of whack. Then reinstalled VS2012, re-created my project, re-ran it. And the images re-disappeared.

There’s not even anywhere to go from here. Dead-end on VMWare Player.

Third failure: Forcing Snap

During the course of my research, I had run across some articles that said there was a Registry setting that would force Windows 8 to always enable Snap, even if the screen resolution was too low.

I figured I could give that a go; as long as I planned carefully, the six-pixel error might not throw off my testing too much. So I went into my six-pixels-too-small VirtualBox VM, created the Registry setting, and tried snapping some apps. But the “snap” gutter never appeared: it just wanted to task switch, not snap.

Well, maybe I need to restart Windows in the VM before the setting will take effect. Tried it. Still no good.

It looks like the “force Snap” Registry hack no longer works in the Release Preview.

Eventual, roundabout, partial success: Remoting into VirtualBox

VirtualBox has a feature where it can run an RDP server, so you can remote into your virtual machine. So I figured, hey, maybe I can minimize the VirtualBox window and Remote Desktop into the VM. Remote Desktop doesn’t have any stupid multiple-of-8 restrictions, so it might work.

So I went into VirtualBox and enabled “Remote Display” (their name for their RDP server), and then tried to connect to it.

Here’s a little tip for UI designers: if a particular feature will only work if the user has installed an add-on product, then don’t show the UI options for configuring that feature unless the user has actually downloaded and installed the necessary add-on. I burned quite a bit of time trying to figure out why all my settings were accomplishing nothing. Turns out you have to download and install the VirtualBox Extension Pack before VirtualBox will actually support all the RDP options you’ve spent the last hour fruitlessly trying to configure.

If you want to try to use VirtualBox’s RDP support, don’t expect it to work in any kind of reasonable way; just save yourself time and read the manual. I finally got it set up, and managed to get the headless server started from the command line.

And Remote Desktop connected! And gave me a black screen.

So I minimized Remote Desktop and looked in the server’s console window, to see if it had reported any errors. It had not, but now I noticed a mysterious new window on my host machine, and it looked like it was showing the VM screen that was supposed to be getting sent over Remote Desktop. This mystery window’s title bar said “Chromium Render SPU”, so off to Google I went.

Turns out the problem here is VirtualBox’s 3D acceleration. If you have 3D acceleration enabled, then the host renders to the mystery window instead of to RDP. Very screwed up. But the message-board thread said all you have to do is disable 3D acceleration, and Remote Desktop will start working.

So I shut down my VM (trying to click in about the right place in Remote Desktop to hit the “power” icon, then minimizing everything so I could look at the mystery window and see if I’d clicked in the right place; rinse, repeat), then went into my VM’s settings and turned off 3D acceleration, connected Remote Desktop again, and ha, I got a login screen!

I maximized the Remote Desktop window, and it went fullscreen, but the remote desktop was only 1024×768, centered on my 1366×768 screen. So I went into “Screen resolution” on the remote machine, and lo and behold, for the first time in VirtualBox, 1366×768 was an option in the list! I selected it, and…

(angelic choir)

…it worked. My VM was running fullscreen at 1366×768.

Then, on some crazy whim, I shut down the VM, opened it the normal way through VirtualBox Manager, and it was still 1366×768. Something I had done while trying to enable Remote Desktop had actually fixed the original problem!

Coming down from the euphoria, and a conclusion

So somehow, in this long crazy string of events, I had done something that had finally kicked VirtualBox over the hump and into proper fullscreen support. I was a little nervous about touching anything at this point, lest it revert to its original broken state. But at the same time, I like to clean up after myself. I don’t really want an unauthenticated RDP server running on my laptop anytime I’m running Windows 8.

So I started backing out my changes. I turned off the RDP server. And as long as I’m running locally, I might as well turn 3D acceleration back on. Restart the VM…

Black bars.

1360×768.

And at this point I had an inkling of a suspicion. (And it was this suspicion alone that kept me from descending into madness.) I shut down the VM, turned 3D acceleration back off, and started the VM.

Worked like a charm. Fullscreen goodness.

What a letdown. I had burned an entire freaking weekend on what boiled down to a single buggy setting.

(What’s worse, I can’t even report the bug without giving Oracle three forms of ID and a note from my mother. Who the hell made the Marketing department the gatekeepers for the bug tracker?)

Anyway, the VM seems to be working so far. No blank screens, no mystery windows. It actually does run fullscreen, no black bars, the full 1366×768. I can snap apps like a champ. I installed VS2012 in the VM, created a new grid app, and ran it, and not a single image disappeared. It all seems fine, though of course only time will tell.

One freaking setting.

tl;dr

To run Windows 8 in a VM: Don’t bother with VMWare Player. Use VirtualBox and disable its 3D acceleration.

Editing CoffeeScript with Sublime Text 2

I’ve been playing with CoffeeScript and IcedCoffeeScript lately, and I’ve been looking for a decent editor for them. Not as easy as you might think.

I wanted an editor that Just Worked. And by “Just Worked”, I mean the basics, of course, like syntax highlighting, multiple editor tabs, and all that other stuff you take for granted. But I also mean that the editor should have these two features:

  • Code folding. If I’m writing Mocha tests (and man, CoffeeScript is awesome for writing Mocha tests!), I want to be able to collapse describes that I’m not actively working on at the moment.
  • Smart indentation. If I type -> and press Enter, I want the editor to indent the next line for me. (This isn’t just laziness — it’s also a hint to me when I forget to type ->, which I still do fairly often.)

It’s kind of depressing how much those two features narrow the field. Here’s all I wound up with:

  • SciTE. It doesn’t actually support smart indentation or CoffeeScript syntax highlighting, but I’ve been using it for years, it’s comfortable, and if you lie and tell it it’s editing Python, then you get all the code folding you want. Plus, it’s free. Far from great, but it’s at least adequate.
  • Cloud9 IDE. Its smart indentation is great, but when I looked at it last, it didn’t support code folding. (They claim that it does now, though they give no details on whether/how it works with CoffeeScript, and I haven’t bothered to sign up for an account to try it out.) Freemium, and also Web-based, which is intriguing; I may give this another look at some point.
  • JetBrains IDEA-based IDEs (IntelliJ IDEA, PyCharm, WebStorm, etc.). Commercial, $49 and up. I had already bought a license for PyCharm, and then they released 2.0 and added support for CoffeeScript, and I was eligible for a free upgrade. Code folding and smart indentation are there, all right, but so are a boatload of bugs — I could never use it without finding four or five bugs at a sitting. I wrote them all up; they fixed some of them in the next update, but introduced more. Repeat. And again. I finally got frustrated and gave up.
  • Sublime Text 2 (the subject of the rest of this post). Commercial, $59.

I was a little bit amused to find that, even though I’m only interested in Windows, everything on this short list is cross-platform: Windows, Linux, and Mac OS X.

ST2

I’ve been using the free trial of Sublime Text 2 for the past couple of months (it’s fully functional, it just shows the occasional nag screen when you save). And I’ve been loving it, and finally sprung for a license today. (Note: not a paid advertisement. Just a happy customer.)

ST2 (as the Sublime Text 2 users like to call it) seems to be in perpetual beta. New versions come out every now and then, but the documentation is far from complete and there are the occasional rough edges. But that’s easily made up for by the fact that it rocks. It’s fast, it’s beautiful, it’s crazy powerful.

Package Control

How powerful? ST2 has a built-in Python console for scripting the editor. Scripts and customizations can be bundled into add-on packages (it’s even mostly-compatible with TextMate bundles, i.e. packages designed for the Mac TextMate editor). Packages can add everything from syntax-highlighting rules for new languages, to entirely new editor commands, to who knows what else (Python scripting, remember?)

Here’s the really awesome thing: somebody wrote a free add-on package, called Package Control, that is a GUI package manager running right inside ST2. You install it by pasting a single line (okay, a long line) of code into the Python console, which downloads and installs Package Control for you. And forever after, installing a new package is a matter of opening Package Control via ST2’s menu, selecting “Install”, typing a keyword search into the incremental-search box, and selecting the package you want — which it then downloads and installs in a second or two, and the new package is instantly live and working. Crazy fantastic.

Fast, powerful

The editor is always snappy and responsive — it scrolls like the scrollbar is glued to the mouse. There’s a sidebar that shows a miniature version of your whole file; it updates fast and scrolls fast, and you can recognize the shape of your code even though it’s approximately a 1.5-point font. There are a few animations in the editor, but they never slow you down; they just make the editor feel even more responsive.

There’s also an optional sidebar on the left, that shows a list of open files (which duplicates the tab bar, but I like it). If you create a “project” (basically just a list of directories you want to work with, which can be as simple as “.” or can include filtering and multiple locations), then the sidebar also shows a directory tree, which is a feature I’ve always liked and seldom found.

(These sidebars are optional, of course. You can turn them off, along with the tab bar and the status bar and even the menu bar. You can run ST2 fullscreen with no window chrome at all, if you like — they even have a shortcut to get you there called “distraction-free mode”.)

Autocomplete is interesting. ST2 is not an IDE; it doesn’t parse your code and understand which classes have which methods. Instead, it keeps track of all the different words in the current source file, and when you start typing a word, it will try to complete from that list. It’s surprising how well this simple, non-context-sensitive completion can work, especially since it’s lightning-quick.

Keystrokes

ST2 has a way of making mundane features awesome. For example, if you press Ctrl+D (select word), it also highlights all the other usages of that same word in the file. And then if you press Ctrl+D again, it will add the next one to your selection (that’s right, ST2 supports disjoint selection). Keep pressing Ctrl+D until you’ve highlighted all the usages inside a method, and then start typing a replacement — and hello, Sync Edit.

(I keep trying to use Ctrl+W to select word, because that’s what it is in ReSharper at my day job. But of course Ctrl+W is Close File, just like in Web browsers and SciTE and most everything else except Visual Studio. At least, if I close a tab by mistake, I can use Ctrl+Shift+T to reopen it — hooray for features stolen from Web browsers!)

Then again, there are the keystrokes that are just different for the sake of being different. You can do rectangular selections, but you can’t use the usual Alt+Shift+arrow keys; you have to use Ctrl+Alt+Up/Down followed by Shift+Left/Right. And for some reason, Ctrl+Shift+S defaults to Save As, instead of the more familiar (and far more useful) Save All.

You can customize all the keystrokes (and a lot of other things, for that matter) by editing JSON config files. There are convenient menu items to open these config files for editing right there inside ST2.
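For example, fixing the Ctrl+Shift+S complaint above takes a one-line entry in the user key-bindings file (Preferences > Key Bindings – User). This sketch assumes the stock save_all command, which is what File > Save All invokes:

```json
[
    { "keys": ["ctrl+shift+s"], "command": "save_all" }
]
```

User key bindings take precedence over the defaults, so there’s no need to touch the default file.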

CoffeeScript and beyond

ST2 doesn’t support CoffeeScript out of the box, but there are free CoffeeScript and IcedCoffeeScript packages. (The IcedCoffeeScript package is a simple superset of the CoffeeScript one, so if you want both languages, it’s enough to just install the Iced package.) The easiest way to install them, of course, is via Package Control. Code folding works beautifully, though ST2 hides the folding arrows by default (to reduce screen clutter) — if you just move the mouse over the gutter, the folding arrows will appear. The smart indentation works great too. With the Iced package, ST2 is a perfect fit for what I set out to find.

I’ve also installed the Task package (via Package Control, naturally), which gives you syntax highlighting for a minimalist “task” pseudo-language that lets you use ST2 as a to-do list (but without all the stupid limitations that most task-list apps want to give you, like “must be a single flat list” — here you can easily create hierarchies, multiple lists, whatever). Task highlights lines starting with a hyphen (“-”) as “undone tasks” and lines starting with a check mark (“✓”) as “done tasks”, and it adds an editor command to toggle a line between - and ✓. This add-on isn’t very polished, but it’s still pretty nice to have. Unfortunately it’s tightly coupled to the color scheme you’ve selected in ST2; by default, with the Monokai scheme I prefer, everything shows up as plain white. You have to edit the package to make it work with your theme (instructions included in the readme). (For reference: with Monokai, it works well to set Task Title to markup.inserted, Completed Tasks to comment, Action Items to support.function, and the rest to keyword.)
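Going by that description, a task file is just plain text; something like this (the wording and hierarchy here are my own invention):

```
Blog chores:
  ✓ install the Task package
  - fix the color-scheme scopes
  - write up the ST2 review
```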

If you use the Task package on Windows, you’ll run into the problem that the Consolas font (which ST2 uses by default on Windows) doesn’t support the “CHECK MARK” Unicode character. This is easily fixed: download and install the “DejaVu Sans Mono” font (yes, it distinguishes between O and 0, 1 and I and l; and it’s also useful for seeing Mocha’s output properly on Windows) and then add "font_face": "DejaVu Sans Mono" to your user settings file (Preferences > Settings – User; the file is in JSON format).
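For reference, a minimal user settings file with just that change would look like this (ST2’s settings files are plain JSON; any other customizations you’ve made would sit alongside it):

```json
{
    "font_face": "DejaVu Sans Mono"
}
```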

Where IDEA could win

The IDEA-based editors (PyCharm, WebStorm, etc.) have a couple of things going for them compared to ST2: IDEA is much more aggressive about highlighting syntax errors and warnings as you type, and it has a feature that auto-formats your code (prettyprinting), either on demand, or as you type. Both would be awesome features if they worked reliably.

Unfortunately, both of those features have been major bug farms in the IDEA IDEs. They highlight “errors” in code that’s perfectly valid, they “reformat” code in ways that cause it to become non-compilable or that subtly change its meaning, etc. And of course, if CoffeeScript adds a new feature (I’ll bet do (x = 1, y = 2) ->, just added in CoffeeScript 1.3.1, is going to give them problems), then you have to submit an enhancement request and wait for them to add the feature and ship a new point release — which means you’re stuck with false-positive error highlighting for a good long while. ST2 packages are typically open-source, so they can be a lot more agile.

I’m sure IDEA will be pretty awesome for CoffeeScript if JetBrains ever use it to dogfood some real CoffeeScript development of their own, and actually shake out some of the bugs before they ship (and add some actual regression tests, and everything else it would take to make it stable). But until that happens, if you’re easily frustrated, then don’t use IDEA for CoffeeScript.

So how about that extensibility?

I haven’t actually done much yet to extend ST2 on my own. The big hurdle here is that, by and large, you extend ST2 by writing TextMate packages, and I haven’t been able to find any documentation on how to write a TextMate package, beyond “open TextMate, tell it to create a new package, and use the GUI to specify all the options”. Since I don’t have a Mac, and since TextMate is Mac-only, that doesn’t help me much.

But hey. Even just using extensions other people have written, ST2 is just plain fun to use. I think it’s well worth the modest price.