Joe White’s Blog

Life, .NET, and Cats


Archive for the ‘.NET’ Category

.NET and CoffeeScript: comparing Jurassic, Jint, and IronJS

Sunday, October 16th, 2011

Recently I went looking for ways to write a .NET desktop app that could compile CoffeeScript to JavaScript. There are already several NuGet packages for exactly this, but most of them look like they’re tightly bound to ASP.NET. So I struck out on my own, following the general steps in “CoffeeDemo – A Simple Demo of IronJS, using CoffeeScript”. The main CoffeeScript compiler is written in CoffeeScript, but they also provide one written in JavaScript, so my basic outline was:

  1. Instantiate a JavaScript engine.
  2. Tell the JavaScript engine to run coffee-script.js. This creates the CoffeeScript object and its compile method.
  3. Tell the JavaScript engine to call CoffeeScript.compile, and pass a string containing the CoffeeScript code to compile.

But my first attempt ran slower than I’d hoped for. (Well, coffee-script.js is 163 KB, and that’s the minified version! So I guess it does have a lot to do.)

I decided to find out whether I could do better: I tried several different JavaScript-in-.NET implementations, to see which one would perform the best. I tested Jurassic, Jint, and IronJS. My results are below, along with the C# code in case anyone is interested in seeing the minor differences between the APIs.

In all three cases, the coffeeCompiler parameter contains the 1.1.2 version of coffee-script.js, as downloaded from GitHub; and the input parameter contains a one-line CoffeeScript script:

alert "Hello world!"

Jurassic

Jurassic dynamically compiles JavaScript to CLR code at runtime, so you take a performance hit the first time you run some JS, but it should be pretty fast after that. Jurassic is available via NuGet.

private void CompileCoffeeScriptUsingJurassic(
    string coffeeCompiler, string input)
{
    Console.WriteLine("Jurassic");
    var stopwatch = Stopwatch.StartNew();
    Console.WriteLine(stopwatch.Elapsed + ": Creating engine");
    var engine = new ScriptEngine();
    Console.WriteLine(stopwatch.Elapsed + ": Parsing coffee-script.js");
    engine.Execute(coffeeCompiler);
    Console.WriteLine(stopwatch.Elapsed + ": Adding compile wrapper");
    engine.Execute("var compile = function (src) " +
        "{ return CoffeeScript.compile(src, { bare: true }); };");
    Console.WriteLine(stopwatch.Elapsed + ": Compiling CoffeeScript input");
    var output = engine.CallGlobalFunction("compile", input);
    Console.WriteLine(stopwatch.Elapsed + ": Done");
    Console.WriteLine("Output:");
    Console.WriteLine(output);
    Console.WriteLine();
}

Jint

Jint is a JavaScript interpreter. It’s not available through NuGet yet, but it’s a single DLL.

private void CompileCoffeeScriptUsingJint(
    string coffeeCompiler, string input)
{
    Console.WriteLine("Jint");
    var stopwatch = Stopwatch.StartNew();
    Console.WriteLine(stopwatch.Elapsed + ": Creating engine");
    var engine = new JintEngine();
    Console.WriteLine(stopwatch.Elapsed + ": Parsing coffee-script.js");
    engine.Run(coffeeCompiler);
    Console.WriteLine(stopwatch.Elapsed + ": Adding compile wrapper");
    engine.Run("var compile = function (src) " +
        "{ return CoffeeScript.compile(src, { bare: true }); };");
    Console.WriteLine(stopwatch.Elapsed + ": Compiling CoffeeScript input");
    object output = null;
    try
    {
        output = engine.CallFunction("compile", input);
    }
    catch (JsException ex)
    {
        Console.WriteLine("ERROR: " + ex.Value);
    }
    Console.WriteLine(stopwatch.Elapsed + ": Done");
    Console.WriteLine("Output:");
    Console.WriteLine(output);
    Console.WriteLine();
}

IronJS

IronJS is based on the DLR, so it seemed like it might strike a great balance between upfront compile time and runtime — after all, that’s what the DLR is all about.

IronJS is available through NuGet — there’s both an IronJS.Core (standalone) and an IronJS (depends on IronJS.Core), with nothing to explain the difference between the two; but at least for this code, you only need IronJS.Core.

private void CompileCoffeeScriptUsingIronJs(
    string coffeeCompiler, string input)
{
    Console.WriteLine("IronJS");
    var stopwatch = Stopwatch.StartNew();
    Console.WriteLine(stopwatch.Elapsed + ": Creating engine");
    var engine = new CSharp.Context();
    Console.WriteLine(stopwatch.Elapsed + ": Parsing coffee-script.js");
    engine.Execute(coffeeCompiler);
    Console.WriteLine(stopwatch.Elapsed + ": Adding compile wrapper");
    engine.Execute("var compile = function (src) " +
        "{ return CoffeeScript.compile(src, { bare: true }); };");
    Console.WriteLine(stopwatch.Elapsed + ": Fetching compile wrapper");
    var compile = engine.GetGlobalAs<FunctionObject>("compile");
    Console.WriteLine(stopwatch.Elapsed + ": Compiling CoffeeScript input");
    var result = compile.Call(engine.Globals, input);
    var output = IronJS.TypeConverter.ToString(result);
    Console.WriteLine(stopwatch.Elapsed + ": Done");
    Console.WriteLine("Output:");
    Console.WriteLine(output);
    Console.WriteLine();
}

The results

Jurassic
00:00:00.0000063: Creating engine
00:00:00.1540545: Parsing coffee-script.js
00:00:03.4346969: Adding compile wrapper
00:00:03.4408860: Compiling CoffeeScript input
00:00:05.3466983: Done
Output:
alert("Hello world!");

Jint
00:00:00.0000019: Creating engine
00:00:00.2617049: Parsing coffee-script.js
00:00:02.5270733: Adding compile wrapper
00:00:02.5295317: Compiling CoffeeScript input
ERROR: Parse error on line 2: Unexpected 'STRING'
00:00:02.5832895: Done
Output:


IronJS
00:00:00.0000019: Creating engine
00:00:00.2282421: Parsing coffee-script.js
00:00:55.5590620: Adding compile wrapper
00:00:55.5629230: Fetching compile wrapper
00:00:55.5642908: Compiling CoffeeScript input
00:01:17.8580574: Done
Output:
alert("Hello world!");

Jint wasn’t up to the task — it got a weird error when trying to call CoffeeScript.compile. I played with this a bit, and found that it would work if I passed an empty string, but it would give errors for any non-blank CoffeeScript input: sometimes a string error like the one above, sometimes a weird error about multiline comments. It’s too bad, because Jint shows a lot of promise, speed-wise. I don’t know what the problem is; the error didn’t give me much to go on, and I’m not terribly motivated to pursue it when the other libraries work. (I did write it up in their bugtracker, though — it’s issue #6928.)

I was surprised that IronJS was so much slower than the others — about 20x slower than Jint at running coffee-script.js, and about 10x slower than Jurassic. This is especially puzzling since the article I based my code on mentions a “compilation lag”. To me, 55 seconds is hardly “lag”!

The winner here (and coincidentally the first one I tried) is Jurassic — so the performance that disappointed me is also the best I’m likely to get. On my laptop, you take about a 3.5-second penalty to compile coffee-script.js, and then another two seconds to run CoffeeScript.compile on a one-line script.

I did find that subsequent calls to CoffeeScript.compile were nearly instantaneous with all three libraries. So Jurassic’s 2 seconds is probably due to the JIT compiler running for the first time on that runtime-generated code. Not sure what to make of the 20 seconds for IronJS; is the DLR just that big?
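Since the expensive parts are one-time costs, the practical takeaway (for Jurassic, anyway) is to create the engine once and keep it around. Here’s a minimal sketch of what I mean, reusing the Jurassic API from the code above; the wrapper class is mine, not something from the library:

using System;
using Jurassic;

public class CachedCoffeeScriptCompiler
{
    private readonly ScriptEngine engine = new ScriptEngine();

    public CachedCoffeeScriptCompiler(string coffeeCompiler)
    {
        // Pay the coffee-script.js parse/compile cost once, up front.
        engine.Execute(coffeeCompiler);
        engine.Execute("var compile = function (src) " +
            "{ return CoffeeScript.compile(src, { bare: true }); };");
    }

    public string Compile(string coffeeScriptSource)
    {
        // After the first call, this is nearly instantaneous.
        return Convert.ToString(
            engine.CallGlobalFunction("compile", coffeeScriptSource));
    }
}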

If Reflector needed money so badly, why didn’t they ask?

Thursday, February 3rd, 2011

Your copy of Reflector will self-destruct at the end of February… unless you pay the ransom.

This is depressing, not because of the money — I could easily pay $35 for a kick-ass tool like Reflector — but because of the betrayal of trust.

It’s not unlike the way Borland repeatedly betrayed their users’ trust with crap like “Inprise” and “Application Lifecycle Management”. (What’s left of Borland just got bought out. Good riddance.) Or the way Embarcadero priced me out of the Delphi market a couple of years ago. (They’ve since decided that was a bad move and started selling a starter edition. Some people learn from their mistakes, though sometimes too late.)

When RedGate bought Reflector, they said that they would continue to offer a free version. Now they admit that they lied. Well, actually, they don’t admit anything; they just repeatedly say that they never “promised” a free version. I guess that interview, and the “Red Gate will continue to offer the tool for free to the community” soundbite, were imaginary.

But wait! You can buy a version that will continue to work forever! Honest! They promise! Well, no, actually. If you search their open letter for the word “promise”, you’ll find it conspicuously absent.

I’ve been reading the reactions on StackOverflow, and even finally got a Twitter account so I could follow the news there. Some people say “suck it up, it’s worth it”. More people say “that’s not the point, RedGate has proven they can’t be trusted”. I lean toward the latter camp.

Then, across the Twitter feed comes a link to a YouTube interview with Simon Galbraith, one of the co-founders of RedGate, about the decision to charge for Reflector. Apart from again going on about the word “promise”, he explains something that should have been at the forefront of their announcement: keeping up with new frameworks and new platforms is a big deal. They want to make Reflector an even more awesome tool, but people haven’t been paying for Reflector Pro (our department actually did buy it, BTW), so they can’t bankroll what they want to do.

But instead of actually talking to the community about this, they kept it quiet. They “agonized” over it for about six months, and then decided that the right move was to break their word, antagonize the community, and try to extort the money, at the cost of their professional reputation.

They forgot two things that should have made this easy.

One, they forgot to ask. Wikipedia isn’t afraid to ask for donations, and they get them. NaNoWriMo isn’t afraid to ask for donations, and they get them. Granted, there are plenty of open-source projects with “Donate” buttons that probably never see a dime. But for a tool like Reflector, if they had said, “Hey, we want to do X and Y and Z to make this great tool even better, but we can’t do it without your help. We need to raise this many dollars to make it happen. Who’s with us?” I think people would have responded.

And two, empathy matters. Putting up a cold, faceless, impersonal warning icon that says “Screw you, we know we said we wouldn’t do this but we’re sticking it to you anyway” is not going to earn you many friends.

Compare that to: “We need your help. We want to keep up with new platforms and features, and we want to do more than keep up: we want to make an awesome tool even more awesome. But we can’t do it without you. We know you’ve always had Reflector for free — and we promise that won’t change — but if this tool is going to survive the changes Microsoft has in store and still get even better than it’s ever been, if this tool that’s satisfied your curiosity, and taught you loads, and, yes, saved your butt time and time again, is going to stay relevant, we need you. Think back over the questions you’ve answered with Reflector, the number of people you’ve recommended Reflector to, the times you’ve sworn you couldn’t do your job without Reflector, and then just tell us this: Can we count on your help?”

Sigh.

Okay, that’s enough of that. Off to check out Monoflector.

Refactoring with MVVM is easy!

Friday, July 30th, 2010

I’ve built up a kind of intellectual appreciation for some of the things MVVM gives you, but today I had a real “wow!” moment.

I’ve been working on a WPF UserControl, and it was getting kind of big and cumbersome. I wanted to extract some pieces from it into smaller UserControls to make it more manageable (and to make it easier to make some changes to the high-level layout).

So I created a new UserControl, moved my XAML into it, and referenced it from the original spot.

And that was it. I ran it, and it worked. That simple. No worries about moving dozens of code-behind methods onto the new control. No messing with method and field visibilities, and figuring out which objects the new control needed to have references to so it could do its work. No re-hooking event handlers.

Okay, it wasn’t quite cut-and-paste — there was some fixup to be done. The new UserControl needed some xmlns: attributes added. And I wanted the attached layout properties (Grid.Row, Grid.Column) to stay in the original file, not move into the new one (they’re part of the parent layout, not intrinsic to the child UI). So it took maybe a minute or so.

But it was nothing like the splitting headache that is extracting a UserControl in WinForms.

And then I extracted another UserControl. And I ran. And it just worked.

Wow.

Just, wow.

But the downside is, now I’ve got this overwhelming temptation to rewrite our million-line codebase in WPF…

MVVM and DialogResult with no code-behind

Sunday, July 25th, 2010

I like the Model-View-ViewModel pattern in WPF, and the way it helps get code out of the UI and into a place you can test it. But every now and then you run into a weird limitation — something you can’t do out of the box. One such example is closing a dialog box.

WPF’s Button doesn’t have a DialogResult property like buttons did in Delphi and WinForms. Instead, the codebehind for your OK button has to manually set the Window’s DialogResult property to true. This makes sense in principle — it lets you validate the user input before you close — but it makes it hard to use “pure” MVVM with no code-behind. I don’t actually give a hoot about blendability (I still write all my own XAML), but since I’m still learning WPF and MVVM, I take it as a challenge to find pure-MVVM solutions to problems, just as a learning exercise.

The obvious (wrong) solution

The obvious solution would be to just do this:

<Window ...
        DialogResult="{Binding DialogResult}">

Then make your ViewModel implement INotifyPropertyChanged in the usual way, and DialogResult gets pushed up to the view the same way as everything else. Right?

Unfortunately, DialogResult isn’t a dependency property (good grief, why not?), so the above code gives you a runtime error when you try to create the window:

A ‘Binding’ cannot be set on the ‘DialogResult’ property of type ‘TestWindow’. A ‘Binding’ can only be set on a DependencyProperty of a DependencyObject.

Back to the drawing board.

Others’ solutions

Some Googling found a StackOverflow post, “how should the ViewModel close the form?”, with an accepted answer (with 5 downvotes) of “give up; you can’t use MVVM for dialog boxes”. But I wasn’t quite ready to throw in the towel, so I kept reading.

Another answer on the same question — which had 0 upvotes at the time I read it, despite perfectly answering the question — pointed to a blog post by Adam Mills: “Window.Close() from XAML”. Adam’s solution uses an attached behavior. I’m learning to appreciate the attached-behavior pattern; you create an attached property, but then give it side-effects. It’s a good way to get code out of the codebehind, and it forces you to make it reusable at the same time.

But I’m not crazy about the details of Adam’s solution, because it requires you to create a style, hook up triggers… a lot of mess. His post doesn’t actually have a complete code sample, so I’m not even sure how you hook the style into your window, though I’m sure I could puzzle it out eventually. And even his incomplete example is five lines of XAML. It’d probably be up to 7 or 9 by the time you actually got it fully wired up, and that’s 7 or 9 lines that you’d have to repeat for every dialog box you write.

Shouldn’t it be simpler? Shouldn’t it be almost as simple as the databinding syntax would have been, if the WPF team had gotten it right and made DialogResult a dependency property?

The one-line* attached behavior

* Okay, yes, it’s two lines if you count the XML namespace.

So I rolled my own attached behavior that does make it almost that simple. Here’s how you use it:

<Window ...
        xmlns:xc="clr-namespace:ExCastle.Wpf"
        xc:DialogCloser.DialogResult="{Binding DialogResult}">

Your ViewModel should expose a property of type bool? (Nullable<bool>), and should implement INotifyPropertyChanged so it can tell the view when its value has changed.
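For reference, here’s a minimal sketch of that ViewModel side (the class name and the Ok method are just illustrative):

using System.ComponentModel;

public class MyDialogViewModel : INotifyPropertyChanged
{
    private bool? dialogResult;

    // Bound to xc:DialogCloser.DialogResult in the view. Setting it to
    // true or false closes the dialog via the attached behavior.
    public bool? DialogResult
    {
        get { return dialogResult; }
        set
        {
            if (dialogResult == value)
                return;
            dialogResult = value;
            OnPropertyChanged("DialogResult");
        }
    }

    // Call this from, say, the OK button's command, after validation passes.
    public void Ok()
    {
        DialogResult = true;
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}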

Here’s the code for DialogCloser:

using System.Windows;
 
namespace ExCastle.Wpf
{
    public static class DialogCloser
    {
        public static readonly DependencyProperty DialogResultProperty =
            DependencyProperty.RegisterAttached(
                "DialogResult",
                typeof(bool?),
                typeof(DialogCloser),
                new PropertyMetadata(DialogResultChanged));
 
        private static void DialogResultChanged(
            DependencyObject d,
            DependencyPropertyChangedEventArgs e)
        {
            var window = d as Window;
            if (window != null)
                window.DialogResult = e.NewValue as bool?;
        }
        public static void SetDialogResult(Window target, bool? value)
        {
            target.SetValue(DialogResultProperty, value);
        }
    }
}

I’ve posted this as an answer on the StackOverflow question, so if you think it’s a good solution, feel free to vote it up so that others can find it more easily.

Using a worker AppDomain to register a COM assembly

Friday, December 4th, 2009

I’ve been coding in .NET since 2002. Today I finally had a reason to use an AppDomain. I had a task that needed to run in a separate AppDomain and then return, so I’m thinking of it as a “worker AppDomain”.

We have a code path that needs to programmatically register one of our assemblies as a COM library (shudder). And yes, there are good reasons why it’s not enough for us to do this at install time. But that’s okay, because the code is pretty simple:

public static class MyRegistrar
{
    public static void Register()
    {
        var assembly = LoadMyComAssembly();
        var registrationServices = new RegistrationServices();
        registrationServices.RegisterAssembly(assembly,
            AssemblyRegistrationFlags.SetCodeBase);
    }
}

(I also could have shelled out to regasm.exe with the /codebase option, and gotten the same result. But that would have required hard-coding the path to the .NET Framework binaries, which is even worse than COM.)

The downside is that, once my process loads the COM assembly, that DLL file is now locked on disk until my process exits. This turned out to be problematic — it’s actually a Windows service that’s running the above code, and I was having trouble building the COM assembly because the service had it locked!

So I had to make sure the assembly got unloaded after we ran the above code. That means either a separate process, or a separate AppDomain. And if I used an AppDomain, I wouldn’t have to add yet another project to our solution. So I dove right in, and after several false starts, got something that worked. Here’s my code to load and regasm an assembly, without keeping the file locked thereafter:

public class MyRegistrar : MarshalByRefObject
{
    public static void Register()
    {
        var domain = AppDomain.CreateDomain("Registrar", null,
            AppDomain.CurrentDomain.BaseDirectory,
            AppDomain.CurrentDomain.BaseDirectory, false);
        try
        {
            var me = typeof(MyRegistrar);
            var assemblyName = me.Assembly.FullName;
            domain.Load(assemblyName);
            var registrar = (MyRegistrar) domain.CreateInstanceAndUnwrap(
                assemblyName, me.FullName);
            registrar.RegisterAssembly();
        }
        finally
        {
            AppDomain.Unload(domain);
        }
    }
    private void RegisterAssembly()
    {
        var assembly = LoadMyComAssembly();
        var registrationServices = new RegistrationServices();
        registrationServices.RegisterAssembly(assembly,
            AssemblyRegistrationFlags.SetCodeBase);
    }
}

To run code inside an AppDomain, I need an object that lives inside the new domain, but that I can call into from outside it (i.e., from the primary AppDomain); hence the change from static class to MarshalByRefObject descendant, and the move of the actual registration code to an instance method. Then I can just create a new AppDomain, create an instance of my class inside that domain, call the object’s instance method (which then executes inside the new domain), and then unload the AppDomain so that it unloads the assembly. Most of the gyrations are there because there isn’t a generic version of AppDomain.CreateInstanceAndUnwrap — if there were, the inside of that try..finally would be all of two lines long, if that.
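For the curious, here’s the sort of generic helper I have in mind. It’s purely hypothetical (nothing like it ships in the framework), but with an extension method like this, the body of that try..finally would shrink to creating the registrar and calling RegisterAssembly:

using System;

// Hypothetical extension method, not part of the BCL: the generic
// CreateInstanceAndUnwrap I wish existed.
public static class AppDomainExtensions
{
    public static T CreateInstanceAndUnwrap<T>(this AppDomain domain)
    {
        // CreateInstanceAndUnwrap loads the assembly into the target domain
        // and hands back a proxy, just like the non-generic calls above.
        var type = typeof(T);
        return (T) domain.CreateInstanceAndUnwrap(
            type.Assembly.FullName, type.FullName);
    }
}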

Actually, better yet would be if RegistrationServices could take the filename of an assembly, rather than only taking a reference to an already-loaded-and-locked Assembly object. Then it would be a drop-in replacement for regasm. Still, the above code isn’t too complicated, and it seems to work nicely.

NUnit and Silverlight

Friday, October 9th, 2009

Unit testing in Silverlight is a persnickety business. The NUnit.Framework binary is built for full .NET, so you can’t easily use it to test Silverlight assemblies. I tried a few different things, but kept running into walls.

Fortunately, smarter people have already figured it all out. Jamie Cansdale made a Silverlight NUnit project template that gets you started right. It’s intended for TestDriven.NET, but it works great with ReSharper’s test runner too. Just download and open his template, and it’ll add itself to Visual Studio. Then the next time you do New Project, there’s an extra “Silverlight NUnit Project” option available under the Visual C# > Silverlight project type. Very cool.

However, the nunit.framework assembly in Jamie’s template is from some unidentified, but old, version of NUnit. There’s no version info in the DLL, but I know it’s gotta be 2.4.x or earlier, because its Is class (from the fluent assertions — Assert.That(2 + 2, Is.EqualTo(4));) is in a different namespace, whereas I know that 2.5 moved it into the main NUnit.Framework namespace.
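For the record, the 2.5-style syntax I mean looks like this; in 2.5, the one NUnit.Framework namespace covers both Assert and the constraint classes (trivial example, obviously):

using NUnit.Framework;

[TestFixture]
public class FluentAssertionExamples
{
    [Test]
    public void TwoPlusTwoIsFour()
    {
        // In 2.4.x, Is lived in a separate SyntaxHelpers namespace instead.
        Assert.That(2 + 2, Is.EqualTo(4));
    }
}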

Since I use the fluent assertions all the time, and since I just don’t want to use an old version, I went hunting again, and found Wes McClure’s NUnit.Framework 2.5.1 for Silverlight 3. It’s only a little old — right now the latest version is 2.5.2 — and his binaries are working out quite nicely so far.

So I use Jamie’s template to create a new project, which includes a lib directory with the old version of nunit.framework.dll; then I grab Wes’s nunit.framework.dll and drop it into the lib directory, replacing the older version. And I’m good to go.

Now, back to those fiddly trig calcs… (See, there was a reason I wanted to add a test project!)

Update, Oct 10 7:30am… Intellisense works great with Wes’s assembly. Building and running are a different story. Much unexplainable behavior from Visual Studio. Short version: I couldn’t get Wes’s assembly to work with the ReSharper test runner. But Jamie’s template is working fine so far.

Upcoming Omaha developer-ish conferences: BarCamp and HDC

Friday, September 18th, 2009

BarCamp Omaha is October 2-3 (Friday evening to Saturday). I’ve never been to a BarCamp, but it promises to be intriguing — the session schedule is planned on a whiteboard Saturday morning, and anyone who wants to run a session, can. It’s not just a tech conference; they say the major tracks will be Tech, Creative, and Entrepreneurship.

I understand that BarCamps are usually free, but this one costs $5. But (a) that’s cheap (heck, I thought the HDC was cheap at $200 — but then again, I’m not paying for the HDC out of my own pocket) and (b) you get the full conference experience for that $5: free T-shirt (great, another one to go into the drawer), free breakfast, free lunch, and free pop and snacks all day. Free food is the primary reason for going to a conference, so it should feel just like home.

I’m toying with the idea of speaking at BarCamp (haven’t really decided yet). Their FAQ says the time slots are only 30 minutes, and I’m wondering if I should take a stab at a 31 Minutes of ReSharper. It’d take some serious editing, mind you, given that my original material was 31 days.

The Heartland Developers Conference (HDC), a Microsoft-themed conference here in Omaha, is October 15-16. I guess I don’t really need to hype it, since it’s 100% sold out this year (I wonder how long until they need to start spilling over into the first floor of the Qwest Center?). But I’m looking forward to it just like every year, because the second-day breakfast always has all the free bacon you can eat.

Oh, and they always have some pretty awesome sessions, too. A couple years ago they had Scott Guthrie from Microsoft do one of the keynotes, if that gives you any idea (at the time, he was Microsoft’s General Manager in charge of the CLR, ASP.NET, Silverlight, WPF, and IIS, among other things). They get good people and a lot of interesting content. I can pretty much register without even looking at the session list, and know that every timeslot will have a session that’s well worth my while — and even with the economy the way it is, my boss was happy to pay all our conference registrations. There’s benefits to making something cheap. (Delphi sales department, are you listening?)

Micro-rant: Ninject and IKernel

Tuesday, July 7th, 2009

Ninject looks cool. So does Autofac, but Ninject has automatic self-binding.

But I hate how Ninject calls its container “IKernel”.

This is the one part of Ninject that you are guaranteed to use. (Well, I guess you need modules too — not that that name is much better.) IKernel is the single most visible part of the Ninject API. And its name is absolutely wretched.

It’s named for its implementation, with no regard to its usage. In real life, it isn’t a kernel; it belongs to the kernel module in Ninject’s modular implementation. Don’t get me wrong, I appreciate that it’s modular. But don’t make me, as a dev who just wants to use a DI framework, suffer through that implementation detail.

Nobody should have to care what a “kernel” is unless they’re writing their own. “Kernel” means nothing to someone using the container (which is almost everybody). The name is not just distracting, it’s outright misleading — it actively suggests “this is not a container”, when in fact it’s exactly a container, keeper of the Get<T> method.
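For anyone who hasn’t tried it, here’s roughly what that usage looks like, as a minimal sketch (ReportPrinter is a made-up class, and the exact namespace varies between Ninject versions):

// Minimal usage sketch. Namespace note: in the 1.x versions this lives in
// Ninject.Core; adjust the using to match whichever version you have.
using Ninject.Core;

public class ReportPrinter
{
    public void Print() { /* ... */ }
}

public static class Program
{
    public static void Main()
    {
        // The "kernel" is just a container. Automatic self-binding means a
        // concrete class like ReportPrinter resolves with no Bind<>() call.
        IKernel kernel = new StandardKernel();
        var printer = kernel.Get<ReportPrinter>();
        printer.Print();
    }
}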

I really want to give Ninject an honest try, but this seriously bugs me. Why not “IContainer”? Or something that isn’t a lie? (Yeah, I know they’re ninjas and all, but shouldn’t the subterfuge be reserved for the enemy?)

WPF oddity of the day: attached/dependency properties and breakpoints

Sunday, May 24th, 2009

Silverlight doesn’t have any real support for ICommand, so I wrote an attached CommandBinder.Command property. Then it wasn’t working (at least when I compiled for WPF, which is a lot easier to debug), so I tried putting breakpoints in my SetCommand and CommandChanged methods. Neither ever got hit.

It turns out that WPF’s attached-property pattern is weird: you have to declare a getter and setter method, but they don’t get called.

Here’s some sample code for an attached property, just for reference:

public class CommandBinder
{
    public static DependencyProperty CommandProperty =
        DependencyProperty.RegisterAttached(
            "Command", typeof(ICommand), typeof(CommandBinder),
            new PropertyMetadata(CommandChanged));
 
    private static void CommandChanged(DependencyObject d,
        DependencyPropertyChangedEventArgs e)
    {
        // do something
    }
    public static ICommand GetCommand(DependencyObject d)
    {
        return (ICommand) d.GetValue(CommandProperty);
    }
    public static void SetCommand(DependencyObject d, ICommand value)
    {
        d.SetValue(CommandProperty, value);
    }
}

You have to have a setter method; otherwise you get a compiler error. (Apparently the compiler has some logic to enforce the design pattern.) But as far as the XAML parser is concerned, the setter method is more of an attribute — it says “Yeah, I’m settable.” The XAML loader doesn’t call SetCommand; it calls SetValue directly on the target object.

I did some experimenting, and it looks like this applies to dependency properties as well. You have to declare a corresponding CLR property, or you’ll get a compiler error when you try to set the property in XAML; but the XAML loader never actually uses the property.

In both cases, your setter could actually do nothing at all, and the XAML would still set the property’s value correctly. (But then anyone who tried to set the property programmatically would be in for some head-scratching. So don’t do that.)

I had wondered, in the past, why you would pass a PropertyMetadata with a change-handler delegate, rather than just putting the on-change logic inside the SetXXX or property setter. Now I know.
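For comparison, here’s a quick sketch of the same pattern on a normal (non-attached) dependency property (class and property names made up). The CLR wrapper is nothing but GetValue/SetValue calls; the real on-change logic hangs off the PropertyMetadata, so it fires no matter how the value gets set:

using System.Windows;
using System.Windows.Controls;

public class CaptionedTextBlock : TextBlock
{
    public static readonly DependencyProperty CaptionProperty =
        DependencyProperty.Register(
            "Caption", typeof(string), typeof(CaptionedTextBlock),
            new PropertyMetadata(CaptionChanged));

    // The XAML loader never calls this wrapper; it goes straight to
    // SetValue. The wrapper exists for the compiler and for code that
    // sets the property programmatically.
    public string Caption
    {
        get { return (string) GetValue(CaptionProperty); }
        set { SetValue(CaptionProperty, value); }
    }

    private static void CaptionChanged(DependencyObject d,
        DependencyPropertyChangedEventArgs e)
    {
        // Fires for XAML, bindings, and code alike.
        ((CaptionedTextBlock) d).Text = (string) e.NewValue;
    }
}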

(Actually, in my experimenting, I was able to get it to call the property setter. All I had to do was misspell the property name that I passed to DependencyProperty.Register. So that’s why I have to tell it the name (something else I had always wondered) — so it can do a dictionary lookup. If the dictionary lookup fails, it falls back on trusty-but-slow Reflection.)

So that explains why my SetCommand wasn’t being called. Why wasn’t CommandChanged firing either? Stupid mistake on my part — I hadn’t set the DataContext, so the binding expression failed.

Geek quote of the day: Git and Home Depot

Thursday, May 21st, 2009

Git is intimidating. It’s a distributed revision-control system, so it’d work online or off, and it’s got tons of cool toys (like git-bisect to automatically figure out which commit introduced a bug). But good luck figuring out which of the umpteen zillion commands you actually need to get something done. (I cheat — I IM my friend Sam and say, “Help?”)

Git has everything from fine-grained commands to handle a tiny part of a single commit, up through high-level commands that mow your lawn and make Julienne fries, and I have no idea how to tell which is which. Like I said, intimidating. Git has been described as not so much a revision-control system, but rather as a toolkit you can use to build your own revision-control system that works exactly the way you want it to. Which is kind of like writing your own lexer, parser, keyhole optimizer, runtime library, memory allocator, JIT compiler, and IDE, and designing custom hardware while you’re at it, and mining the silicon yourself, so you can write a programming language that works exactly the way you want it to.

And it doesn’t help that Git’s Windows support has been very slow in coming, though apparently now it’s mostly as good as on other platforms.

Yesterday I was working on a toy project that might amount to something someday, but that I was more likely to lose interest in after a few days. And I wanted revision control for it (I like diffs). But it didn’t feel worth creating a Subversion repository for something potentially throwaway. Git stores your repository right there in your working copy, which felt like a good fit. So I finally installed msysgit, and promptly found that it’s got some awesome features (I was skeptical of the index when I first heard about it, but it’s actually very cool, especially through the GUI — you can commit just certain lines from a file!… not sure how you run the unit tests on them, though), but that it’s got some stuff that truly sucks (the people who wrote the Git GUI have never heard of window resizing, word-wrapping, or context menus — and the terminology is deliberately confusing. If I can’t figure out how to revert a file, there’s a problem somewhere.)

While reading around, I happened across a mention of Mercurial, another distributed revision-control system, and I started sniffing around for comparisons between Mercurial and Git.

I hardly ever laugh out loud, but Jennie, from another room, called, “What’s so funny?”

From Use Mercurial, you Git!:

I ordered a version control system, not a toolkit for building one! If I’d wanted building blocks for rolling my own, I’d have gone to Home Depot and bought a 1 and a 0.

