Diet binary

We’ve been working toward a deadline at work, and a couple of weeks ago, our boss brought in a big cooler, which he’s been keeping pretty well stocked with caffeinated beverages. Among them are various diet breeds, and today some of us got into a discussion of them, particularly Pepsi One and its kid brother, Coke Zero.

The most amusing comment of the afternoon was that hey, now that we’ve got both one and zero, we could build a computer that was powered entirely by Coke and Pepsi.

XP: Constant shippability

One of the tenets of XP is that, at the end of each iteration (every 1-2 weeks), the software should be shippable.

Whether it actually gets shipped isn’t particularly relevant. Management could decide to ship it, if that was the right business decision for them to make. But that’s not the real reason we keep the software shippable.

The real reason is this: You don’t know how long something new will take, until you’re done. Something will always come up.

If you have ten major new feature areas to work on for the next version of your software, it may be tempting (it was to our team) to work on all the hardest things first, the high-risk changes. We were doing that a few months back: doing the hard things first would give us extra time to test those changes, we reasoned. So we thought, we’ll work on all ten major features at once.

This, we have since come to realize (after being told several times by our XP coaches), is not necessarily the right thing to do. It’s better to get things finished, because as you work on major feature A, you’ll realize that you have to do more work than you originally thought, or that you don’t understand something you thought you did. Obstacles come up, inevitably. But you don’t know how much those obstacles are slowing you down. And if you have ten projects going at once, you have at least ten times the uncertainty.

Alistair Cockburn draws a parallel to packing the contents of a house. It’s a bad idea to pack all the rooms at once, because you have no clue how much progress you’re going to make. It’s much better to pack one room at a time, and get it finished before you move on to the next one. That gives you a much better idea of how fast you’re going, and how soon you’ll finish, and whether you’re going to be in deep trouble the day the moving truck arrives.

More importantly, when you finish packing a room, you want it to be completely packed. From Cockburn’s article:

It is practically impossible to tell whether one is 60% or 70% done (speaking to both software and room-packing). For most of the project, one sees neither the true size of the task at hand, nor the (slower than expected) rate of progress. Missing both of those, it is impossible to tell when one will get done. In both cases, an ugly surprise awaits at the end, when a myriad little unexpected items suddenly become visible (think about when you loaded all the boxes into a moving truck and walked back into the house, only to discover all sorts of things that suddenly “appeared” when everything else was taken out).

There should be nothing remaining in that room when it’s packed — as Cockburn puts it, “not even a sock”. Because if there is a sock left, then there might be something else, too. Having helped a friend move recently, and later having to go back to her apartment to pick up some items she hadn’t brought along, I can attest to this. If there’s background clutter, it’s that much harder to tell where the foreground is. You work on a feature until you’re satisfied that a customer could use it tomorrow. Then, and only then, it’s done, and you can move on.

Even if you have to have all ten features done before you ship, you still need to focus on a few at a time, and get each one done. Until it’s done, you have no idea what kind of progress you’re making, and you have no way of predicting whether you’ll meet your ship date. Even if the news is “we’re in trouble”, you’ll know that sooner, not later, and can tell management so they can act on that information. Nobody wants to hear “we can’t possibly ship for another two months” the day after the drop-dead date. Nobody wants to hear that anytime, but if you tell management you’ll slip by two months, and you tell them three months ahead of time, they can do something about it: cut features, change the date, hire more people, whatever it is that needs to be done.

We finally figured out the sock thing a few months ago. We rearranged our tasks, and have been knocking out one major feature after another. And with the possible exception of bugs (we still haven’t achieved the XP goal of always fixing bugs as soon as they’re found, but we’re getting darned close), we’re all pretty confident in our progress. The days of “well, we’ll probably ship one of these months” are gone, and I, for one, am thrilled.


When you see a sex link in my blog, it’s usually comment spam. Not this time. Just a couple of things I’ve run across that I can’t not pass along.

Link #1 is from Ali Davis. A while ago, I blogged about her article “Outsourcing Rejection”, a.k.a. “Too late, Mr. Jenkins. You were an F7 back at question 2.” More recently, I was re-reading that article, and noticed the author’s bio at the end, which included a link to another of her works: “True Porn Clerk Stories”. Wow. She’s got a way with words, and this thing is just too good not to read. But don’t blame me if your laundry and dishes don’t get done for the next couple of days while you read the whole thing. (Don’t forget to eat now and then, too.)

Link #2 is from Tsarina, whose blog I stumbled onto a while ago, I’m not even sure how anymore. I follow her blog because she’s a teacher in an inner-city school, and one who really cares about her students. That is not relevant to this particular link, though, so put it out of your mind before you read “Tsarina Goes to a Smut Party”. (Those of you who have read my OWL posts won’t be too surprised to hear that I’ll probably be passing this link on to some friends from church.)

Pathological QueryInterfaces

After my last post (about getting an interface back out of a Variant), Sebastian asked a question about the code I had posted: “Have you tested that Supports() puts nil to the ‘out Intf’ parameter if the interface is NOT supported?”

That was a good question — I started to answer, and then realized that I didn’t know for sure. I thought it was safe, but I hadn’t done any detective work to verify that. So I went hunting. Here’s the implementation of Supports (the overload that takes an IInterface, which is the one I was using):

function Supports(const Instance: IInterface; const IID: TGUID;
  out Intf): Boolean;
begin
  Result := (Instance <> nil) and
    (Instance.QueryInterface(IID, Intf) = 0);
end;

I could see two cases where the Intf parameter might end up being non-nil in the event of failure:

  1. If the body of the method never assigns to Intf, and the compiler doesn’t magically nil out the variable.
  2. If QueryInterface returns false, but still assigns a non-nil value to Intf.

#1 first. I suspected that the compiler would magically nil out the variable, but I wanted to confirm that. First stop: the CPU view. I started by writing a trivial method with the same kind of signature Supports deals with:

procedure TForm1.Foo(out Intf);
begin
end;

And called it from my FormCreate. Then I did the usual “look at the CPU view, figure out what the compiler is doing, and add more stuff to make sure I’ve got it right” drill.

And yes, the compiler does indeed generate magic code to nil out Intf. I was expecting that magic code to be put inside my Foo() method, but it turned out to be in the caller:

Unit1.pas.36: Intf1 := nil;
0046038C 8D45FC        lea eax,[ebp-$04]
0046038F E8BC5BFAFF    call @IntfClear
Unit1.pas.37: Foo(Intf1);
00460394 8D45FC        lea eax,[ebp-$04]
00460397 E8B45BFAFF    call @IntfClear
0046039C 8BD0          mov edx,eax
0046039E 8BC3          mov eax,ebx
004603A0 E8CBFFFFFF    call TForm1.Foo

Notice those repeated lines. When it compiles the call to Foo (and sees that the interface variable is being passed to an out parameter), the compiler automatically calls @IntfClear before actually calling Foo. So I was wrong about the where, but right about the idea; the compiler does do some magic to nil the out parameter (when it’s a magically-memory-managed type, at least).

(Now that I think back on it, of course the interface-clearing happens at the caller. The Foo method has no idea what type is being passed to it, so it can’t call @IntfClear — the parameter might just as well be a string, a dynamic array, a float, whatever. When it’s an untyped out parameter, only the caller knows the type, so it’s the caller’s responsibility to clear it if need be.)

Okay, so question #1 is answered. Now how about question #2? More hunting…

I found the MSDN docs for QueryInterface, and they state: “If the object does not support the interface specified in iid, *ppvObject is set to NULL.” So according to the specs, any well-behaved QueryInterface, if asked for a GUID it doesn’t support, should both return an error code, and set its out parameter to nil.
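For comparison, Delphi’s own TInterfacedObject follows the spec; its QueryInterface looks roughly like this (a sketch from memory, not the literal RTL source):

```delphi
function TInterfacedObject.QueryInterface(const IID: TGUID; out Obj): HResult;
begin
  // GetInterface assigns the reference (bumping its refcount) on success,
  // and nils out Obj on failure -- exactly what the spec calls for.
  if GetInterface(IID, Obj) then
    Result := 0
  else
    Result := E_NOINTERFACE;
end;
```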

Next question: how sure am I that every QueryInterface in existence will always be well-behaved? This gave me pause. I mean, I can look at the way Delphi implements its own QueryInterface stuff, but what about COM objects? Can I be sure that any COM object, written by anybody in any language, is guaranteed to pass back nil? (This is the problem with duplicated information — it’s obvious from the out parameter whether QueryInterface was successful, but Microsoft chose to also return a flag saying the exact same thing, opening the possibility that those two return values could be inconsistent. Thanks, Microsoft.)

So I thought through it some more. If QueryInterface is asked for a GUID that a particular object doesn’t support, it will return E_NOINTERFACE. In that case, there are four meaningful things it can do with its out parameter:

  1. Set the out parameter to nil (or leave it alone, which amounts to the same thing, since the caller already set the value to nil).
  2. Set the out parameter to a garbage value.
  3. Set the out parameter to point to a valid interface, but don’t increment that returned object’s refcount.
  4. Set the out parameter to point to a valid interface, and do increment that returned object’s refcount.
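To make case #4 concrete, a pathological implementation might look like this (TPathological and its FInner field are made up for illustration):

```delphi
function TPathological.QueryInterface(const IID: TGUID; out Obj): HResult;
begin
  // Hand back a perfectly valid, refcounted reference...
  IInterface(Obj) := FInner;
  // ...and then claim no interface was returned. Screwy, but it compiles.
  Result := E_NOINTERFACE;
end;
```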

Now consider this code:

procedure X(Intf1: IInterface);
var
  Intf2: IInterface;
begin
  if Supports(Intf1, IFoo, Intf2) then
    IFoo(Intf2).DoSomething;  // Intf2 actually holds an IFoo at this point
end;

There’s a local variable of an interface type, so at the end of this method, the compiler automatically generates some teardown code (which checks to see if the reference is nil, and if not, calls Intf2._Release). So what happens with that teardown code in each of those four conditions?
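Before going case by case, here’s conceptually what the compiler-expanded method looks like (a sketch of the generated behavior, not literal source):

```delphi
procedure X(Intf1: IInterface);
var
  Intf2: IInterface;
begin
  Intf2 := nil;  // interface locals start out nil
  try
    if Supports(Intf1, IFoo, Intf2) then
      IFoo(Intf2).DoSomething;
  finally
    // the automatic teardown code:
    if Intf2 <> nil then
      Intf2._Release;
  end;
end;
```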

  1. When X returns and its teardown code executes, Intf2 is nil. Everything runs successfully.
  2. When the teardown code executes, Intf2 is a garbage value, but is not nil. So _Release gets called, and… boom! Virtual method call on a garbage value. If you’re lucky, this would result in an Access Violation. More likely, we’re talking “Do you want to send this error report to Microsoft?” and an infinite series of exception messages, until you finally get fed up and kill the process.
  3. The specs say that QueryInterface should increment the refcount of the object it’s returning. If a QueryInterface doesn’t do that, then it would return an object with a refcount of zero. When our perfectly reasonable code calls DoSomething, the refcount gets incremented to one, and when DoSomething returns, the refcount gets decremented — to zero, so the object frees itself. Then we get to the finalization code. Calling a virtual method on a freed object isn’t really any better than calling a virtual method on a garbage reference, so — boom!
  4. When X returns and its teardown code executes, Intf2 is non-nil, so its _Release gets called. The object’s refcount gets decremented to zero, and it gets freed. But we never called DoSomething, because QueryInterface claimed that it wasn’t returning a reference.

Cases #2 and #3 are wrong in painfully, immediately obvious ways. If somebody actually implemented a QueryInterface that way, they would find out about it, real quick, because all the code that used their object would crash in a hurry.

That leaves us with case #1, which makes perfect sense, and case #4, which is ridiculous — but still possible. Code could implement QueryInterface to return a valid reference, but still return E_NOINTERFACE. That’s just screwy, and it’s not what the specs say you should do, but it could happen.

I suspect that there’s some C++ code out there that wouldn’t bother to call Release if it had been told it wasn’t getting anything back. But that would lead to a memory leak, not a crash. So it’s possible that there’s some otherwise-valid QueryInterface code out there, somewhere, that implements case #4. It would be stupid, but it probably wouldn’t be immediately obvious.

Sigh. My variant-to-interface code assumes case #1. It will do the wrong thing in case #4; namely, it will return the interface that QueryInterface lied and said it didn’t have. And if QueryInterface is already lying to you, it’s entirely possible that the interface it’s returning is the wrong type. Then you get back into the virtual-method-calls-on-garbage-values thing. Ack.

Okay, here’s the bottom line. If you use a reasonable programming language (like Delphi), and you use the built-in QueryInterface stuff that’s already been written for you and doesn’t have evil bugs like case #4, then you can use my variant-to-interface code without modification. It’ll be safe (I just checked; TInterfacedObject.QueryInterface implements case #1, as all reasonable QueryInterfaces should). But if you’re dealing with COM objects from unknown sources, or if you just like to be cautious, you may want to take a look at the revised code Sebastian posted. It shouldn’t be necessary, but if you ever end up dealing with a pathological QueryInterface, the extra checking may be safer.
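I won’t reproduce Sebastian’s code here, but the general shape of the defense is something like this (SafeSupports is my name for the idea, not his):

```delphi
function SafeSupports(const Instance: IInterface; const IID: TGUID;
  out Intf): Boolean;
begin
  Result := Supports(Instance, IID, Intf);
  // Guard against a case-#4 QueryInterface: if Supports says no, make sure
  // we don't hand back a live reference anyway. The assignment releases
  // whatever a lying QueryInterface may have AddRef'd.
  if not Result then
    IInterface(Intf) := nil;
end;
```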

Moral: Don’t design stupid APIs that return the same information twice!

Getting an interface back out of a Variant

Delphi’s Variant type can contain an interface reference (and it holds a reference count, and everything else you’d expect). But this morning, we tried to figure out how to get that interface back out of the Variant, and it was a pain. Here are the results, in case anyone else ever needs to do such a thing (or in case I need to do it again).

We wrote a little test app, and we tried this first:

var
  Foo: IFoo;
  V: Variant;
begin
  Foo := TFoo.Create;
  V := Foo;
  Foo := V;
end;

The compiler didn’t like that last line, complaining about “Incompatible types: ‘Variant’ and ‘IFoo’”. So the obvious didn’t work. What if we tried “Foo := V as IFoo;”? Nope, no dice there either: “Operator not applicable to this operand type”. “Foo := IFoo(V);”? No again: “Invalid typecast”.

The Variants unit has some helper functions like VarAsString and VarAsWideString, but there’s no corresponding VarAsInterface. There’s a VarAsType, but that just returns another Variant, which doesn’t help us; the compiler still wouldn’t let us assign it into our Foo variable.

Internally, a Variant is just a struct. Is there a way, we wondered, to just say V.Innards.Interface? We dug around, found the TVarData struct that describes a Variant’s internal structure (not to be confused with TVarRec), and found this line in the record’s variant section:

varUnknown:  (VUnknown: Pointer);

Eww, they store the interface reference as a pointer. That ended that particular line of inquiry. I know you can cast an interface to a pointer and vice versa, but it opens up new cans of worms as far as refcounting goes, and we didn’t want to go there. There had to be an easier way to cast a Variant to an interface!

We finally turned to the Help. (If all else fails, read the instructions.) We found that, while you can’t assign a Variant directly into a variable of type IFoo, you can assign it directly into an IInterface. (You can also assign directly to IDispatch if you’ve got a dispinterface, which we didn’t. But you can’t assign a Variant to any other interface type.) We tried it, and it works:

var
  Foo: IFoo;
  Intf: IInterface;
  V: Variant;
begin
  // ... set up V with an IFoo instance
  Intf := V;
  Foo := Intf as IFoo;
end;

Or the simpler:

Foo := IInterface(V) as IFoo;

The only problem was, the “Intf := V;” line, and the “IInterface(V)” cast, both throw an exception if the Variant doesn’t happen to contain an interface. We just wanted “give me an IFoo if there is one, otherwise nil”, so we wound up with something like this (also changing the “as” cast to use SysUtils.Supports):

function VariantAsIFoo(V: Variant): IFoo;
begin
  if VarIsType(V, varUnknown) then
    Supports(V, IFoo, Result)
  else
    Result := nil;
end;

If the Variant contains a non-interface, or an interface other than IFoo, this returns nil. Otherwise it returns the IFoo.
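Hypothetical usage, assuming a TFoo class that implements IFoo (and that IFoo has a GUID, which Supports requires):

```delphi
var
  V: Variant;
  Foo: IFoo;
begin
  Foo := TFoo.Create;
  V := Foo;                 // the Variant now holds the interface
  Foo := VariantAsIFoo(V);  // non-nil: V contains an IFoo
  V := 42;
  Foo := VariantAsIFoo(V);  // nil: V contains an Integer now
end;
```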

I see refactorings

I see refactorings.

I think it has a fair bit to do with reading Ron’s book. I understood how to do refactorings before, but the book gave me an idea of the higher-level view — not just the nuts and bolts, but also the overall effect on the code, the way things move around over time. I used to know that refactorings help the code; now I understand a little more about how. The best way I can think to explain it is as the difference between looking at two still photos, and watching a time-lapse video of flower petals opening. Kind of a hokey example, but you see what I mean.

And last Thursday, I was pairing with Sam, and we found some slightly-tricky duplication that we decided to remove. It was a bit of a struggle in the barren blocklessness that is Delphi, but after a few false starts, we hit on something that worked, in two of the four sites. And then we had to put in a kind of a hack to get it to work in the other two.

And I looked at that hack, and I could see the code saying, “Just a little farther! Extract a class here!” I could see all the steps falling into place, from the hack all the way through to the elegant solution — and from there, who knows? The code would tell me. Maybe we’d be able to extract some more methods onto the new class, and then we’d really go to town. It was beautiful.

But we’d already gone past our estimated hours on that story, and we already had something that worked. I wanted to keep refactoring, but Sam talked me out of it, rightly pointing out that it was Thursday afternoon and we had another sixteen-hour story to finish before the weekend. So I sadly let it go. (And yes, we did finish that last story.)

Then today, it happened again. I was pairing with Brian this time, and we saw some code that was going to cause us problems. We needed to make a change, but we didn’t know if it would be safe; the code was kind of brittle, and we didn’t know if this change would break other things, which we might not even know about until later.

And once again, I saw the refactoring. The uncertainty all came from this object having state that was only valid some of the time; some methods set up this state, others tore it down, and still others relied on that state. Big headache trying to make our change, because we’d have to figure out when that state was valid and when it wasn’t.

This state didn’t really belong on that object, because it violated the “keep things together that change together” principle. It could have been a local variable if it was local to a single method, but it wasn’t; and besides, it was six variables, not one.

And I could see the refactoring-to-be, plain as day. Make a new class for that “sometimes” state. Find all the methods that depend on that state, and move them onto the new class; I could tell that we could do that, that it would work. Then, when the first object needed that state, it would just instantiate the new class, call the method or two it needed, and free the object. It would be beautiful.

This time we did have time to do that refactoring, and it was beautiful. And when we were done, the problems we’d been worried about were just gone: now it was easy to make our change, and tell that it wouldn’t break anything, because it was absolutely clear when the state was valid (we had an instance of the new class) and when it wasn’t. In fact, it didn’t even matter, because our change only had to be made on the new class, whose state was valid for as long as the instance existed. (Now if I could just figure out how to do this everywhere else we get grief from that kind of “sometimes state”.)

It was way cool. I was truly grokking the code, in a way I’ve never done before.

“I’m still geeking out about it.” — Syndrome

Ron’s book: Extreme Programming Adventures in C#

Sam loaned me his copy of Ron Jeffries‘ latest book, “Extreme Programming Adventures in C#”. I’m nearly done reading it, and will probably re-read it over the holiday weekend.

My parents tell me I’ve been programming computers since I was four years old, and let’s face it, if you spend 26 years learning a subject, you learn a lot. It’s not that often anymore that I’ll learn something seriously new and big from a computer book. It’s rare for me to look up from a computer book and say, “Ohhhh… now I get it!” or “Man, we have got to start doing that!” or “So that’s why!” I’m not saying I never learn anything from computer books, but they rarely give me that kind of “aha!” anymore.

But… well, damn. True, I’ve only been doing XP for five months or so, for however many grains of salt that’s worth, but I learned stuff from this book. Hell, Sam learned stuff from it, and he’s quite a bit better at XP than I am.

Of course, I could tell it was going to be a good book when I read this bit in the introduction:

In this book, Ron pair programs with you. As you read it, you will feel that you are sitting next to him, watching him — even helping him — to write C# code. You’ll read his thoughts, his fears, his complaints, and his rejoicings. You’ll laugh with him, and you’ll get mad at him.

I read that last sentence, looked up from the book, and thought to myself, “Yeah, that sounds about like pairing with Ron.” 😛

Okay, so here’s my recommendation: If you code, read this book; you will most likely learn something worth learning. Even if you don’t do XP. Even if you don’t do C# (or, for that matter, even if you do do C#), but keep in mind that the book isn’t a C# tutorial by any means.

Here are a handful of things I picked up from this book:

  • Before: I knew that XP says to start simple, and improve your design as you go. But I only knew how to do that at a small scale: individual tests, individual methods. I had no concept of how it actually worked at a larger scale, nor how it worked over time.
  • After: Ron writes an application throughout the course of the book. I got to see the design unfold. He removes duplication, sprouts new methods and new objects, trips over awkward code and then improves it, and I’m along for the ride. I learned, man. I saw how to improve your design as you go, while still keeping it as simple as possible (but, as Ron, like Einstein before him, points out, no simpler). I saw how to do it. I get it now — at least, a lot better than I did before.
  • Before: I knew (because Ron and Don and Brian had all told us) that we needed customer tests. I didn’t have a grasp of what that really meant. I envisioned the customer telling us what the program should do, and us transcribing that to Delphi code, DUnit tests. Brian Marick had shown us a demo of Fit, but I only grasped the technical side (how the programmers write code to make it work), not the customer side (how the customer actually writes tests and changes them and extends them).
  • After: Ron’s customer tests were text files in a directory, written in a trivial homebrew scripting language. Just enough to get the job done (growing the language as needed along the way). And they’re something the customer could actually write directly. And I thought, “We could do that.” I’m starting to see places we could use that sort of testing, even with an app as monolithic as ours is. It all came from seeing how to do it.
  • Before: I didn’t know how to write an end-to-end test. It hadn’t occurred to me even to realize that I didn’t know.
  • After: Ron’s customer tests are end-to-end, and as he states (and illustrates quite clearly a time or two), “End-to-end is farther than you think” — which doesn’t mean much until you’ve seen how to do it, and what happens when you don’t quite achieve it, both of which happen in the book.
  • Before: Especially when I was coding test-first, I would tend to code bottom-up: find the objects that do the most-detailed work, and get those tested and written; then work on the objects that use those tested-and-written objects; then work on the next layer up, and so on. That’s probably why, when I was technical lead on a project last year, the high-level tests used all the classes below them, with all their required inputs and idiosyncrasies. It’s probably why our tests had to create some really monolithic test-data objects just to run.
  • After: Ron’s a big proponent of top-down design. Write the top-level flow of the code, and where you don’t have the lower-level objects yet, “fake it till you make it”. And in the book, there are a fair number of examples of why, especially in the “Long, Dark Teatime of the Soul” chapter. I put this together with a recent example from my own experience, and I get it: if you code from the bottom up, you’re guessing at how the objects will really be used, so you’re guessing at their design: a sure recipe for brittle design, rampant YAGNI, or both. If you code from the top down, the code will tell you how it wants to be used, and you’ll get to the right design faster. I got to see this happening in the book.
  • I learned some interesting testing techniques — and reflected on the ways I would have implemented them instead. One or two of my ideas probably would’ve been better than what he wrote, but there were a lot of cases where I thought, “No, it’d be better to do it this way,” only to see his code do exactly what it was supposed to do.
  • And, of course, the book is pretty amusing. I learned, among other things, that it’s always Chet’s fault. (From the appendix of XP sound bites.)

Definitely a book worth the read. I’ll have to talk to my boss about ordering a few copies to keep around the department (if not a copy for each developer).

Interesting dorkage

Ancient Chinese curse: “May you live in interesting times.”

I got a bit of a sunburn on Thursday. That’s because I was standing around outside for about two hours in the early afternoon, with no sunscreen. And that is because I was waiting for a tow truck, and for a ride back to work, and for a call back from the insurance company, and for a call back from the car-rental company, and for the cop to finish writing out the ticket.

Yeah. I wrecked our remaining drivable car. (Jennie already blogged all about it.)

When you’re a dork like me, it’s good to have friends who will help bail you out. Thanks, Sam and Erica, for coming out to administer taxi service, hugs, chocolate, and assorted other remedies. And a sort of preemptive gratitude to everyone in my department, ’cause I know that anyone on the team probably would’ve dropped what they were doing to come rescue us. (Heck, the first two months I lived in Omaha, I didn’t have a car, but getting to work was never a problem. “Team” kind of took on a new meaning when I started working here.)

I don’t think about my friends often enough. It shouldn’t take a car wreck to remind me. (There I go being a dork again.)