After Harry Potter: interview with J. K. Rowling (no spoilers)

No spoilers in this post, but I link to a page that does have spoilers. FYI.

Today’s newspaper had an article that said J. K. Rowling had done “a recent 90-minute web chat” with readers, “her first public comment since Harry Potter and the Deathly Hallows debuted on July 21”, answering some of the over 120,000 questions that had been submitted. The article did not, however, mention anything about where this chat had taken place, or where transcripts could be had. Odd.

After a bit of searching, I’m doubting the bit about “her first public comment”, because Rowling got interviewed by Dateline, and that interview aired last Sunday. And most of the article’s tidbits from the chat could have been copied right out of the Dateline interview. Hmm.

Anyway, I didn’t know about the Dateline interview in time to catch it on the air, but NBC’s Web site has a transcript that covers a great deal (all?) of the interview. Lots of really good stuff, about what the characters are doing now, what it was like to kill characters off, fandom, and a fair bit else. It’s five fairly long pages, and worth the read (assuming you’ve already read Deathly Hallows, of course). I figured I’d blog the link to the transcript, partly to share, partly so I can find it again.

Transcript of J. K. Rowling’s interview with Dateline’s Meredith Vieira, July 29, 2007

New DUnitLite feature: Test insulation with TInsulatedTest and TSpecification

DUnitLite 0.3 (download) has a new feature: test insulation. Test-case instances live only as long as it takes to run them.

Here’s a quick illustration of the difference. Given this test case:

type
  TestFrabjulizer = class(TTestCase)
  published
    procedure TestFoo;
    procedure TestBar;
    procedure TestBaz;
  end;
With out-of-the-box DUnit, you’d see something like this:

  • Three instances of TestFrabjulizer are created, one for each test.
  • The DUnit GUI is displayed.
  • You select some tests and press Play.
  • The existing TestFrabjulizer instances are executed in turn.
  • You close the program. The TestFrabjulizer instances are freed.

But if you descend from DUnitLite’s new TInsulatedTest instead of TTestCase, the behavior looks like this:

  • The DUnit GUI is displayed.
  • You select some tests and press Play.
  • An instance of TestFrabjulizer is created for TestFoo, executed, and freed.
  • An instance of TestFrabjulizer is created for TestBar, executed, and freed.
  • An instance of TestFrabjulizer is created for TestBaz, executed, and freed.
  • You close the program.

This scheme has several benefits over DUnit’s default behavior:

Interfaces work the way you expect. You can put interface-reference fields on your test case, and they’ll be automatically freed when the test is done — just as they’re freed when you’re done with any other object.

type
  TestFrabjulizer = class(TInsulatedTest)
  strict private
    FFrabjulizer: IFrabjulizer;
  protected
    procedure SetUp; override;
  end;

procedure TestFrabjulizer.SetUp;
begin
  inherited SetUp;
  FFrabjulizer := TFrabjulizer.Create;
  // With TTestCase, you need to nil FFrabjulizer in TearDown.
  // With TInsulatedTest, you don't: it'll go away automatically.
end;

Inline records and arrays are practical. Much like the previous item, but instead of IFrabjulizer, use TMyBigRecord or array [0..3] of array [0..1023] of Double. With TTestCase, this is a quick way to make your test app run out of memory, especially if you have thousands of tests like we do. With TInsulatedTest, it’s a handy way to simplify your test: no mucking around with GetMem and FreeMem, and the memory is automatically initialized to all zero bytes.
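As a sketch of what that looks like in practice (a hypothetical test case; TInsulatedTest frees each instance as soon as its test finishes, so the array only occupies memory while that one test runs):

```delphi
type
  TestFrabjulizer = class(TInsulatedTest)
  strict private
    // 4 x 1024 Doubles (32 KB) per instance, freed right after the test.
    // Delphi zero-initializes instance fields, so no FillChar is needed.
    FSamples: array [0..3] of array [0..1023] of Double;
  published
    procedure TestSamplesStartAtZero;
  end;

procedure TestFrabjulizer.TestSamplesStartAtZero;
begin
  // The field arrives already zeroed; no GetMem/FreeMem bookkeeping.
  CheckEquals(0.0, FSamples[0][0]);
end;
```

With plain TTestCase, every such instance would sit in memory for the life of the test run, which is where the out-of-memory problem comes from.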

Tests can’t interact with themselves. If you run a test twice in a row, it gets clean instance variables each time, so there’s no chance of problems due to dangling state that you forgot to zero out in SetUp. (Of course, global variables can still screw you up.)


There’s a lot less to say here, but 0.3 also introduces a new TSpecification base class.

uses
  Specifications;

type
  FrabjulizerSpec = class(TSpecification)
  end;

TSpecification is really just TInsulatedTest, with very little added. Mostly this is to help stick with the “specification” mindset, rather than having a specification that descends from something with “test case” in its name.

TSpecification also, through a type-aliasing trick, makes Specify.That and Should... available to all of its descendants, without those descendant units needing to explicitly use the Specifiers unit. It’s kind of cute. (Specify and Should are, of course, still available outside of TSpecification.)
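Here’s a minimal sketch of what that buys you (the unit, class, and method names are made up; only Specifications, TSpecification, Specify, and Should come from DUnitLite):

```delphi
unit FrabjulizerSpecs;

interface

uses
  Specifications;

type
  FrabjulizerSpec = class(TSpecification)
  published
    procedure SpecifyAddition;
  end;

implementation

procedure FrabjulizerSpec.SpecifyAddition;
begin
  // No "uses Specifiers" needed here: TSpecification's type-aliasing
  // trick re-exposes Specify and Should to its descendants.
  Specify.That(2 + 2, Should.Equal(4));
end;

end.
```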

Delphi with a RAM disk… whee!

Our shipment of Gigabyte i-RAM cards arrived yesterday. You plug RAM sticks into the card, and plug the i-RAM into your disk controller: bam, you’ve got a 4GB RAMdisk.

These things are sweet. I know, because we already had them installed on two of our development machines. People would show up early in the morning so they could start pairing at one of those stations.

Here are some highlights:

  • Our Build All used to take almost nine minutes. With the code and .dcu files on the i-RAM, it’s down to three and a half minutes. (We’re all looking forward to getting one of these on the continuous build machine.)

  • Some of the slower Subversion operations — check out, commit, update, check for modifications (anything where Subversion has to scan the entire directory tree looking for changes) — are blazing fast.

  • GExperts Grep is speedy. On a hard drive, we used to have to wait half a minute or so for the first Grep to come back, just because it took Windows so durned long just to iterate the directory tree. (Later searches were faster, since everything was in the disk cache.) Now, every search is fast, and it’s a real jolt to go back to one of the disk-bound computers.

  • The thing even has a battery back-up, so even if the power goes off, your data will be safe for a while. (They don’t define how long “a while” is, but hey, you’re doing frequent commits, right?)

  • And good Lord, these things are cheap. They retail for around $140, plus the RAM (and you can buy cheap RAM, since the SATA bus is the bottleneck, not the speed of the RAM chips). For a business that makes its living doing disk-intensive work like compiling, this thing is a no-brainer. (And no, they’re not paying me commission.)

I wish we could get our regression-test datasets onto an i-RAM. But each i-RAM can only take a maximum of 4 GB of RAM, which won’t cut it for our 20 or 30 GB of test data. We haven’t tried yet, but I suspect you could RAID these things together… but these machines don’t have nearly enough expansion slots to take that many i-RAM cards!

The one big downside to the i-RAM is availability. It took us months to get our first two i-RAM cards. (For a while, we were wondering whether these things actually existed.) I suspect the manufacturer was the bottleneck, because we ordered them weeks apart from two different resellers, and when they finally arrived, they came within about a week of each other. Then, once we had tried them and decided we wanted them in all the computers, we had to wait again.

But they’re finally here, and by the time I get there in the morning, I suspect all the machines will have our source repository checked out onto the RAMdisk, ready and rarin’ to go. Sweet!

Delphi’s “refactorings only work in current unit” bug

Delphi’s refactoring support has never worked for us. If I try to rename something, it only renames it in the current unit. But it always worked fine in small test apps, which made the problem hard to pin down.

Today I finally found a repro case and submitted the “Delphi refactorings only work within the current unit” bug to QC. Hah! Take that, bug!

Now I just have to wait for the bug to work its way to “Opened”. If that ever happens. And then wait some more, with no further feedback, hoping it will get fixed and released someday. Sigh.

(If you’ve ever run into this bug, by the way, do feel free to stop by QC and add your votes.)

The issue is that the refactoring engine only pays attention to the unit you’re currently editing, plus units that have been explicitly added to the project (with Project > Add to Project), or automatically added to the project (by File > New). Other units are left untouched by the refactor. So Find References is fundamentally useless. And if I rename a class in unit Foo, unit Bar still refers to the old name, and won’t compile.
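A minimal sketch of the failure mode (hypothetical unit and class names; assumes Bar.pas is pulled in via “uses” but never explicitly added to the project):

```delphi
// Foo.pas -- in the project and open in the editor.
unit Foo;

interface

type
  TFrobber = class  // Rename this to TWidget via Refactor > Rename...
  end;

implementation

end.

// Bar.pas -- compiled into the project through the "uses" clause above,
// but never added with Project > Add to Project.
unit Bar;

interface

uses
  Foo;

type
  TBarThing = class
    FFrobber: TFrobber;  // ...and this reference is left untouched.
                         // After the rename, Bar.pas no longer compiles.
  end;

implementation

end.
```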

If the code compiles before a refactoring, and it doesn’t compile after the refactoring, was it really a refactoring? (Hint: No.)

The refactorings are supposed to work with everything that’s compiled into the project, following all the “uses” dependencies all the way down and affecting all of the units. I confirmed that in a conversation with Allen Bauer at BorCon two years ago; he was surprised to hear about the problem. That’s why I’m pretty sure it’s a bug, not a design decision. (Sadly, it took me this long to find a simple repro case. I’m not sure why; it was easy enough when I tried it today.)

We have maybe one or two files explicitly added to each project (plus a few more manually added to the “uses” clause, but they don’t count for this), out of thousands of source files. With something like thirty different projects that share different portions of the same code, and nine developers working simultaneously on the same code base, it’s simply not practical to keep every source file listed in the project file.

Here’s hoping we get an IDE with refactoring support sometime soon. (One other than Visual Studio, I mean.)