Diamondback: Method ‘VirtualPC.CommitSomeStuffAndRollOtherStuffBack’ not found

So my first try didn’t go so well. I still don’t know why — problem with Diamondback? problem with Virtual PC? problem with the Registry? problem with the stuff I checked or didn’t check during installation? No clue. But on my second attempt to install Diamondback and compile our code base, I decided to be a little smarter about the whole process.

Step 1: I made another copy of our master “clean Windows XP” virtual hard disk, and created a new virtual machine. Then I called IT (before I installed Diamondback) to join this new machine to our domain. (Once again, Matt was over within five minutes. Our IT people are awesome.) Then I shut the VM down and saved a copy of the virtual hard drive. Now, if I need to start over and reinstall Diamondback again, I can start from this snapshot, and not have to pull Matt over to this end of the building again. I hate bugging people when I don’t have to.

Step 2: Install Diamondback inside the VM.

Step 3: Shut down the VM and enable undoable disks.

This is an awesome feature. While the VM is running, anything it writes to its “hard disk” isn’t saved directly back to the virtual hard-drive file on the host. Instead, the changes are written to a separate log file. Then, when you shut the VM down, you’re asked whether you want to permanently merge the changes into the virtual hard drive, or throw the changes away.

Since I suspected that something in my installation and/or Registry got hosed last time, this feature is perfect. Just put the program files on one virtual hard drive, and enable undo on that drive. And put my source files on another drive without undoable support. That way, whenever I want to reboot, I can revert my Program Files and my Registry to their original, pristine state; but any code changes I’ve made for Diamondback compatibility, and any .bpls and .dcps from any packages I’ve already compiled, remain untouched. Beautiful.

Can’t do it.

It turns out that Virtual PC’s “undoable disk” checkbox is per virtual machine, not per drive. You can’t make virtual drive C undoable and drive D not undoable. You can’t commit one drive on shutdown and revert the other, either. There’s no UI for it. They simply never thought anyone would want to use the feature that way, so they didn’t put it in.

I’m pretty sure that VMWare does let you make some disks undoable and some not. And I actually do have a license for VMWare back at my desk. But what I don’t have is a ready-made VMWare virtual hard disk with Windows XP already installed, and installing an OS in a VM takes a long time. Sigh.

Okay, Virtual PC can’t do what I want through this feature alone. Might it be possible through some combination of features? If Virtual PC can do it, then that’s considerably easier than installing VMWare and starting from scratch.

Well, yes. They can do it. They have a feature called “shared folders” (I think VMWare actually had this feature first) where you pick a folder on the host, and map it to a drive letter in the VM. Since those files aren’t on the virtual hard disk, they won’t be undone. Fits the bill. But it actually operates using Windows file sharing, on the VM side at least, so you’d be taking a sizable speed hit. Viable, but it’s worth looking into other possibilities first.

They have one other feature that looks promising: you can have the VM talk directly to a physical hard drive on the host computer. Combine that with an undoable drive, and you get nirvana. And we had a spare partition with almost nothing on it. I cleaned it out and tried to point Virtual PC to it. But it only showed me one available drive, and I could only set that one up as read-only.

I hunted around a little more before I figured it out — when they say they connect to a hard disk, they mean a hard disk. Not a partition — an entire physical hard disk. This machine had multiple “drives” in multiple partitions, but they were all sitting on the same piece of hardware. And since that hard drive was in use by the host OS, Virtual PC said, “No, you ninny, I’m not going to let you have two OSes writing to the same hard drive at the same time. That would be stupid.”

Fair enough.

I checked with IT, but they were fresh out of spare hard drives. I thought about running out and buying one over my lunch break, but decided against it.

So, shared folders it was. I suspected they would be slow, but decided to follow the Rule of Optimization: actually measure the performance before you decide it’s too slow.

And the speed actually was okay, up to a point. But speed isn’t what killed shared folders as an option. Ah, no. They got a little more creative than that.

The Power of Independent Thinking

My company does an employee survey every year.

There are a few things wrong with the way they do it. For example, one of the company goals is to get a certain score on the survey. A pretty high score. And when it comes time for raises, one of the ingredients in the formula is, “How well is the company meeting its goals?” In other words, the way everyone answers the employee survey directly impacts everyone’s raises.

Now, the whole point of any kind of self-evaluation, if you want to get some good out of it, is to point out things that you don’t do so well, so you can go find ways to improve them. Constructive criticism, and all that. But when you attach a guillotine to the low end of the rating scale, how many people are going to be honest if there’s a real problem?

I point this out in every year’s survey, too. And the managers, and the owner, insist that they read every single comment. And yet, every time another year rolls around… guess what? Yep, raises are still tied to the employee survey.

(To come back to reality for a moment, though: if that and the thermostats are the biggest problems with this company, I’m actually doing pretty darned well.)

Another thing they do is calculate the standard deviation on each question. And the owner was disturbed at this past year’s survey, because the standard deviations were higher than they had been in years past. He said this was a bad thing, because people weren’t all on the same page.

He actually believed that this was a bad thing. And that disturbs me.

If everyone thinks the same thing, that means that nobody is looking for anything new. Nobody is seeing the flip side of an issue. Nobody is noticing that there’s an opportunity to do things in a different way than the way they’ve always been done. There’s no innovation. There’s no thinking. There’s just yes-men and groupthink.

There’s an old saying that if two people doing the same job agree all the time, one of them should be fired.

(For completeness, I’ll fill in the rest of the saying: “If they disagree all the time, then both of them should be fired.” But that’s neither here nor there.)

But interestingly, some people take it even farther than that.

Tom Peters posted a very interesting article called The Power of Independent Thinking.

You’ve got a huge marketing decision to make post haste. (Or a decision about War & Peace if you’re, say, President.) You gather 10 experts in the field. Lock them in a room for 72 hours. Ask them to come up with a best estimate of, say, success of a New Product you’re close to launching. The process is better than nothing—maybe.

Alternate: Select 10 experts from disparate fields, some closely associated with the decision at hand, some not. Tell each one to stay isolated in his or her individual office, lock and bar the door, turn off all phones and computers—and come up with a best estimate in 72 hours, which will then be emailed to you. You, in turn, average their estimates and take the result as the collective output. This process/result is likely to be … Solid Gold!

Surowiecki’s argument (supported by a ton of evidence and research, from every field you can name and some you can’t) is that crowds, even crowds of non-experts, are wise beyond measure. IF JUDGEMENTS ARE TRULY INDEPENDENT … AND 100% PROTECTED FROM PRESSURE AND GROUP-THINK.

Go read the rest of the article. Every now and then, I stumble across an article that is a real gem. This is one of them.

Um… Diamondback and QualityCentral

It’s not there.

I was going to submit an enhancement request to Borland QualityCentral. But… there’s no category for Diamondback feedback. There’s no “Diamondback” project in the dropdown list, and the “Version” list for Delphi stops at 7.0, while the Version list for Delphi for .NET stops (and starts) at 8.0.

There’s this funky “Public Beta – Delphi” in the Project dropdown list, but (a) the Diamondback preview really wasn’t a public beta — they’re holding pretty hard and fast to “conference attendees only”; (b) the only version listed in the Version dropdown is 1.0, which is not at all suggestive of Diamondback; and (c) nobody’s reported anything in there, so it’s all too possible that if I report something there, nobody will be watching to see it.

So, I did the next best thing: I filed a QualityCentral enhancement request for QualityCentral. It’s QC# 9167. Go vote for it.

(I would send you to their Web interface, but it’s broken with a “Directory Listing Denied” right now. Hmm, time for another QC report…)

Diamondback: My first attempt to compile real code

When I got back to the office after BorCon, I wanted to get a feel for how Diamondback’s refactoring tools would work with our real, live production code base.

But I had to install it first.

I was a bit leery of installing it on my production machine. That machine is touchy enough as it is. I mean, we have Delphi 8 code that compiles on some of our development PCs and doesn’t compile on others. (Well, we used to. Then Brian renamed a directory, and his compiler now misbehaves the same way as Sam’s and mine. We have code in our code base that should not compile, but does.)

My best guess is that our Delphi 8 problems have something to do with the different computers having had different combinations of Microsoft Visual Studio, Microsoft Visual C#, and Borland C#Builder installed on them at various points in the past, combined with the order in which we installed Delphi 6, Delphi 7, and Delphi 8 on each machine. But that’s just a guess. When you come right down to it, I really have no idea why the Delphi 8 compiler misbehaves differently on different computers. After the time we’ve spent wrestling with it, I’m just happy that it works (for the most part).

But I wasn’t about to do anything to tick it off. I was not about to install the Diamondback preview on there. (After all, it might look bad if something went wrong and I wasn’t able to compile the app that I’m technical lead on.)

I asked my boss if we had any spare machines around, and unfortunately, we didn’t. What I wound up settling for was a Virtual PC virtual machine.

(If you haven’t used Virtual PC and/or VMWare, they let you run an entire “computer” inside a window. Ward’s Wiki has more info on the VmWare page.)

I created a new virtual machine, installed Diamondback, and then realized that I needed to call IT to join the virtual machine to the domain, so I could see our network drives (where VSS lives). Our IT guys are awesome; Matt was over within five minutes to take care of it. I got the latest code and started compiling away.

Now, compiling our code base isn’t quite as easy as just hitting Compile, because we have a large number of third-party components and home-grown components. So the first step is compiling all the packages. Actually, no; the first step is figuring out the dependencies between all the packages, so you know which one to compile first. Actually, no; the first step is removing the TeeChart that ships with Delphi, so we can use the TeeChart Professional that we purchased some time ago.
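
In case you’re wondering how those dependencies show up: each package’s requires clause spells them out. Here’s a minimal sketch; the package and unit names are invented, not our actual code base:

    package OurDbComponents;
    { Hypothetical in-house package, purely for illustration. The
      requires clause is what dictates compile order: rtl ships with
      Delphi, but OurCoreUtils is (supposedly) one of ours, so it has
      to be compiled first, or this package won't find its .dcp. }

    requires
      rtl,
      OurCoreUtils;

    contains
      DbGridEx in 'DbGridEx.pas';  { hypothetical unit }

    end.

Chase the requires clauses through all fifty-odd packages, and the compile order falls out.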

On this first iteration, I never even got as far as that last step (removing TeeChart). But I did nose around and figure out the dependencies between all the packages, and stopped to document them on our wiki, to make it easier for the next guy. And then I started compiling away.

The upgrade to Diamondback, I’m thinking, is going to be about our easiest Delphi upgrade yet. All I had to do was open up the package, tell Delphi to upgrade it to a Win32 package (not .NET), go into Project Options to set the search path and output path (which we’ve never actually had set properly in the project files we check into VSS — I was doing some cleanup while I compiled), and then do the compile. Repeat approximately fifty times. We have a lot of packages.

The very peculiar thing is, every time I compiled a package, Diamondback got slower. And slower. And slower. There was a funny delay whenever I compiled, and there was a funny delay whenever I went into Project Options. In each case, the CPU would go straight to 100%, and stay pegged there for some amount of time — by the time I gave up, it had reached about two minutes (!). Just spinning the CPU; not doing anything else. And after that little interlude, it would happily proceed with the compilation, or would happily pop up the Project Options dialog. So I had to deal with this delay at least twice for every package. And it was getting ridiculous.

Now, Delphi 6 often gives a delay before opening Project Options, but it’s like fifteen seconds, not two minutes. And I never did figure out what was going on. Rebooting the virtual machine didn’t work. Quitting and restarting Virtual PC didn’t work. Rebooting the host system didn’t work. I even tried repairing the Diamondback installation, but I don’t think that was actually working properly, because (a) it ran so darned fast and (b) it never actually prompted me for CD #2. (And no, it didn’t fix the problem.)

And I was only about halfway through the packages by this point. Obviously, something was wrong, and it didn’t look like I could fix it with anything short of a reinstall.

But reinstalling software inside a virtual machine is silly. At that point, you’re better off just creating a new virtual machine and starting fresh. But I decided to be a little smarter about it this time.

Diamondback: First observations from the field

Just a few scattered observations…

I had borrowed a laptop from work and taken it to BorCon, figuring that there might be some demo CDs or something that I could play around with while I was there. (Little did I know.) And since that machine was going to have its hard drive wiped and re-Ghosted when I got back, I figured I’d play around with some beta software, so I installed Visual C# Express. I’ve already blogged a bit about that.

Since this wasn’t a live development machine, installing the Diamondback preview was a no-brainer. But it was also just a toy machine. Sure, I played with it a little, and worked on a little toy app in the airport. Sync Edit is pretty cool, although as it stands, you can only move the cursor around with the keyboard; I wish they’d let me click on one of the underlined symbols without canceling Sync Edit mode. (I’ve gotta get that one into QC.)

And Declare Variable is going to be very cool. At first, I thought that I would prefer the C# way, where you just declare a variable wherever you need it; you don’t need IDE support for a Declare Variable refactoring, because you don’t have the hassle of jumping up to the var section and then back down to the code you’re working on. But after trying it, the Delphi way is actually much, much cooler… because the IDE will just magically figure out what type that new variable should be.

Think about it. Say you’ve got a mile-long expression, and you don’t know what the hell it does, and you’re trying to break it down, using Fowler’s “Introduce Explaining Variable” refactoring to get a handle on the code. So you’re pulling out sub-expressions and assigning them to temporary variables. So far so good. But in C#, if you want to pull part of the expression — say it’s something like “Workspace.Persistence.Tables[tableIndex]” — into a temporary variable, you have to start by declaring the variable. And before you can do that, you have to know what type the variable has to be. When it’s the value behind an indexer, as in this example, figuring out that type is just that much more of a pain. Even if you originally wrote the code, you might not know that datatype off the top of your head. (Heck, especially if you wrote that code.)

But in Delphi, you just type Foo := Workspace.Persistence.Tables[tableIndex]; and a red wavy underline appears under Foo. You right-click it, select “Declare Variable”, accept the defaults in the dialog that pops up, and the IDE just figures it out. No wandering through IDE tooltips looking for something that actually tells you the datatype, no hunting through SDK docs, no firing up Reflector. It just does it. Now, that’s cool.
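
To make that concrete, here’s roughly the before and after. This is just a sketch: the surrounding method and the TDataTable type are made up, since the whole point is that the IDE, not you, supplies the real type.

    // Before: Foo isn't declared yet, so it gets the red wavy underline.
    procedure TWorkspaceViewer.ShowTable(tableIndex: Integer);
    begin
      Foo := Workspace.Persistence.Tables[tableIndex];
    end;

    // After "Declare Variable": the IDE writes the declaration itself,
    // using whatever type the Tables indexer actually returns.
    // (TDataTable is purely hypothetical here.)
    procedure TWorkspaceViewer.ShowTable(tableIndex: Integer);
    var
      Foo: TDataTable;
    begin
      Foo := Workspace.Persistence.Tables[tableIndex];
    end;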

At the Meet the Team session, I was very interested to learn that, contrary to what was said last year, Diamondback does still support old-style objects. Mainly because of a very vocal minority of the Delphi community who really, really wants them to stay. (A minority of one, as it happens. I actually sat next to him in a couple of sessions this year, and Danny said he’s done some amazing things with those old-style objects.)

Not only does the compiler still support old-style objects, the refactoring engine supports them as well. However, its support is incomplete: the engine can’t find constructor and destructor calls if you use the New(MyVar, Init); and Dispose(MyVar, Done); syntax. And that’s exactly what our old-style objects did. (Of course, I can’t imagine this being high on their priority list to fix.) I’d been hoping that we would be able to use Diamondback to refactor those old-style objects into new-style ones, but it wouldn’t have worked too well without being able to see the constructors and destructors.
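
If you’ve never run into old-style objects, here’s a minimal sketch of the syntax in question. Every name here is invented; it’s just enough to show the extended New/Dispose calls.

    program OldStyleDemo;
    { A bare-bones sketch of an old-style object, with the extended
      New/Dispose syntax. All names here are made up. }

    type
      PLegacyThing = ^TLegacyThing;
      TLegacyThing = object    { note: object, not class }
        FValue: Integer;
        constructor Init(AValue: Integer);
        destructor Done;
      end;

    constructor TLegacyThing.Init(AValue: Integer);
    begin
      FValue := AValue;
    end;

    destructor TLegacyThing.Done;
    begin
    end;

    var
      Thing: PLegacyThing;
    begin
      New(Thing, Init(42));    { allocates and calls the constructor }
      Dispose(Thing, Done);    { calls the destructor, then frees }
    end.

It’s those last two calls that the refactoring engine can’t see.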

(Not that it matters anymore — with some help, we got rid of all our old-style objects last week. But that’s another story.)

This much I was able to glean just from playing around with the Diamondback preview. But that was with no real-life source code for it to chew on — just little stuff I made up on that laptop.

When I got back to the office, I decided to try it out with some real code… (Stay tuned.)

No Purchase Necessary

The gym where my wife works is running a contest. Obviously, employees and family members aren’t eligible, but when I went to pick her up tonight and was waiting for her to pack up her stuff, I took a look at the pad of entry forms.

The fine print at the bottom starts off like this: “No purchase necessary. Must be 18 years or older and a member of [gym name] in good standing to enter to win.” (Gym name omitted to protect the guilty.)

That’s right. No purchase necessary to enter, except for the minor matter of the purchase that’s necessary to qualify to enter in the first place.

Jennie isn’t actually allowed to talk to customers about how much a membership costs — only the membership counselors can do that. (You’d think it was a union shop, for crying out loud.) So she wasn’t 100% sure how much a membership costs, but she did mention that not only do you have to pay a hefty enrollment fee, but you also have to pay a hefty “processing fee”. I checked their Web site to confirm the exact details, and it’s true — a $119 enrollment fee, plus a $79 processing fee, plus $36.99 a month for the ongoing dues. Ouch.

So there actually is a purchase necessary, to the tune of at least $234.99 ($119 + $79 + the first month’s $36.99). But you know, that’s such a small thing.

Traffic

BorCon has been pretty good to me. Or to my blog, at least.

I was downloading my server logs a couple days ago, and noticed that the size of each day’s logfile seemed to be back down to about the same as before BorCon. Then I looked more closely and noticed an extra digit. My server logs before BorCon averaged somewhere around 50k a day, and now they’re more like 400k a day. Whew.

My biggest day was September 13, the Monday of BorCon. My server log for that day was a little over 3.5 megs. Most of that came from everyone who linked to my blog post on John Kaster’s preconference tutorial on what’s new in Diamondback. (Actually, as many non-English sites linked to me as English ones.) My blog post is evidently the definitive word on that session. It’s even been plagiarized.

Since BorCon started, I’ve had 456 different referers appear in my logs (not counting links from my own domain, or search-agent spiders). And I now have 140 people who regularly (i.e., more than once within the past week, as identified by both IP address and user-agent) read my RSS feed. There used to be nine.

(Wow. Maybe I’d better come up with something interesting to say one of these days.)

Oddly enough, all this hasn’t done much for my Google rank. A Google search for joe white blog places me within the top ten, and has for quite a while; “joe white” blog and joe white’s blog have long pegged me at #1. But in a search for just “joe white”, I’m still lost in the noise — largely because of some country singer with the nerve to be named Tony Joe White. (Dude, do me a favor and drop the middle name, ‘kay?) In that search, I’m somewhere around #368 of 831, when I’m in the search results at all. (Sometimes Google’s servers get out of sync, cache weird results, and Google shows duplicate URLs in its results, while I disappear into a seam between search-result pages. I’ve seen it happen twice now. It’s very disquieting to think that Google might have a bug.)

Too little, too late

So my Hotmail account just got upgraded to 250 MB.

But I’m hardly using it anymore, because lately, their spam filtering has been pitiful. I’ve been reporting something like five spams a day from the same spammers for the past three weeks or so. And they still keep getting through — offers from the exact same two or three spammers, over and over and over again. So I’ve been moving pretty much everything to my GMail account.

At the same time, sadly, I’ve gotten my first spam on my GMail account. My own damn fault, of course; I posted to the newsgroups via Google’s web interface, and it stuck my GMail address out there for all the world to see. But on the bright side, only two spam messages have gotten through so far, even though my first post was a week ago. Let’s hope that trend continues.

What’s really amusing about my Hotmail account is that, when I checked it yesterday, there was a notification from Hotmail Member Services saying “You’re running out of space!” from a few days prior, still sitting in my Inbox. Oddly enough, they didn’t think it was important to send another notification when they actually upgraded my inbox to 250 MB. Evidently, they assumed that the need to clean out the spam would keep me coming back often enough that I’d just notice the new limit on my own.

It turns out they were right. Of course, now that they’ve upgraded my mailbox, I no longer need to check my spam nearly as often…