Kinda wishing I could keep using Delphi

I’ve been reading the recent debate about a “Delphi Community Edition”. Most of it seems pretty abstract: “I think it would help the Delphi community if…”

That’s all well and good, but I thought I’d share something less theoretical: something that directly affects me, a longtime Delphi geek.

I’ve been using Delphi since its first release in 1995. I still have one of the original brochures somewhere, with the quote saying “It’s going to change our lives, you know.” That quote still sticks in my head, all these years later.

Delphi has been a pretty good tool to work with. Sure, the IDE freezes for half a minute at a time when you type an open paren, Ctrl+click doesn’t work half the time, and the IDE still can’t pull off a cross-unit rename refactoring (and, and, and…). But damn it, Delphi lets you write programs that work and work well. Good God, just try to find a decent unbound grid in .NET. Or a WinForms control — any WinForms control — that even begins to approach the quality of Toolbar2000 or VirtualTree.

My company got acquired, and is moving to .NET. On the one hand, I honestly can’t say I mind too much; .NET has some sweet libraries — way fewer cool GUI controls than Delphi, but more cool everything else. I love not having to write one set of code to deal with an array and a different set of code to deal with a list. The unified type system rocks for unit testing (no need to write a CheckEquals overload for every enum we’ve ever defined). And, of course, Delphi doesn’t have ReSharper. ‘Nuff said.

But I cut my Windows-programming teeth on Delphi (after banging my head against VB). Asked my comp sci department for a copy of Delphi instead of a cash scholarship, way back in ’95. Used it for my senior honors project. Knew it inside and out. Took at least one Delphi aptitude test where I was correcting the questions. Somehow I became one of the first Delphi conference bloggers. And Delphi has gotten me more than one job, and it got me where I am, working with a terrific team.

I’d really like to keep a copy of Delphi around, just to play around with for hobby projects. It’d be great to play with the new stuff in Delphi 2010. And to play with Prism. And, hell, just to show my appreciation for the product, by paying for a license now that my company won’t need to anymore.

But if it’s just for me, I can’t justify the price they want me to pay.

If it was $200, I’d find the money; Delphi is totally worth that. I might even be able to stretch to $300, especially if it included both Win32 and Prism. But $400, for an upgrade that’ll just need upgrading again in another year, and for just one platform?

At $400, I walk away. With regret.

The last version of Delphi that I own is Delphi 4 — after that, I was always able to convince my employer to buy it for me (or they already had it). So after December, when the “upgrade from any prior version” offer expires and even the painful $400 price-point goes through the roof, I’m probably going to be out of the Delphi business permanently.

And so we come to (presumably) unintended consequences. By setting their prices so high, the Delphi sales team is ensuring that, once I’m no longer paid to use Delphi, I’ll no longer be able to use Delphi. Which means I won’t be able to stay current, and a couple of years from now, I probably won’t be able to get another Delphi job even if I can find one.

I suppose they need to run their business as they see fit. They have every right to set prices high enough to drive away their longtime customers, if that’s what they feel they need to do.

But I’m becoming painfully aware of what a lot of people have been saying for years: Delphi’s pricing is downright hostile to the hobbyist. They’re aiming strictly at the enterprise, and screw anyone else.

And that’s sad, because it’s the Inprise attitude, and the ALM attitude, all over again. They’ve got this “if we raise the price tag, we’ll get more money, and the money’s all that matters” mentality. It apparently never occurred to them that sometimes, lowering the price tag can multiply your income (and your customer base). Would that hold for Delphi? A lot of community members have been pretty vocal in thinking it would, and I’m becoming a believer.

But the sales department, so far, has stuck its fingers in its ears and pretended not to hear. And the dev team either can’t, or won’t, pressure them to change their minds.

I totally admire what the dev team has been doing lately. They’ve been making Delphi, well, Delphi again. They’ve been doing some serious moving and shaking. D2010 sounds sweet, and I’d love to play around with it.

But unless they fire the Borland/Inprise sales team, and hire some CodeGear folks, my Delphi days are numbered.

I’ve always hated change. Life will go on, of course. But still, I’ll miss Delphi.

Upcoming Omaha developer-ish conferences: BarCamp and HDC

BarCamp Omaha is October 2-3 (Friday evening to Saturday). I’ve never been to a BarCamp, but it promises to be intriguing — the session schedule is planned on a whiteboard Saturday morning, and anyone who wants to run a session, can. It’s not just a tech conference; they say the major tracks will be Tech, Creative, and Entrepreneurship.

I understand that BarCamps are usually free, but this one costs $5. Still, (a) that’s cheap (heck, I thought the HDC was cheap at $200 — but then again, I’m not paying for the HDC out of my own pocket) and (b) you get the full conference experience for that $5: free T-shirt (great, another one to go into the drawer), free breakfast, free lunch, and free pop and snacks all day. Free food is the primary reason for going to a conference, so it should feel just like home.

I’m toying with the idea of speaking at BarCamp (haven’t really decided yet). Their FAQ says the time slots are only 30 minutes, and I’m wondering if I should take a stab at a 31 Minutes of ReSharper. It’d take some serious editing, mind you, given that my original material was 31 days.

The Heartland Developers Conference (HDC), a Microsoft-themed conference here in Omaha, is October 15-16. I guess I don’t really need to hype it, since it’s 100% sold out this year (I wonder how long until they need to start spilling over into the first floor of the Qwest Center?). But I’m looking forward to it just like every year, because the second-day breakfast always has all the free bacon you can eat.

Oh, and they always have some pretty awesome sessions, too. A couple years ago they had Scott Guthrie from Microsoft do one of the keynotes, if that gives you any idea (at the time, he was Microsoft’s General Manager in charge of the CLR, ASP.NET, Silverlight, WPF, and IIS, among other things). They get good people and a lot of interesting content. I can pretty much register without even looking at the session list, and know that every timeslot will have a session that’s well worth my while — and even with the economy the way it is, my boss was happy to pay all our conference registrations. There are benefits to making something cheap. (Delphi sales department, are you listening?)

Geek quote of the day: Git and Home Depot

Git is intimidating. It’s a distributed revision-control system, so it’d work online or off, and it’s got tons of cool toys (like git-bisect to automatically figure out which commit introduced a bug). But good luck figuring out which of the umpteen zillion commands you actually need to get something done. (I cheat — I IM my friend Sam and say, “Help?”)

Git has everything from fine-grained commands to handle a tiny part of a single commit, up through high-level commands that mow your lawn and make Julienne fries, and I have no idea how to tell which is which. Like I said, intimidating. Git has been described as not so much a revision-control system, but rather as a toolkit you can use to build your own revision-control system that works exactly the way you want it to. Which is kind of like writing your own lexer, parser, keyhole optimizer, runtime library, memory allocator, JIT compiler, and IDE, and designing custom hardware while you’re at it, and mining the silicon yourself, so you can write a programming language that works exactly the way you want it to.

And it doesn’t help that Git’s Windows support has been very slow in coming, though apparently now it’s mostly as good as on other platforms.

Yesterday I was working on a toy project that might amount to something someday, but that I was more likely to lose interest in after a few days. And I wanted revision control for it (I like diffs). But it didn’t feel worth creating a Subversion repository for something potentially throwaway. Git stores your repository right there in your working copy, which felt like a good fit. So I finally installed msysgit, and promptly found that it’s got some awesome features. I was skeptical of the index when I first heard about it, but it’s actually very cool, especially through the GUI — you can commit just certain lines from a file! (Not sure how you run the unit tests on them, though.) But it’s also got some stuff that truly sucks: the people who wrote the Git GUI have never heard of window resizing, word-wrapping, or context menus, and the terminology is deliberately confusing. If I can’t figure out how to revert a file, there’s a problem somewhere.
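For the curious, the whole no-ceremony setup fits in a handful of commands. This is just a sketch (it assumes git is installed and on your PATH); the GUI’s line-by-line commit corresponds roughly to “git add -p” on the command line:

```shell
# Start revision control right inside the project directory -- no server,
# no separate repository to create.
mkdir toy-project
cd toy-project
git init

# Normally configured once, globally; needed before the first commit.
git config user.name "Hobbyist"
git config user.email "hobbyist@example.com"

echo "first draft" > notes.txt
git add notes.txt
git commit -m "Initial commit"

# Later: see what changed since the last commit (I like diffs).
echo "second draft" >> notes.txt
git diff

# Stage and commit. To commit only *some* of the changes in a file
# (the command-line cousin of the GUI's line-by-line commit), you'd
# run "git add -p" instead and pick hunks interactively.
git add notes.txt
git commit -m "Second draft"
```

No repository directory anywhere else on disk; delete toy-project and the whole history goes with it, which is exactly right for something potentially throwaway.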

While reading around, I happened across a mention of Mercurial, another distributed revision-control system, and I started sniffing around for comparisons between Mercurial and Git.

I hardly ever laugh out loud at things I read, but this one had me laughing hard enough that Jennie, from another room, called, “What’s so funny?”

From Use Mercurial, you Git!:

I ordered a version control system, not a toolkit for building one! If I’d wanted building blocks for rolling my own, I’d have gone to Home Depot and bought a 1 and a 0.

Fixing MenuStrip, part 2: Visible vs. Available, and a repro case

Not all MenuStrips will exhibit the scrolling bug.

In a nutshell: if you ever hide any menu items, you’re living dangerously.

Visible and Available

First, an aside on how you go about hiding menu items.

ToolStripMenuItem has two visibility properties: Visible and Available. They both do the same thing, except when they don’t.

To be more specific, both their setters do the same thing. So if you want to hide a menu item, you can either set Visible to false, or you can set Available to false. Same thing. So why are there two properties for the same thing?

The difference comes when you want to read the properties, to find out whether the item is already hidden. The Visible getter does not do what you want. Never use it. Reading Visible does not tell you “did I set Visible to true?” No, that’s what Available is for. (Obviously.) No, reading Visible tells you “is the menu currently popped up on the screen?” Which has a usefulness score of somewhere less than or equal to toe fungus.

Summary: always use Available. Never use Visible. The one exception is the form designer — Available isn’t shown in the Property Grid, so there you’re stuck with Visible.
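You can see the trap without even showing a form. Here’s a tiny hypothetical console sketch (not code from the series): an item that isn’t currently being displayed always reads back Visible = false, no matter what you assigned.

```csharp
using System;
using System.Windows.Forms;

class VisibleVsAvailable
{
    static void Main()
    {
        ToolStripMenuItem item = new ToolStripMenuItem("&Import");
        item.Available = true;  // "show this item when its menu drops down"

        Console.WriteLine(item.Available);  // True  -- what you set
        Console.WriteLine(item.Visible);    // False -- the menu isn't
                                            // popped up on screen right now
    }
}
```

Same setter, wildly different getters.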

The repro case(s)

Here’s the repro case I’ve been working with. There may be simpler repro cases, but this one is expressive enough to be interesting. Here are the contents of my File menu:

  • New
  • Open
  • Import (hidden)
  • Export (hidden)
  • Exit

If you write some simple code to create a MenuStrip and add those items, you’ll see the bug. If you use the form designer to do the same thing, you won’t see the bug.

That’s because the designer-generated code will instantiate all the menu items, then add them to the File menu’s DropDownItems collection, and then set all the menu items’ properties, including Visible. If you do that, with these five items, no bug. If you set Available to false before adding the items to the menu, you see the bug. Don’t ask me, I didn’t write the buggy code.

However, if you only hide one of the menu items, not both, then you’ll still see the bug even with the designer. In fact, the menu will scroll so far you’ll only see the last item, followed by a whole bunch of white space.

If you show and hide menu items at runtime, you’ll never get back to the “bug with both hidden” state, at least not as far as I’ve been able to tell — that only happens if you hide them before you add them to the menu, and then never touch their Available property again.

But anytime you’re only hiding one of the two — hiding Import and showing Export, or vice versa — you’ve got the bug. Design-time, runtime, whatever.

Simple repro

Create a new project and paste the following code into its Main method.

C#:

var Form = new Form();
var Menu = new MenuStrip();
var File = new ToolStripMenuItem("&File");
File.DropDownItems.AddRange(new ToolStripItem[] {
    new ToolStripMenuItem("&New"),
    new ToolStripMenuItem("&Open"),
    new ToolStripMenuItem("&Import") {Available = false},
    new ToolStripMenuItem("&Export") {Available = false},
    new ToolStripMenuItem("E&xit")});
Menu.Items.Add(File);
Form.Controls.Add(Menu);
Application.Run(Form);

Delphi Prism:

var Form := new Form();
var Menu := new MenuStrip();
var File := new ToolStripMenuItem("&File");
File.DropDownItems.AddRange([
  new ToolStripMenuItem("&New"),
  new ToolStripMenuItem("&Open"),
  new ToolStripMenuItem("&Import", Available := False),
  new ToolStripMenuItem("&Export", Available := False),
  new ToolStripMenuItem("E&xit")]);
Menu.Items.Add(File);
Form.Controls.Add(Menu);
Application.Run(Form);

Feel free to play around with these. I did a test project with a couple of checkboxes so I could toggle the menu items’ visibility at runtime. It was kind of fun to poke at, but I still have no idea how they got it to screw up the way it does.

What’s next?

Tune in next time for the Microsoft-sanctioned workaround for their bug, and a bit of stumbling since it’s a pretty awful workaround. Hang in there, it’ll get better.

This post is part of the Fixing MenuStrip series.

Fixing MenuStrip, part 1: Introduction and screenshots

In .NET 2.0, Microsoft added MenuStrip and ContextMenuStrip controls to replace the old MainMenu and ContextMenu. The new ones support images next to menu items, edit boxes inside menus, etc. Fairly cool.

However, they’ve got a major bug that Microsoft doesn’t intend to fix. See the screenshot at right. Why the blank space at the bottom of the menu? And more importantly, what happened to the first menu item (the highlighted one) — why is it mostly cut off?

It’s easy to showcase this bug. For that matter, it’s easy to fix. (Why Microsoft can’t figure it out, I don’t know.)

In this “Fixing MenuStrip” series, I’ll demonstrate the problem, and then show how to make a very specific fix for a very specific scenario. From there, I’ll work my way up to progressively more general solutions. By the time I’m done, you’ll be able to fix every menu on the form (including context menus) with one method call.

The coolest thing is that there’s no need to descend from MenuStrip to fix it. You’ll be able to add a line of code saying “And by the way, this menu should support the keyboard,” and it will just start working. It’s basically declarative programming: you declare your intention — “I want my MenuStrip to work” — and somebody else takes over and figures out how to do it. Multicast delegates and lambda expressions FTW!

Along the way, I’ll provide code snippets in both C# and Delphi Prism (aka RemObjects Chrome, aka RemObjects Oxygene — somebody let me know when they stop changing the name). If people ask, I could probably cook up complete project files to download, though I’d rather spend my time writing about the interesting technical stuff. (And besides, my personal computer only has the command-line compiler for Prism, not the IDE, so project files would be a bit of a challenge — the project-file format appears to be undocumented.)

All of my Prism examples will work on .NET 2.0, since my main computer is a Windows 2000 laptop, and .NET 3.x requires XP. My C# examples, especially the later ones, will be for C# 3.0 since I want to use lambdas — I’ll just have to borrow Jennie’s computer when I want to write those articles.

Screenshots of the problem

Let’s start by showcasing the problem. Here are a couple of screenshots from a little sample app I wrote. In both, I’ve got a menu with three visible menu items: “New”, “Open”, and “Exit”. In the left picture, I used the mouse to highlight the “Exit” menu item. In the right one, I used the keyboard’s arrow keys to do the same thing.

The two should look identical. Selecting something with the mouse, selecting something with the keyboard — there’s no reason for those to work differently.

And even apart from that… I mean, just look at it. What the hell? The menu apparently just scrolls its contents up, even though everything fits without scrolling and there are no scroll arrows. It’s obvious that everything fits, because the left screenshot looks great — the selection rectangle isn’t cut off, and the margins are fine.

In this example, everything scrolls by 17 pixels, but that will vary in practice. I’ve seen one — a menu with around 10 or 15 menu items — that scrolled so far that you could only see the bottom half of the last menu item, hanging out there at the top of the menu. The rest of the menu was just blank white space.

So how serious is it?

In their response to the bug report, Microsoft notes that the issue is purely visual, and there’s no functionality loss. You can use the arrow keys to scroll the menu contents back into view.

I contend that that’s a poor excuse for making a crappy design. Little things matter. And Microsoft’s slipshod coding (and testing) makes my programs look like jokes, and me look like an amateur.

And besides, why put up with crappy code, when you can fix it? Tune in next time as I show code that reproduces the problem, and start working on a fix.

This post is part of the Fixing MenuStrip series.

More SmartInspect license WTF

Still reading the SmartInspect license agreement.

Now, it’s normal for a license agreement to, in effect, say “We don’t promise the software actually works”. It’s frightening when you think about it, but it’s become standard operating procedure.

Once again, the overachievers in Gurock’s legal department have taken this concept to dizzying new heights. Either that, or their site has been hacked by an angry mob with torches and pitchforks. Is it seriously possible that a company would publicly hate on their own product this badly?

6.1 When using the licensed programs, in order to avoid damage that may be caused to other programs or stored data being used simultaneously, the Customers shall in good time before using/utilising the licensed programs back up the programs and data involved, and not use programs of this kind in actual operation before he has verified the flawless quality of these programs by a test routine.

(emphasis mine)

That’s copied straight out of the actual SmartInspect license agreement on their actual Web site.

So let me see if I understand this. I am contractually obligated to assume their software is horribly broken, until and unless I am able to form and execute a test plan to prove otherwise. In other words, their entire Quality Assurance department consists of their paying customers.

(I have a hard time believing that that’s actually the case, but that’s certainly the message they’re going to great pains to send.)

Does anyone happen to use CodeSite, and have a copy of their license agreement that they could send me? If so, please get in touch. If Raize actually shows some confidence in their own product, I’d be sorely tempted to return SmartInspect and go with the slightly more expensive, but presumably tested before shipping, competitor.

SmartInspect and the End Usufructuary License Agreement

We just purchased a couple of licenses for Gurock Software’s SmartInspect. I’ve gotten as far as the license agreement.

Their license agreement aspires to dizzying new heights in legalese. Take this sentence, from the third paragraph of Section 1 (Subject-matter of the conditions):

The downloading or delivery of the licensed programs and the granting of usufructuary rights to them shall be explicitly tied to compliance with these General Business and Licensing Conditions.

(emphasis mine)


“Usufructuary”? I was pretty sure they’d just made that word up. But no: turns out it’s an actual word. According to Google’s snippet for the World Wide Words site (but not, oddly enough, according to the World Wide Words site itself):

‘Usufructuary’ is a technical term in law for a person who has the right to enjoy the products of property he does not own.

So, you don’t own the software, but you can still use it and gain the benefit of it. Familiar concept, grotesque word.

As Kyle, one of my co-workers, pointed out, there’s a slightly less obtuse word for that: a user. Perhaps someone should suggest that to Gurock’s lawyers…

DGrok 0.8.1: multithreading, default options, GPL

Version 0.8.1 of the DGrok Delphi parser and tools is now available for download. Download DGrok 0.8.1 here.

What is DGrok?

DGrok is a set of tools for parsing Delphi source code and telling you stuff about it. Read more about it on the DGrok project page.

What’s new in 0.8.1?

Quick summary of what’s new (more information below):

  • Now GPL-licensed.
  • Reasonable defaults for {$IFOPT}.
  • Multithreaded parser.
  • Less memory usage when parsing twice.
  • Copy tree results to clipboard.

Now GPL-licensed

Prior versions of DGrok used NUnitLite for their unit tests, and therefore had to ship under the same license as NUnitLite: the OSL (Open Software License). I’ve never been happy about that. The world really doesn’t need yet another tiny variation on the GPL, especially when that variation isn’t GPL-compatible.

So for this release, I dumped NUnitLite and switched to NUnit. That let me drop the OSL and switch to an industry-standard open-source license, the GPL (GNU General Public License).

There are a few downsides. NUnit has tremendous overhead; on my laptop, it takes about fifteen seconds just to start the NUnit console runner and load the tests, plus the time to run them (which is also slower than under NUnitLite). It also adds an extra 321 KB to the download size. And now I have to clutter my test code with a bunch of stupid [Test] attributes.

If I think that’s a good trade, then apparently the OSL annoyed me more than I thought.

Reasonable defaults for {$IFOPT}

An annoyance in previous versions (even to me) was that, if you were parsing code that contained things like {$IFOPT C+}, you would have to switch to DGrok’s “Options” page to tell it which compiler settings it should consider to be “on” and which are “off”. If it hit an {$IFOPT} you hadn’t told it about, it would fail to parse that source file.

In 0.8.1, that’s no longer the case. DGrok knows about the default compiler options in a clean install of Delphi, and by default, it assumes you’re using those options. You can still use the Options page to override those settings one by one (e.g. if you compile with range checking on, and want DGrok to parse code inside your {$IFOPT R+} sections), but it’s no longer necessary to do it for every single option.
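For anyone who hasn’t bumped into {$IFOPT}: it’s a conditional-compilation directive that tests the current state of a compiler switch. A made-up example, using R (range checking):

```pascal
procedure ValidateIndex(Index: Integer);
begin
  {$IFOPT R+}
  // This line is only compiled when range checking is on. Since DGrok's
  // default for R is "off" (matching a clean Delphi install), DGrok will
  // skip it unless you override R on the Options page.
  Writeln('Compiled with range checking on');
  {$ENDIF}
end;
```

A parser that doesn’t know whether R is on or off literally can’t decide which text to parse, which is why DGrok has to care at all.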

If anyone’s curious, here are the settings DGrok uses. I just opened Delphi (actually Turbo Delphi) and pressed Ctrl+O Ctrl+O, which prefixes the current file with all the compiler directives currently in effect. Then I did a bit of testing on the odd cases, like A and Z (which can have numbers in addition to + or -, and which do have numbers when inserted by Ctrl+O Ctrl+O). Here’s what I wound up with:

B-, C+, D+, E-, F-, G+, H+, I+, J-, K-, L+, M-, N+, O+, P+, Q-, R-, S-, T-, U-, V+, W-, X+, Y+, Z-

You may notice that A isn’t listed. A is an oddball case, in that it’s treated as neither on nor off. That is, {$IFOPT A+} and {$IFOPT A-} will both evaluate as “false”. There’s a compelling reason for that: it’s what Delphi does under the default settings! So don’t blame me; I’m just being compatible with the real Delphi compiler.

Multithreaded parser

When you use the DGrok demo app to parse a source tree, it now spins up multiple threads to do the parsing. There’s a setting on the “Options” tab to control how many threads you want it to use.

I actually implemented this a few months back, and since then, it’s occurred to me that I was making the problem too complicated — life would be simpler if I’d just used the ThreadPool, and queued a work item for each file I wanted to parse. Oh well; what’s there seems to work. I’ll probably do the thread-pool thing in the future, though.
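For the record, the thread-pool version I’m picturing would be something like this sketch. It is not the code that’s actually in 0.8.1, and fileNames and ParseFile are stand-ins for the real thing:

```csharp
using System.Collections.Generic;
using System.Threading;

void ParseAll(IList<string> fileNames)
{
    // Queue one work item per file, then block until the last one finishes.
    int pending = fileNames.Count;
    if (pending == 0)
        return;  // nothing to parse

    using (ManualResetEvent allDone = new ManualResetEvent(false))
    {
        foreach (string fileName in fileNames)
        {
            string name = fileName;  // copy; don't capture the loop variable
            ThreadPool.QueueUserWorkItem(delegate
            {
                ParseFile(name);
                if (Interlocked.Decrement(ref pending) == 0)
                    allDone.Set();  // that was the last file
            });
        }
        allDone.WaitOne();  // block until every file has been parsed
    }
}
```

The pool decides how many threads to run, so the “how many threads?” option would mostly go away.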

Less memory usage when parsing twice

I’m embarrassed by this one. In previous versions, if you clicked “Parse” more than once in the same program run (e.g. if you were tweaking the “Options” to deal with {$IFOPT}s), DGrok would temporarily take twice as much memory as it needed to. That’s because I built the new list, and then stored it in the top-level variable… so the old list (stored in that same variable) was still “live” as far as the GC knew, up until the point when I overwrote its reference at the very end.

It’s better now — it nulls out the reference before it starts parsing, so the old list gets GCed as soon as the new parse run starts allocating gobs of memory. So if you regularly parse a million-line code base (like I do), you’ll notice significantly less thrashing.
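In code, the fix boils down to ordering. The field and method names here are hypothetical, not DGrok’s actual ones:

```csharp
using System.Collections.Generic;

private IList<NamedContent> _parseResults;

public void ReparseAll(IEnumerable<string> fileNames)
{
    // Drop the old results *before* the new run starts allocating, so the
    // GC is free to collect them as soon as memory gets tight...
    _parseResults = null;

    List<NamedContent> newResults = new List<NamedContent>();
    foreach (string fileName in fileNames)
        newResults.Add(ParseFile(fileName));

    // ...and only publish the new list once it's complete.
    _parseResults = newResults;
}
```

One extra line, half the peak memory.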

Copy tree results to clipboard

Pretty simple. There’s a “Copy” button under the tree that shows the parse results. This is mainly useful when you’ve used DGrok to search for, say, all the with statements in your code, and now want to copy that list into Excel for easy sorting and printing.

Happy parsing!

What does it mean when CodeGear “announces” Delphi 2009?

Just wondering. What exactly is CodeGear “announcing” today? And how is it different from what they’ve been doing for months now?

They’re not announcing what the product will be; they’ve already done that. Sure, they’re filling in a few details, but you can’t really announce something that everybody already knows about. (Mind you, I’m not complaining about the early blogging — I love transparency. But it doesn’t leave much for the marketing droids to, you know, announce.)

And they’re not announcing that the product is feature-complete, or ready to ship, or anything. It isn’t yet. They’re only taking pre-orders. No mention is made of when electronic downloads will begin… or of how much longer after that it’s going to take before they start shipping physical product… or of how much longer after that it’s going to take for customers to start actually getting their copies.

So what, pray tell, is all the fuss about?

Low-lock multithreading

I ran across a great article about how to do multithreading without (or with very few) locks.

If you’ve done concurrency, you already know about locks. You probably also know they’re expensive, and you’ve probably wondered how to squeeze out more performance by avoiding locks. This article tells you how to do it safely — and, more importantly, when to avoid it (which is most of the time).

Memory Models: Understand the Impact of Low-Lock Techniques in Multithreaded Apps

Warning: this is hardcore geek stuff. I think I understood more than half of it.

Here’s the executive summary: Low-lock multithreading is hard. If you don’t understand everything in the article up to and including a given technique, don’t use it. Processor caches and read and write reordering make it more complicated than you thought it was. (Don’t take my word for it — read the first half of the article, before he even starts outlining the first technique.)
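To give a flavor of the easy end of the spectrum, here’s about the only low-lock technique I’d use without rereading the article first: the Interlocked class, which wraps single atomic hardware operations. (A sketch of mine, not code from the article.)

```csharp
using System.Threading;

class HitCounter
{
    private int count;

    public void Record()
    {
        // An atomic read-modify-write; safe from any number of threads,
        // with no lock anywhere.
        Interlocked.Increment(ref count);
    }

    public int Current
    {
        get
        {
            // CompareExchange with identical old/new values is the standard
            // trick for an atomic, non-stale read of the current value.
            return Interlocked.CompareExchange(ref count, 0, 0);
        }
    }
}
```

Anything fancier than that (double-checked locking, lock-free lists) is where the memory-model subtleties in the article start to bite.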

He didn’t say it in the article, but I’ll add my two cents: Never optimize anything (including adding low-lock techniques) until you’ve run a profiler and proven that you know where the bottleneck is. Any optimization without a profile is premature optimization.

Always remember M. A. Jackson’s Two Rules of Optimization:

  • Rule 1: Don’t do it.
  • Rule 2 (for experts only): Don’t do it yet.