Joe White’s Blog

Life, .NET, and Cats


Archive for April, 2007

Delphi editor tip: Find from Here without changing options

Saturday, April 28th, 2007

If you’re used to searching from the top of the file, here’s a way to do the occasional “search from here” in the Delphi IDE, without messing up the options for next time.

In Delphi’s “Find” dialog, we almost always leave the Origin option set at “Entire scope”, so that whenever we do a search, it starts at the top of the file. Occasionally, someone will change that to “From cursor”, and it always throws me off the next time I try to do a search. I usually have to re-try my search a couple of times before I figure out that’s what the problem is.

But this doesn’t have to be a problem, because it’s easy to search from the current cursor position without ever changing the “Origin” option. Here’s all it takes:

  1. Highlight some text. Select something in the editor. This works especially well if you highlight the text you’re going to be searching for.
  2. Open the Find dialog (Ctrl+F). If you highlighted the text you’ll be searching for, you can just click OK. Otherwise, either type the search text, or press the Down arrow to re-select your most recent search text.
  3. Click OK (or press Enter). Leave the “Origin” option set to “Entire scope”; you won’t need to change it.

When you click “OK”, the IDE will see that you have text selected, and will search within that selection, instead of searching from the top of the file. So effectively, you’re searching from the current cursor position; but you’re doing it without changing the “Origin” option, so “Origin” will still be set to “Entire scope” the next time you search.

(Side note: incremental search [Ctrl+E] also searches from the current cursor position. But if your cursor is already at the thing you want to find the next occurrence of, selecting the word and hitting Ctrl+F Enter is better.)

(Another tip: Ctrl+K T selects the current word.)

Avoiding sentinel values

Thursday, April 26th, 2007

A “sentinel” is any value that means something special. For example, Math.SameValue takes an epsilon parameter, but zero means “pick an epsilon for me”. So a value of zero doesn’t actually mean zero.
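
Just to make that concrete, here's a tiny (hypothetical) example of the convention in action; the variable names are mine:

program SentinelDemo;
{$APPTYPE CONSOLE}

uses
  Math;

var
  ComputedTotal, ExpectedTotal: Double;
begin
  ComputedTotal := 0.1 + 0.2;
  ExpectedTotal := 0.3;
  // Epsilon = 0 is the sentinel: SameValue picks an epsilon based on the
  // magnitudes of its arguments, so a literal zero can never mean
  // "compare exactly".
  if SameValue(ComputedTotal, ExpectedTotal, 0) then
    Writeln('Close enough');
end.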

Sentinel values, I have come to realize, are a code smell. They’re saying, “here’s a parameter that has more than one responsibility.”

Last night, I was working on epsilons in DUnitAssertions. TValue, my universal value type, can compare two numbers with an epsilon, and it follows the SameValue convention of “zero means ‘pick for me’”. But I wanted to add another option:

Specify.That(Foo, Should.Equal(45.0).Exactly); // no epsilon

Really, this should pass zero as its epsilon, but that’s already spoken for. So I picked another sentinel (NaN) to mean “exact comparison”, and started writing the tests and making them pass.

It actually took a few minutes before I realized how ridiculous this was. I mean, the body of the method had three completely different code paths, depending on the sentinel values and nothing else. Hello? Polymorphism!

So I’m replacing the epsilon with a comparer. I’ve made an IValueComparer interface with a CompareExtendeds method, and it’s going to have several different implementations. One is the default, which picks an epsilon for you (and never takes an epsilon parameter at all). One is the epsilon-based comparer, which takes an epsilon in its constructor. I probably don’t even need another class for the “exact” comparer, since an epsilon of (really and truly) zero will serve quite nicely. And the sentinel logic will all go away.
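
Here's a rough sketch of the direction I'm heading; only the names IValueComparer and CompareExtendeds are real, and everything else (the class names, the exact comparison rule) is just assumed for illustration:

type
  IValueComparer = interface
    function CompareExtendeds(const Expected, Actual: Extended): Boolean;
  end;

  // The default comparer picks an epsilon for you; note that no epsilon
  // parameter appears anywhere in its interface.
  TDefaultValueComparer = class(TInterfacedObject, IValueComparer)
  public
    function CompareExtendeds(const Expected, Actual: Extended): Boolean;
  end;

  // The epsilon-based comparer takes its epsilon in the constructor.
  // Construct it with an epsilon of (really and truly) zero and it does
  // exact comparisons, so no separate "exact" class is needed.
  TEpsilonValueComparer = class(TInterfacedObject, IValueComparer)
  private
    FEpsilon: Extended;
  public
    constructor Create(const AEpsilon: Extended);
    function CompareExtendeds(const Expected, Actual: Extended): Boolean;
  end;

function TDefaultValueComparer.CompareExtendeds(
  const Expected, Actual: Extended): Boolean;
begin
  // Zero epsilon means "pick one for me" (needs Math in the uses clause).
  Result := Math.SameValue(Expected, Actual);
end;

constructor TEpsilonValueComparer.Create(const AEpsilon: Extended);
begin
  inherited Create;
  FEpsilon := AEpsilon;
end;

function TEpsilonValueComparer.CompareExtendeds(
  const Expected, Actual: Extended): Boolean;
begin
  Result := Abs(Expected - Actual) <= FEpsilon;
end;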

I’ll even put another method on IValueComparer for comparing strings, and have a class that does case-insensitive comparisons. I’d been wondering how I was going to plumb the case-sensitivity option into the comparison, and now I know. (Since no single call to Specify.That will compare both a string and a float, this doesn’t pose any duplicate-inheritance-hierarchy problems.) And this will address that nagging doubt I’ve felt all along about passing a numeric epsilon even when I’m comparing strings. That’ll go away entirely; I’ll just be passing a comparer.

Now that I think about it, this is exactly the Strategy pattern. I’m refactoring to patterns and not even realizing it. Cool!

One thing that is nice about sentinel values — as opposed to, say, making several well-named methods — is that sentinel values are easy to plumb through several layers of code. But a strategy object has the same benefit, and it’s more expressive.

So when you see a sentinel value, ask yourself whether a strategy would be better. You might be really pleased with the result.

Footnote: Almost five years ago, I wrote a huge YAGNI at work. It was a function that divided one number by another. That’s it, really. But it took three, count ‘em, three optional parameters (that’s five total parameters to divide two numbers), and those three optional parameters were overloaded to bursting with sentinel values. I pulled out all the stops: not just positive and negative infinity, but NaN as well, all had special meanings. Meanings that you could kind of puzzle out, sure, but they were sentinels nonetheless. But it was worth it (so I thought at the time) because this thing was the ultimate in flexibility. It could clamp its results to a range, it could throw exceptions or not, it could return special values when you divided by zero, it even made Julienne fries.

How many lines of code does it take to divide two numbers? Forty-nine. All but nine lines of that were just there to check for sentinels. Those other nine lines were the ones that set the function’s return value.

I looked today. Total number of places that actually called this function? One. Another utility function in the same unit, which I had made less readable when I introduced my YAGNI to it five years ago.

So I did some spring cleaning. And that unit is shorter now.

Managed Spy: a read-write PropertyGrid for other processes

Wednesday, April 25th, 2007

There’s a debugging tool that lets you inspect .NET WinForms controls from other processes, in a PropertyGrid. And not only can you inspect controls’ properties, you can change them, too. And log which events get fired in that app. And it’s got an API you can use for writing automated GUI-testing tools.

Wow.

The tool is called Managed Spy, and it was the topic of an article in MSDN Magazine, which means that not only does it come with full source code, it also comes with an article that explains how it works. (It also comes with a mile-long Microsoft license agreement, which basically amounts to “you’re on your own, don’t sue us, and don’t open-source our stuff”.)

I ran it, and it couldn’t find any .NET top-level windows running. But from skimming the article, I knew that it excluded its own process ID from the list. So I launched another copy of the app, and the second one could see the first one just fine. (I love skating around design decisions.)

It’s got a few limitations. It can only see types that are serializable; so, for example, you can’t inspect (or modify) the nodes in the treeview, because they can’t be serialized to binary. But most of the properties, it can view just fine. It’s certainly a fun toy to play with (hey, I can see the sub-controls inside a PropertyGrid!).

I spent a good five or ten minutes just changing controls’ background colors and border styles. Is this great, or what? Entire applications become big coloring books!

Oh, and it might also be useful for testing someday.

What I find really interesting is that they’re injecting a managed DLL into another process’s process space, and it works just fine. The injected managed DLL has no problem talking to the CLR inside that process. That just blows my mind. And it comes with full source code, and its own API. Sweet.

One nitpick: the article’s author refers to using the Managed Spy API to “unit test” applications. I’m sorry, but if you’re testing at the GUI level, that is not a unit test. It’s a GUI test, or an integration test; and it still falls under the category of “automated test”. But a focused test for a single class by itself, it ain’t.

Still. Sweet toy tool.

Via an article from Stefan Cruysberghs called Freeware and open source developer tools for Delphi and .NET. (In turn via DelphiFeeds.)

Poor man’s Find References

Tuesday, April 24th, 2007

gabr wrote a post yesterday titled When changing semantics, make sure that existing code will break: basically another post about leaning on the compiler. It reminded me of a trick I often use in Delphi, which I thought I’d pass along.

Delphi’s refactorings don’t work. At least, not for us. Delphi’s refactorings only seem to pay attention to files that have been explicitly added to the project, which in our case is about six out of thousands. We don’t add every single file to the project, because we have thirty-something different projects that share a great deal of code, and it would be impractical for us to manually keep track of which projects use which files. The compiler can keep track of it for us just fine. The IDE’s refactoring tools, of course, don’t actually use the compiler, so they’re busted.

(Yes, we should use packages. It’s not feasible at the moment, for historical reasons that we are gradually eliminating. But I digress.)

Delphi’s Find References doesn’t do squat for us. So when I must find references, and a grep isn’t good enough (because the method name is too common of a word), I like to do a “poor man’s Find References”. There are two variations on the theme:

  • Rename the method. If it’s called Foo, rename it to Foo2. Consider making the new name something that will be unique enough to grep on.

  • Add an extra parameter. I like to add a parameter at the end called X: Boolean. This one is handy if you’re planning to change the parameter list anyway, and are doing a Find References to see where all you’d have to change.

Once you’ve made the change, you start making everything compile. Everywhere the compile breaks, you update the call to match the new signature. Once you can build everything with no errors, your Find References is done.
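
For example (illustrative names only), the add-a-parameter flavor looks something like this:

// Before:
procedure TWidgetCalculator.RecalculateTotals(ARecurse: Boolean);

// After: the bogus X parameter breaks every call site at compile time.
// (Don't give X a default value, or existing calls would keep compiling
// and the trick would find nothing.)
procedure TWidgetCalculator.RecalculateTotals(ARecurse: Boolean; X: Boolean);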

If you did all this on a clean checkout, then all you need to do to see your Find References is to view the diffs (if you’re using Subversion, then you probably want either TortoiseSVN > Show Modifications or Tools > Svn Modifications). Or, if you chose the “rename” path, then hopefully the new name is unique, and you can grep for it.

Then you probably revert all of those messy changes you just made. But (hopefully) you learned what you set out to learn.

In occasional cases, Poor Man’s Find References is actually better than a working IDE Find References would be. The fact that you actually touch every call site can make it easier to do viral reference searching, i.e. tracing it back multiple levels where you need to. A standard Find References doesn’t give you a good way to branch out — e.g., you can’t say “okay, take these three places it’s used, and find references to all of them now”. A standard Find References is one search at a time; no branching, no backtracking to continue an earlier search. But a Poor Man’s Find References gives you all the depth (and backtracking) you need.

(Mind you, most of the time I’d rather have an IDE that can actually do a Find References.)

DUnitAssertions goes behavioral

Sunday, April 22nd, 2007

The more I look at dSpec, the more I like its syntax. Should.Equal and Should.Be.AtLeast are much better than my Tis.EqualTo and Tis.GreaterThanOrEqualTo.

And I like the behavior-driven bent: encouraging the coder to think in terms of specifications (“specify that it should do this”) rather than tests (“test to see if it does this”). I was actually leaning a bit in that direction already — that’s why I picked Expect instead of Check or Assert — but I like Specify better yet.

So I’m stealing it.

None of these are implemented yet (I still need to rip out the operator overloads and do a bit more refactoring before I can move on to this), but here’s how I see the new syntax for DUnitAssertions*:

// All specifications support messages:
Specify.That(Foo, Should.Equal(45), 'Frequency');

// All specifications support Should.Not:
Specify.That(Foo, Should.Not.Equal(45));

// Floating-point comparisons support "ToWithin":
Specify.That(Foo, Should.Equal(45.0)); // default epsilon
Specify.That(Foo, Should.Equal(45.0).ToWithin(0.00001));
Specify.That(Foo, Should.Equal(45.0).Exactly); // no epsilon

// Equality and inequality:
Specify.That(Foo, Should.Equal(45));
Specify.That(Foo, Should.Be.AtLeast(45));
Specify.That(Foo, Should.Be.AtMost(45));
Specify.That(Foo, Should.Be.From(45).To(48).Inclusive);
Specify.That(Foo, Should.Be.From(45).To(48).Exclusive);
Specify.That(Foo, Should.Be.GreaterThan(45));
Specify.That(Foo, Should.Be.LessThan(45));

// Type:
Specify.That(Foo, Should.Be.OfType(TComponent));
Specify.That(Foo, Should.DescendFrom(TComponent));
Specify.That(Foo, Should.Implement(IBar));

// String:
Specify.That(Foo, Should.Contain('Bar'));
Specify.That(Foo, Should.StartWith('Bar'));
Specify.That(Foo, Should.EndWith('Bar'));

// All string comparisons support IgnoringCase:
Specify.That(Foo, Should.Equal('Bar').IgnoringCase);
Specify.That(Foo, Should.Contain('Bar').IgnoringCase);
Specify.That(Foo, Should.StartWith('Bar').IgnoringCase);
Specify.That(Foo, Should.EndWith('Bar').IgnoringCase);

// Numeric:
Specify.That(Foo, Should.Be.Composite);
Specify.That(Foo, Should.Be.Even);
Specify.That(Foo, Should.Be.Negative);
Specify.That(Foo, Should.Be.Odd);
Specify.That(Foo, Should.Be.Positive);
Specify.That(Foo, Should.Be.Prime);

// Specific values:
Specify.That(Foo, Should.Be.False);
Specify.That(Foo, Should.Be.Nil);
Specify.That(Foo, Should.Be.True);
Specify.That(Foo, Should.Be.Zero);

* If it’s going to be for behavior-driven development instead of test-driven development, then it really needs a new name now…

Delphi IDE: Folding and Next Error from the keyboard

Saturday, April 21st, 2007

I wish one of the CodeGear bloggers would blog about new keystrokes they’re adding to the IDE with each new version. Once I finally managed to dig up the Help for keyboard shortcuts (such as it is), I found some that I never knew about. Put these in some RSS, man!

Better yet: I wish one of the CodeGear bloggers would blog about new keystrokes they’re thinking about adding, before they actually add them, so there’s some sort of comment period. These keystrokes are atrocious! Honestly, Ctrl+Shift+K T? Think about the amount of finger travel on that one, as your left hand has to move from the farthest left it can go (to hold down Ctrl and Shift) to the farthest right it can go (to press T). Seriously, somebody needs to be publicly flogged for that one.

So here are some keystrokes you probably never knew about. Did you know you can (allegedly) control code folding from the keyboard? And that Delphi actually has a keystroke to jump to the next compiler error?

Here are the most important code-folding keystrokes (Expand All, Collapse All, and Toggle Current, the only ones I’ve ever seen anyone actually use):

  • Toggle Current: Ctrl+Shift+K T.
  • Expand All: Ctrl+Shift+K A.
  • Collapse All: They make this one painful; you have to specify what you want to collapse.
    • Collapse all classes: Ctrl+Shift+K C.
    • Collapse all methods: Ctrl+Shift+K M.
    • Collapse namespace/unit: Ctrl+Shift+K N.
    • Collapse nested procedures: Ctrl+Shift+K P.
    • Collapse regions: Ctrl+Shift+K R.

Note that none of the code-folding keystrokes seem to actually do anything at all, but those are the keys you’re supposed to have to hit, according to the documentation.

And here are the keystrokes for jumping to the next compiler error. They’re also rumored to work for Find in Files results, if you use that instead of GExperts Grep. (And I do, on rare occasions, since Delphi’s regular expression support sucks less than GExperts’, as long as you have the Delphi 6 Help files where Delphi’s goofy regex syntax was actually documented.)

  • Next error / next item in Messages window: Alt+F8.
  • Previous error / previous item in Messages window: Alt+F7.

Note that the “next / previous error” keystrokes aren’t actually documented (at least in Delphi 2006); the only way I was able to find out about them was to file an enhancement request, and then read the reason why it was closed. (Although later I did find them included in a list of interesting, and mostly undocumented, Delphi IDE keystrokes.)

The Twister award, for most useful yet most contorted keystroke, would have to go to Ctrl+Shift+K T, if it worked. But these are all pretty lousy. Alt+Function key, for something you would do repetitively? Eww. Couldn’t CodeGear take a page from ReSharper, and make the most-used keystrokes easier to type?

Using descriptive data types

Friday, April 20th, 2007

gabr just posted about using descriptive variable names, e.g. to show units: a “_kb_s” suffix if the variable is in kb/s, for example, so you can easily spot places where you’re assigning a kb/s measurement into a bits/s variable.

We’ve done one better: make descriptive data types, and lean on the compiler.

Basically, the idea is this: if you find yourself making a certain kind of mistake, then do something so the compiler will check for you. Or, better yet, make it so that you can’t even type in code that has that mistake.

In our app, we have a grid that does some heavy lifting. It’s basically a miniature spreadsheet, with heavy integration into our app’s stored data. So we do a lot of work with coordinates: rows and columns.

Now, to most of us, the words naturally go in that order: “rows and columns”. But the grid control we use tends to put the parameters in the other order: “AColumn, ARow: Integer”. This impedance mismatch led to a few subtle bugs over the years.

Another source of subtle bugs was the fact that the data we were displaying was logically a two-dimensional, zero-based array. But in the grid, that data was all in one-based coordinates, because the row and column header cells took up index 0 in the grid. So we were forever chasing bugs where we forgot to add one or subtract one.

So finally, a couple of years ago, we got fed up and wrote TRowCol. I think this was shortly after we upgraded from Delphi 6 to Delphi 2005, because that’s when the compiler gave us records with methods.

TRowCol’s name makes it clear what order the parameters go in: row first, then column. That was the first big win. The pair that first put TRowCol into the code base did not immediately update everyplace in the code that ever used rows and columns, but it wasn’t long before TRowCol dominated the field, as other pairs spread its usage.

The second big win was that, by putting methods and properties onto TRowCol, we could fix that thing of forgetting to add or subtract one. TRowCol stores zero-based “driver coordinates” internally, but it can present itself as either driver or GUI coordinates:

RowCol := TRowCol.FromGui(ARow, ACol);
InsertRow(RowCol.DriverRow);
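
For the curious, the record looks very roughly like the sketch below. Only FromGui, DriverRow, and the zero-based-driver-coordinates-inside rule come from what I’ve described; the field names and the other members are assumptions:

type
  TRowCol = record
  private
    // Stored internally as zero-based driver coordinates.
    FDriverRow: Integer;
    FDriverCol: Integer;
  public
    class function FromDriver(ARow, ACol: Integer): TRowCol; static;
    class function FromGui(ARow, ACol: Integer): TRowCol; static;
    function GuiRow: Integer;
    function GuiCol: Integer;
    property DriverRow: Integer read FDriverRow;
    property DriverCol: Integer read FDriverCol;
  end;

class function TRowCol.FromDriver(ARow, ACol: Integer): TRowCol;
begin
  Result.FDriverRow := ARow;
  Result.FDriverCol := ACol;
end;

class function TRowCol.FromGui(ARow, ACol: Integer): TRowCol;
begin
  // The only place that knows the grid's header cells occupy index 0.
  Result.FDriverRow := ARow - 1;
  Result.FDriverCol := ACol - 1;
end;

function TRowCol.GuiRow: Integer;
begin
  Result := FDriverRow + 1;
end;

function TRowCol.GuiCol: Integer;
begin
  Result := FDriverCol + 1;
end;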

This became an even bigger win when it became clear that the grid control we were using couldn’t really cope with hidden rows and columns. It had support for hiding, but it was half-assed at best. So we gave up on its hiding logic, and wrote our own coordinate mapper to map between “real indexes” and “visible indexes”.

The great thing was, now that nearly everything was using TRowCol, there was a single point of change. Only one piece of code knew how to convert between driver and GUI coordinates, and that was TRowCol itself. So we started passing the mapper object into the TRowCol.FromGui, TRowCol.GuiRow, and TRowCol.GuiCol methods. It went in quite smoothly for such a fundamental change — and we found one or two as-yet-undiscovered bugs while we were doing it!

Since then, we’ve put in a few other records-with-methods for fundamental concepts in our code. For example, we now have TVersion, which unifies all the different ways we used to represent program and data versions, and can convert itself from and to any of the different formats we used to have — no more dozens of idiosyncratic conversion calls that we used to have to chain together in strange ways.

We do not yet have one to wrap up the concept of “a month and a year”, which we have at least four different ways of representing. But we’ll get there, I’m sure.

The idea is: if you have to worry about the units your data is expressed in, and making sure you convert it from one form to another when you need to, make the compiler help you. Naming your variables to show what units they’re in is a good idea, but it still requires visual inspection. So does TRowCol, but only at the endpoints, never in the middle.

And who knows? You may find more operations that belong on this new type. We certainly have.

Note: If you use records with methods, beware the dreaded (and sporadic) compiler Internal Errors. These mean (loosely translated): “The people who write Delphi do not actually use records with methods themselves, so they never see the compiler bugs.” Here’s a hard-won hint: break up long expressions using temporary variables.
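
For example (TRowCol, InsertRow, and DriverRow are from the snippet above; Grid is just illustrative), rather than chaining record methods inline:

// Record methods chained inline in one expression are the kind of thing
// that occasionally provokes an Internal Error:
InsertRow(TRowCol.FromGui(Grid.Row, Grid.Col).DriverRow);

// The same thing, broken up with a temporary, tends to keep the compiler happy:
RowCol := TRowCol.FromGui(Grid.Row, Grid.Col);
InsertRow(RowCol.DriverRow);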

dSpec for readable DUnit assertions

Thursday, April 19th, 2007

dSpec, from Jody Dawkins of delphixtreme.com, has the same goal as DUnitAssertions: readable tests. Here are some examples from her post titled “dSpec Update”:

Specify.That(Foo).Should.Equal(3);
Specify.That(Bar).Should.Be.OfType(TBlueWidget);
Specify.That(Baz).Should.Be.GreaterThan(50).And.Be.LessThan(60);

Thanks, Jody, for letting me know about dSpec. It looks pretty cool.

dSpec and DUnitAssertions take somewhat different approaches. dSpec uses what I would think of as more of a RhinoMocks-style syntax, whereas DUnitAssertions is more NUnitLite-like. But dSpec has one huge advantage over DUnitAssertions: it’s released. (I’m still waiting for paperwork from Legal. Plus I’m not done yet.)

I’ll probably keep working on DUnitAssertions, since I think there might be benefits to both approaches; perhaps someday we’ll see something that combines the best of both. But I may well start using dSpec. If you’re looking to write readable DUnit assertions today, dSpec is your ticket.

Thanks, Jody!

A Rolls for the Man with a Price on His Head

Wednesday, April 18th, 2007

Just had to pass this on. Via the rotating news links at the top of Gmail.

A Rolls for the Man with a Price on His Head and a Taste for Absolute Luxury

The Phantom Armoured, engineered to the highest current international protection rating, will stand up to an AK-47 attack from 10 feet away

Here’s the full article. They don’t mention how much it costs.

DUnitAssertions: Are compound constraints worth it?

Wednesday, April 18th, 2007

I’ve written enough of DUnitAssertions that you can do Expect.That(Foo, Tis.EqualTo(Bar)). You can also use and and or to combine multiple Tis constraints, so you could write an assertion like:

Expect.That(Foo,
  Tis.GreaterThan(20) and
  Tis.LessThan(45));

This kind of and and or stuff seemed like a great idea. You could write your own TisBetween method, implement it in terms of existing constraints, and start using it: Expect.That(Foo, TisBetween(20, 45)); And you would just magically get a meaningful failure message like:

Expected: > 20
and < 45
but was: 99
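
For the record, such a helper might have looked something like this; TValueConstraint is a made-up name for whatever type the constraints actually are, since none of this is released yet:

// Hypothetical helper built from the combining operators described above.
function TisBetween(const Lower, Upper: Integer): TValueConstraint;
begin
  Result := Tis.GreaterThan(Lower) and Tis.LessThan(Upper);
end;

// Usage:
//   Expect.That(Foo, TisBetween(20, 45));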

Well, it did sound cool at one point. But the more I work on it, the more problems I run into.

  • To start with, “between”, my prime example, is a bad example. “Between” is just a bad test. It’s sloppy, kind of like an expected-exception check that passes even if a descendant exception is thrown. You should know what the value is supposed to be, and that’s what your test should check for. Using something like “between” means you could introduce an off-by-one error in a botched refactoring, and your tests wouldn’t tell you about it. (I even hesitated before I added Tis.GreaterThan and friends, for this very reason. Why was I so gung-ho about and and or?)

  • For that matter, giving people the ability to put ands and ors and nots into their tests is just begging the less-unit-test-savvy among them to rewrite their program logic in their tests. That kind of duplication is the last thing the programming world needs. This alone isn’t an argument against the feature, but it’s a point to consider.

  • What should the failure message look like when you have a complicated expression with ands and ors and nots? How do you visually represent that in the failure message? You’d need to start adding parentheses to show precedence, and that doesn’t fit with the one-condition-per-line layout. Things would get messy in a hurry.

  • What happens when you get a tug-of-war over what goes into the “but was:” line? Tis.EqualTo compares the value, but Tis.SameInstanceAs compares the pointer, and Tis.InstanceOf compares the class. If you combine those with and or or, which one shows after “but was:”? Do there need to be multiple paragraphs in the output, each with its own “but was:” line? How does that interact with multiple ands and ors and nots and parentheses?

  • Last but not least, ands and ors will complicate NUnit-style string comparisons. If you haven’t seen this, when you use Assert.AreEqual on two strings, NUnit will find the first index where the strings differ, and will show you something like twenty characters of context on either side, with an arrow pointing to the first character that’s different. It’s really sweet, and I want it for DUnitAssertions… but how does it coexist with, say, the ability to use or to compare against multiple different strings? Which one controls the arrow?

I have support for Tis.Not, so you can do things like Tis.Not.Null, and you could use Tis.Not.GreaterThan in those cases where it would increase readability. But beyond that, I’m debating how valuable operator overloads are for constraints, or whether I should just rip them out.

So here’s my question to y’all: If you could combine constraints with and, or, and not, would you? What kinds of things might you use it for? (In other words, can you convince me this feature is worth keeping?)

