What does it mean when CodeGear “announces” Delphi 2009?

Just wondering. What exactly is CodeGear “announcing” today? And how is it different from what they’ve been doing for months now?

They’re not announcing what the product will be; they’ve already done that. Sure, they’re filling in a few details, but you can’t really announce something that everybody already knows about. (Mind you, I’m not complaining about the early blogging — I love transparency. But it doesn’t leave much for the marketing droids to, you know, announce.)

And they’re not announcing that the product is feature-complete, or ready to ship, or anything. It isn’t yet. They’re only taking pre-orders. No mention is made of when electronic downloads will begin… or of how much longer after that it’s going to take before they start shipping physical product… or of how much longer after that it’s going to take for customers to start actually getting their copies.

So what, pray tell, is all the fuss about?

Goal: Pay off the credit cards by year-end

Jennie and I have been in debt since before we were married — progressively more debt as time went on.

Back in 2000, we had our first wake-up call, when my primary income dried up for several months and we didn’t know how to make our payments. We signed up with Consumer Credit of Des Moines, and they negotiated lower rates and payments with all of our creditors and put us on a single-monthly-payment debt-snowball plan.

Then we went out and borrowed more money. Clearly, we didn’t get it yet.

Or rather, we didn’t have an emergency fund yet. So when emergencies came up, borrowing more money seemed like our only option. “Just this one time,” we’d tell ourselves, “and then everything will be back on track.” With yet another plan that left no room to breathe.

And then, three years ago, another wake-up call: I added it all up and found that our minimum payments were more than my take-home pay. Back to the drawing board. Called Consumer Credit and bumped our payment back down to then-current minimums. Rolled new debt into the plan. Cut deals with other creditors. And started making a budget.

There have been other bruises along the way, and a few miracles. And we’re finally seeing the light at the end of the tunnel. Consumer Credit called us a few months ago and said we’d have all the credit cards paid off by the end of next February.

(That’s not all the debt — just the credit cards. We have a few other debts, like the car loan and student loans and mortgage, that we’re just not even worrying about right now. Paying off the credit cards is going to be a huge milestone for us.)

Well, I just got a raise, and it’s going to kick in on my August 15th paycheck. And a few days ago, I was thinking about that, and wondering. By all rights, I should take that money and start putting it into a 401(k) — that 100% return on investment, from the employer match, is tough to beat. But… what if I took that money and started paying extra on our debts? That, plus a little more we could find somewhere, might be enough to accelerate things… we might be able to pay off the credit cards by year-end.

So I spent all weekend crunching numbers, and I found out that Consumer Credit was wrong. They just estimate our current balances, and don’t contact our creditors for up-to-date balances until the month they think something’s ready to be paid off. They don’t get the actual statements; so when an interest rate goes up, or their payment cycle doesn’t match up with the credit card’s billing cycle and we get late fees tacked on, they don’t know about it. The reality is that, at the current rate, we won’t be paid off until April.

That really let the wind out of my sails.

But Jennie and I talked about it yesterday. And she got out the budget. What if we took this money here, and this, and this, and put them toward debt instead? What about this? What if we did that? Put this off a little longer. Stretch this.

We wrangled for about twenty minutes, and something magical happened. We found the money.

Our payment to Consumer Credit is our second-largest expenditure every month, after the mortgage. We’ve got a plan to make it the biggest expenditure every month, by a fair margin. And we will be paid off by the end of the year.

And somehow, we did it without cutting groceries, or gas. We’ll still have some wiggle room every month. We’ve still got the emergency fund if we need it, and some money in savings for things like car repairs. The money was there. We just had to look for it.

Lately, our progress has seemed so slow. I was getting discouraged. I had no idea how far we’d come; how much freedom we’ve gained; how much money was there in the budget, just waiting to be found. It’s good to be reminded of these things. The money is there. The budget will provide.

And once the credit cards are gone for good?

Don’t know yet. Jennie’s got a few ideas, and I’ve got a few ideas. But the one thing we know for sure is, we are going to throw a party.

T minus eight credit cards, and counting.

WEBrick: Web-browser GUI for Ruby apps

I do a fair bit of Ruby hacking. Usually I’m either writing a one-off app, or automating some process I do all the time, and in both cases, I don’t need any fancier GUI than STDOUT (via SciTE’s output pane, of course). But occasionally, I’ll want a script I’ll use more than once — say, to sift through another app’s output, and slice and dice the results different ways. And for those sorts of things, sometimes I’d rather have a nice GUI.

Rather than learn any of the GUI libraries for Ruby, any of which would probably be a lot of work if I wanted to do anything fancy, I decided to try my luck with WEBrick, a lightweight Web server that ships with Ruby. I can still write my little one-off apps, but now they can show their output in HTML.

With WEBrick, you write a Ruby script, and when you run your script, it starts listening for HTTP connections. Stop the script (Ctrl+Break, or close the console window) and the server stops with it. No Apache to configure. It’s easy. I like easy.

The “it’s all one script” thing also means it’s trivial to load data and cache it in memory. I’m planning to use that to write a really fast grep through our source code, with a Web GUI. (I’ve long since lost count of how many times I’ve written a better grep. I’ve written grep more times than all other programs put together.)

Here’s what I think is a good starting point for WEBrick. It consists of two files: startup and servlet. The startup just starts the Web server. You put the actual logic in the servlet file. The separation is so that the servlet file can get automatically reloaded from disk every time you refresh the page in the browser (idea swiped from here).


# startup.rb
require 'webrick'
require_relative 'servlet'
include WEBrick

server = HTTPServer.new(:Port => 80, :BindAddress => 'localhost')
server.mount '/', Servlet
trap('INT') { server.shutdown }
server.start


# servlet.rb
require 'webrick'
include WEBrick

class Servlet < HTTPServlet::AbstractServlet
  def self.get_instance(config, *options)
    # Reload this file on every request, so code changes show up when
    # you refresh the browser
    load __FILE__
    new(config, *options)
  end

  def do_GET(req, resp)
    resp['Content-Type'] = "text/html"
    resp.body = "Hello, world!"
  end
end

This will run on port 80 (edit the :Port to change this, e.g. if you already have a Web server running), and it will only be accessible on the local machine (remove the :BindAddress => 'localhost' to make it accessible from the network).

Put those files in the same directory, run startup.rb, and browse to http://localhost/. Hello, world! Then just change the code in do_GET, save the source file, and reload the browser — and there’s your new page.

But it’s still the same process, still the same script. So you can save global state in a @@class_variable or a $global_variable or some such, and it’ll still be there the next time you reload the page — even if you’ve changed the source code in the meantime. I like static languages for some things, but try doing that in Delphi or C#.
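A minimal sketch of that trick (the hit counter is my own invention, not from any real app): state stashed in a global survives servlet reloads, because the process never restarts.

```ruby
# Module-level state survives `load __FILE__` reloads because it's all
# one long-running process.
$hits ||= 0   # ||= so re-loading this file doesn't reset the count

def handle_request
  $hits += 1
  "You are visitor number #{$hits}"
end
```

Drop something like that into the servlet file and the count keeps climbing across browser refreshes, even after you’ve edited and re-saved the code.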

Dark Knight

I need to write more.

Jennie and I went to see The Dark Knight last night. (No real spoilers here, just previews.)

First reaction: it was freaking long. It was two movies. At least. One of my co-workers commented that it really needed an intermission. So unless you’ve got a watch (I didn’t), don’t believe it when it starts to feel like they’re building up toward the ending. And don’t drink your entire large Coke during what feels like the first half. (I wasn’t too thirsty, so I made it through okay.)

It was also definitely dark. The magic trick with the pencil could have come straight out of The Crow. I’m a bit amazed that this movie snuck by with a PG-13, and I doubt I would willingly let anyone younger than about 15 see it. There was no blood, at least, but there were some scenes I won’t be forgetting any too soon.

The Joker was… beyond anything you’ve ever known the Joker to be. Every Joker before him pales. (The movie ones, mind you — I’ve never read the comics, so I can’t compare.) He was so far over the top he came back out the bottom. They turned all the knobs up to 11, both in the script and in the acting, and he was freaky. I’m going to be a little bit jumpy for probably another day or two. He was that good.

The Bat-Cycle wasn’t quite as cool as it looked in the still photo I saw in the paper, although that wheelie-180-on-the-building was pretty sweet, and the way he ended the semi chase… adjectives fail me, it was that cool. I’m not usually the biggest bang-bang-shoot-’em-up fan there ever was, but that semi thing took some names and kicked some ass.

And the movie had quite its share of memorable scenes in human terms. The blackmail threat had the whole theater laughing. And then there was Fox’s ultimatum… and the tension on the boats, and the thing you never saw coming… and Rachel’s rescue, about which I shall say no more… and Gordon after the shootout, and wondering how the hell they could have done it.

The story wasn’t as good as Batman Begins — I’m enough of a story geek to really appreciate how well the pieces fit together in BB, how artfully things were tied together, and this just didn’t have the same level of craftsmanship. But it was good just the same.

And the special effects were nothing short of phenomenal. I know, I know, people have said that sort of thing before. But just you go see it, and watch that building collapse, and tell me that isn’t seven kinds of amazing. Not enough dust kicked up, which of course was on purpose so you could actually see what was going on, but apart from that it was… wow. Wow.

And Two-Face. Oh. My. God. I’ve seen CGI that was incredibly hokey, that didn’t even try to fit in with the movie around it (remember the race through the Cave of Wonders in Aladdin?). But this was wrenchingly, blisteringly real. If I didn’t keep wondering how he kept his eye from drying out, I wouldn’t even be able to tell it wasn’t the real thing. Every muscle, every nuance, every twitch was dead-on. I know they had a mind-blowing budget, and 272 special- and visual-effects people on staff (not counting the puppeteer), plus the six other special-effects companies they hired… but even knowing that it took all that to do it, I’m still geeking out about it. It’s a bit scary, knowing that the technology exists to make visions so convincing, to make fantasies live, to lie so utterly.

We really must go to the movies more often.

Low-lock multithreading

I ran across a great article about how to do multithreading without (or with very few) locks.

If you’ve done concurrency, you already know about locks. You probably also know they’re expensive, and you’ve probably wondered how to squeeze out more performance by avoiding locks. This article tells you how to do it safely — and, more importantly, when to avoid it (which is most of the time).

Memory Models: Understand the Impact of Low-Lock Techniques in Multithreaded Apps

Warning: this is hardcore geek stuff. I think I understood more than half of it.

Here’s the executive summary: Low-lock multithreading is hard. If you don’t understand everything in the article up to and including a given technique, don’t use it. Processor caches and read and write reordering make it more complicated than you thought it was. (Don’t take my word for it — read the first half of the article, before he even starts outlining the first technique.)
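For contrast, here’s the boring-but-safe baseline the article steers most code toward: an ordinary lock, no cleverness. (A sketch in Ruby rather than the article’s C#.)

```ruby
# Shared counter guarded by a plain Mutex. Without the lock, the
# read-modify-write in `count += 1` can interleave across threads
# and lose updates.
count = 0
lock  = Mutex.new

threads = 4.times.map do
  Thread.new do
    25_000.times { lock.synchronize { count += 1 } }
  end
end
threads.each(&:join)
count  # => 100000
```

With the mutex, the total is exactly 100,000 every run; that determinism is what you’re gambling with when you start removing locks.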

He didn’t say it in the article, but I’ll add my two cents: Never optimize anything (including adding low-lock techniques) until you’ve run a profiler and proven that you know where the bottleneck is. Any optimization without a profile is premature optimization.

Always remember M. A. Jackson’s Two Rules of Optimization:

  • Rule 1: Don’t do it.
  • Rule 2 (for experts only): Don’t do it yet.

TechEd 2008 notes: Evolving Frameworks

This session was aimed at people who write frameworks: low-level code used by thousands of people. When you’re writing a low-level framework, you have to be very cautious about how you change APIs, lest you break code in the field. If nobody outside your department consumes your code, and you compile all your code every time you build — probably the most common case — then most of this stuff is only of academic interest to you. But it’s interesting nonetheless.

This is my last session-notes post about TechEd 2008. I’ll probably post more about the con later — I think it’d be interesting, for example, to contrast the philosophies different presenters had about unit-testing best practices — but it’ll probably be a few days or weeks before I get back to that; writing 22 blog posts that are up to my own editorial standards is a lot of work, and I need a break!

Once again, those of you reading via DelphiFeeds are only getting the posts about general programming topics. If you want to also see the ones about features not likely to be directly relevant to Delphi anytime soon (e.g., lambda expressions, expression trees, LINQ, .NET-style databinding, add-ins, F#, the Provider pattern), you can look at my entire list of TechEd 2008 posts.

Evolving Frameworks
Krzysztof Cwalina
Program Manager, .NET Framework team

Team has dual charter:

  • Basic APIs (used to be on BCL team, now on higher-level application model, cross-cutting features)
  • Architectural and design quality of the whole framework
  • Framework produced by many (over 1,000) people. Goal to make it look like it was designed by one person. Consistency guidelines.
  • More recently looking into evolving APIs and improving the evolution process.

Frameworks deteriorate over time

  • OO design community has already done much research into coping with changing requirements
  • It’s even worse with APIs
  • Still many forces require changes over time
    • Requirements change
    • Ecosystem changes: new tools, language changes
    • People change

No silver bullet. But there are some techniques to design APIs that will be easier to evolve, and some tricks that allow modifications that used to be breaking.

Slow down framework deterioration

  • With thoughtful architecture
  • With proper API design (micro-design guidelines)
  • With framework evolution idioms

Libraries, Abstractions, Primitives

  • Three different kinds of types in frameworks

Library types

  • Definition: types that are not passed between components. Instantiate, use, then maybe keep a reference or maybe let the GC collect it.
  • Examples: EventLog, Debug.
  • Easy to evolve: leave old in, add new one.
  • The cost to consumers of introducing duplication is nonzero. Shouldn’t be done lightly, but is doable.

Primitive types

  • Definition: types that are passed between components and have very restricted extensibility (i.e., no subtype can override any members).
  • Examples: Int32, String, Uri
  • Hard to evolve
  • Little need to evolve. Usually very simple. Not much policy went into designing them.


  • Definition: types that are passed between components and support extensibility (i.e., interfaces or classes with members that can be overridden)
  • Examples: Stream, IComponent
  • Lots of policy; contracts usually quite strict
  • Hard to evolve
  • Unfortunately, there’s quite a bit of pressure to evolve abstractions
  • Extremely difficult to design abstractions out of the blue
    • The most successful abstractions in the .NET Framework are those that have been around for many years
    • “What should a stream do?” is pretty well established.
    • Interface with too few members won’t be useful. Interface with too many members will be hard to implement.

Evolving libraries

  • Can write a new class and tell people to start using it. Problematic if there isn’t a good migration path.
  • Architecture
    • Dependency management
  • Design
  • Toolbox
    • Type forwarders — lets you move a type from one assembly to another without breaking binary compatibility
    • EditorBrowsableAttribute
    • ObsoleteAttribute
  • Some people say a library should be at least 10 times better before you should consider replacing the old one.

Dependency management

  • Mostly applicable to APIs with more than one feature area, esp. if they evolve at a different pace or are used for different scenarios.

Framework Layering

  • Within each layer, have “components” (groups of classes) that each evolve together
  • Manage dependencies between the components
  • Lower layers shouldn’t depend on higher layers

Basics of dependency management

  • API dependency: A depends on B if a type in B shows in the publicly accessible (public or protected) API surface of a type in A. Might be parameter type, base type, even an attribute.
  • Implementation dependency: type in A uses a type in B in its implementation.
  • Circular dependency (including indirectly)
  • Dependency going to a lower layer: OK
  • Dependency going to a higher layer: Not allowed
  • Dependency within a layer: discussed by architects to see if it makes sense

Design principles

  • Focus on concrete customer scenarios
    • Much easier to add to something simple
    • Does this minimal component meet your needs?
  • Keep technology areas in separate namespaces
    • Mainly applies to libraries
    • Single namespace should be self-contained set of APIs that evolve on the same time schedule and in the same way
  • Be careful with adopting higher level APIs (usually libraries) for lower layers
    • E.g., Design a high-level API, then realize you can make it general, so you try to move it to a lower layer.
    • This rarely works when it’s not thought through from the beginning.
    • Don’t do it just because you can.
  • Don’t assume that your library is timeless
    • XML DOM should not be in System.Xml namespace

Toolbox: Type forwarders

  • Lets you move a type to a different assembly without breaking already-compiled code
  • Put in assembly where the type used to be
  • Forces a compile-time dependency on the assembly the type has been moved to
    • Can only be used to move a type down?

Toolbox: ObsoleteAttribute

[Obsolete]
public void SomeMethod() {...}
  • Take the API out of developers’ minds. Present simplified view over time of “This is the framework”.
  • Caution: many people think Obsolete is non-breaking, but that’s not entirely true because of “Treat warnings as errors”.
    • “Yes,” you may say, “but that’s only when you recompile.” True, but some application models, like ASP.NET, recompile on the fly.

Toolbox: EditorBrowsableAttribute

  • Hides from Intellisense, but you can still use it without warnings.
  • Often this is good enough.

Evolving primitives

  • Minimize policy (keep them simple)
    • Int32 should be no more than 32 bits on the stack
  • Provide libraries to operate on primitives
    • Consider using extension methods to get usability

Extension methods and policy

// higher level assembly (not mscorlib)
namespace System.Net {
    public static class StringExtensions {
        public static Uri ToUri(this string s) {...}
    }
}
  • Policy-heavy implementation in a library that’s isolated from the primitive
  • High usability because it’s an extension method
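Ruby’s rough equivalent of that layering (my analogy, not from the talk) is defining the policy-heavy method in its own module and mixing it into the primitive, so it still reads like a built-in:

```ruby
require 'uri'

# The URI-parsing policy lives in its own module, isolated from the
# String primitive, but call sites still get method-call usability.
module StringUriExtension
  def to_uri
    URI.parse(self)
  end
end

class String
  include StringUriExtension
end

"http://example.com/foo".to_uri.host  # => "example.com"
```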

Evolving abstractions

  • HARD!
  • Plan to spend ~10x as long designing abstractions as you do designing policies or libraries
  • Right level of policy
  • Right set of APIs

Interfaces vs. abstract classes

  • Classes are better than interfaces from an evolution point of view
  • Can’t add members to interfaces, but can add them to classes
  • That’s why it’s Stream instead of IStream
  • Were later able to add timeouts to streams, and it was much easier to add than it would have been with an IStream.
  • Imagine that it had been IStream from the beginning, and later they’d decided to add timeouts.
    • Adding members to an existing framework interface is never allowed.
    • When adding timeout, would have had to make a new descendant interface ITimeoutEnabledStream.
    • Wouldn’t need CanTimeout.
    • Problem is, base types proliferate (e.g. Stream property on a StreamReader). So casts would proliferate as well. And your “is it the right type” is effectively your CanTimeout query.
    • Less usability, since new member doesn’t show up in Intellisense.

Summary


  • Primitives, abstractions, libraries
  • Dependency management
  • Controlling policy
  • API malleability
    • Classes over interfaces, type forwarders, etc.

Q&A


Q: Have there been times you did an abstract class and later wished it had been an interface?
A: Really not yet; he’s still waiting to hear from a team who’s done a class and later wishes they hadn’t. There are some situations where you do need interfaces (e.g. multiple inheritance). Sometimes it’s still a judgement call.

Q: Guidance on when to use extension methods?
A: Working on some guidelines for the next version of the Framework Design Guidelines book. There are some proposed guidelines at LINQ Framework design guidelines (scroll down to section 2, then look for the list of “Avoid” and “Consider” bullet points); if those stand the test of time, they’ll eventually become official guidelines.

Q: When would you split a namespace into a separate assembly?
A: When you design assemblies and namespaces, they should be two separate design decisions. Feature areas have a high correlation with namespaces. Assemblies are for packaging, servicing, deployment, performance. Make the decisions separately.

Q: Why not fix design flaws when moving from 1.1 to 2.0?
A: As current policy, they don’t remove APIs. (Not promising that it will never happen.) They think they can evolve the framework in a relatively healthy way. They’re even brainstorming ways to add more things like type mappers, e.g. moving static methods from one type to another (but no, it’s not in a schedule). Didn’t have some of these mechanisms when they were writing 2.0.

Q: How does the CLR team resolve conflicts when reviewing a design? Consensus? Vote?
A: Many processes at MS revolve around “orb”. One for compatibility, one for side-by-side, etc. Groups of four roles: owner, participants, reviewers, approver (escalation point). Try to concentrate on owner and participants, to reach a conclusion by consensus. When that fails, go to the reviewers, then the approver. Approver rarely has to make the decision; more likely to educate than override.

Q: Long overloaded parameter lists vs. parameter objects?
A: They’ve done overloads in that case. Ideally, each shorter one just loses one parameter from a longer one (be consistent about ordering, etc.) Best if the leading parameters are similar, for Intellisense usability reasons. They do use parameter objects in a few cases, but mostly in cases where you don’t want to, or cannot, have overloads; e.g., an event. Also don’t want an interface with lots of overloads.

TechEd 2008 notes: How LINQ Works

How LINQ Works: A Deep Dive into the Implementation of LINQ
Alex Turner
C# Compiler Program Manager

This is a 400-level (advanced) talk about the implementation of LINQ.

  • What’s the compiler doing behind the scenes? Layers it translates down to
  • Differences between the translation in the object world vs. remote store

Example of LINQ syntax: GetLondoners()

var query = from c in LoadCustomers()
            where c.City == "London"
            select c;
  • They didn’t want to bake any knowledge of how to do queries into the compiler; instead they use libraries, so you could even use your own implementation of Where() if you really wanted to

Where() as it would look with .NET 1.x delegates:

bool LondonFilter(Customer c)
{
    return c.City == "London";
}
var query = LoadCustomers().Where(LondonFilter);
  • You don’t really want to make a new method for each filter
  • Solved in .NET 2.0 with anonymous delegates, but they were too wordy to encourage use of functional libraries
  • Rewritten with C# 3.0 lambdas:
var query = LoadCustomers().Where(c => c.City == "London");

Proving what it’s actually compiled to:

  • Use Reflector
  • In Reflector Options, set Optimization to “.NET 1.0”, so it doesn’t try to re-create the LINQ syntax for us
    • Interestingly, it does still show extension-method syntax and anonymous-type instantiations. Have to turn optimizations off entirely to see those, but then you’ll go crazy trying to read the code.
  • Anonymous delegates make:
    • A cache field with a wacky name and a [CompilerGenerated] attribute
    • A method with a wacky name and a [CompilerGenerated] attribute
    • Generated names have characters that aren’t valid in C# identifiers, but that are valid for CLR. Guarantees its generated names don’t clash with anything we could possibly write.
  • Implementing Where: you don’t really want to build a whole state machine. Use iterators instead:
static class MyEnumerable
{
    public static IEnumerable<TSource> Where<TSource>(
        this IEnumerable<TSource> source, Func<TSource, bool> filter)
    {
        foreach (var item in source)
            if (filter(item))
                yield return item;
    }
}
  • I didn’t realize .NET 2 iterators were lazy-initialized. Cool.
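The same laziness is easy to see in Ruby (my analogy, not part of the talk) with Enumerator::Lazy:

```ruby
# Building the "query" runs nothing; work happens only when results
# are demanded, just like a C# iterator.
examined = 0
evens = (1..Float::INFINITY).lazy.select do |n|
  examined += 1
  n.even?
end
# examined is still 0 here; nothing has been enumerated yet.
evens.first(2)  # => [2, 4]
# Only the four elements 1, 2, 3, 4 were ever examined.
```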

Side note: You can set a breakpoint inside an anonymous delegate, or on a lambda expression, even if it’s formatted on the same line of source code as the outer call. Put the cursor inside the lambda and press F9; I don’t think you can click on the gutter to set a breakpoint on anything other than the beginning of the line.

Side note: When you step into the c.City == "London" in the LINQ where clause, the call stack shows it as “Main.Anonymous method”.

var query = from c in LoadCustomers()
            where c.City == "London"
            select new { c.ContactName, c.Phone };
  • Anonymous type:
    • Generated with another C#-impossible name, and it’s generic.
    • Immutable.
    • Default implementations for Equals, GetHashCode, ToString.

LINQ to SQL: We don’t want to do any of this anonymous-delegate generation. Instead, want to translate the intent of the query into T/SQL, so the set logic runs on the server.

Side note: Generated NorthwindDataContext has a Log property. Set it to Console.Out and you’ll get info about the query that was generated for us.

Func<Customer, bool> myDelegate = (c => c.City == "London");
Expression<Func<Customer, bool>> myExpr = (c => c.City == "London");
  • The first is just a delegate.
  • The second is a parse tree.
    • C# samples have an Expression Tree Visualizer that you can download.
  • This runs a different Where method. That’s because here we’ve got an IQueryable<T>, rather than just an IEnumerable<T>.
  • Where() takes an Expression<Func<TSource, bool>> predicate. So the compiler generates an expression tree.
  • Where() just returns source.Provider.CreateQuery(...new expression...), where the new expression is a method call to itself, with parameters that, when evaluated, become the parameters it was called with. (Is your head spinning yet?) It basically just builds the expression-tree version of the call to itself, which is later parsed by LINQ to SQL and turned into an SQL query.
  • LINQ to Objects: code that directly implements your intent
  • LINQ to SQL: data that represents your intent

The difference is all in the extension methods.
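A toy version of that split (mine, not the presenter’s): the same City == "London" intent, once as code you can only execute, and once as data that a provider can walk and translate.

```ruby
# "LINQ to Objects" style: a closure that implements the intent.
as_code = ->(c) { c[:city] == "London" }

# "LINQ to SQL" style: a tree that represents the intent.
as_data = [:eq, [:attr, :city], [:const, "London"]]

# A miniature "provider" translating the data form into SQL text.
def to_sql(expr)
  case expr[0]
  when :eq    then "#{to_sql(expr[1])} = #{to_sql(expr[2])}"
  when :attr  then expr[1].to_s
  when :const then "'#{expr[1]}'"
  end
end

as_code.call({ :city => "London" })  # => true
to_sql(as_data)                      # => "city = 'London'"
```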

Cedar Rapids’ Flood of 2008

Absolutely unreal.

I was in Cedar Rapids for the Flood of ’93. I remember vividly one night when I drove a friend home, and on the way, we drove past a neighborhood park. There was a river coming out of the park, and flowing gently across the road. We couldn’t even see where the curb was supposed to be.

I had been planning to drop my friend off and head back home, but I changed my mind. I called my parents when I got there and said I was going to be spending the night.

The flooding now makes that look like a rain puddle.

None of my family live near the river, thankfully. Mom and Dad just have a trickle of water in their basement, and Jon and Darcy have damp carpet, nothing more.

But the public library where I used to work had water up to its windows yesterday afternoon, and the water was still rising. They never thought the water would get anywhere near that high, and by the time it did, it would have been far too late to start moving books upstairs — you couldn’t even get near the building anymore. It breaks my heart to think of all the damage to those books, historical records, everything. Not to mention the library itself — the city keeps cutting the library budget, to the bone and beyond, and I don’t know how they’re going to get the funding to repair the damage. I pray they have flood insurance, but I wouldn’t count on it.

Flood stage is 13 feet, but the levees are built to handle 19 feet of water. The river hit 20 feet in the Flood of ’93. This time around, the river is expected to crest at 32 feet. No typo.

I spent over an hour watching news coverage online last night. KCRG TV-9 had been doing “wall-to-wall” news coverage most of the day — no programming, no commercials, no interruptions, just news. It doesn’t look like they’re newscasting this morning, and I wonder if they had to evacuate their news studio — they were inside the mandatory evacuation area, and last night they had gotten special permission to stay, because they’re providing a public service, but they were keeping a close eye on conditions and ready to leave if they had to.

Absolutely unreal. May’s Island isn’t there — just a City Hall sticking up out of the water. The police office and jail had to be evacuated. The downtown Dairy Queen is totally submerged. 8,000 people were evacuated from neighborhoods near the river, and firefighters (in boats) were rescuing the idiots who ignored the mandatory evacuation. Video footage of boats going under the downtown skywalks. The railroad trestle collapsing, despite the 20 railroad cars filled with rocks that were left on the trestle to try to weight it down. Downtown a lake. One of the emergency shelters full — sounds like they’ve got cots filling the hallways. People at the shelters having to leave their pets in the cars outside, because the shelters couldn’t accommodate any animals other than service animals. Over 14,000 people without power, and the word is they’ll probably be without power for a week. Power out as far out as Coe College, over a mile from the river. I think it was Coralville where power was out and they couldn’t even get to the power station to start repairing the damage. Part of I-80 closing, east of Iowa City — that’s a major transportation route, and it’ll hurt. Only one bridge in CR open, and that’s I-380, and traffic moving at a crawl because there’s only one through lane open each direction — other lanes reserved for emergency vehicles. People stopping their cars on I-380 to gawk and take pictures. (The news crew said, “Don’t. We can guarantee, we’ve got better cameras than you do.”) The city’s water supply down to 25% of capacity, because three of the four wells are underwater; people being asked to use drinking water only; people being asked to come out to fill sandbags to protect the remaining well (no longer needed — that effort is complete). The library, the Czech museum, the museum of art, the Science Station — all flooded. One of the two hospitals evacuated. Both hospitals without power, running on generators.

It’s hard to swallow. And it’s really hard, right now, to be so far from home.

TechEd 2008 notes: How not to write a unit test

How not to write a unit test
Roy Osherove
Blog: ISerializable.com

All the things no one ever told you about unit testing.

Will have two parts: presentation-like (what really works), and interactive (questions, prioritized).

Early questions:

  • Data access
  • Legacy code (what not to do)
  • Duplication between unit tests and functional tests
  • Testing non-.NET code, e.g. ASP.NET
  • Testing other languages, e.g. F#, IronRuby
  • Unit tests and refactoring
  • Testing UI
  • How do you mock the world?
  • How important are tools? Mocking tools, refactoring, etc. Can you write unit tests with just what VS provides?
  • Did you bring your guitar? — No. Wanted as much time for information as possible.
  • Where did you get your T-shirt? — Was being given away at another conference.

A unit test is a test of a small functional piece of code

  • If a method returns a boolean, you probably want at least two tests
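
For instance, a boolean-returning method naturally gets one test per outcome. A minimal NUnit-style sketch (the IsEven method and test names are my own, not from the talk):

```csharp
using NUnit.Framework;

public static class NumberUtils
{
    public static bool IsEven(int n) { return n % 2 == 0; }
}

[TestFixture]
public class NumberUtilsTests
{
    // One test per outcome of the boolean
    [Test]
    public void IsEven_EvenNumber_ReturnsTrue()
    {
        Assert.IsTrue(NumberUtils.IsEven(4));
    }

    [Test]
    public void IsEven_OddNumber_ReturnsFalse()
    {
        Assert.IsFalse(NumberUtils.IsEven(3));
    }
}
```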

Unit testing makes developers’ lives easier

  • Easier to find bugs.
    • That’s the common line. But not necessarily — e.g. if your test has bugs, or if you’re testing the wrong things
    • If you can’t trust your tests to find bugs (and especially if you don’t know you can’t trust them), then the opposite may be true — you may be confident you don’t have bugs when you do.
    • If you don’t trust them, then you won’t run them, they’ll get stale, and your investment in writing them was wasted.
  • Easier to maintain.
    • But 1,000 tests = 1,000 tests to maintain
    • Change a constructor on a class with 50 tests — if you didn’t remove enough duplication in the tests, it will take longer than you think to maintain the tests
    • We will look at ways to make tests more maintainable
  • Easier to understand
    • Unit tests are (micro-level) use cases for a class. If they’re understandable and readable, you can use them as behavior documentation.
    • Most devs give really bad names to tests. That’s not on purpose.
    • Tests need to be understandable for this to be true.
  • Easier to develop
    • When even one of the above is not true, this one isn’t true.

Make tests trustworthy

  • Or people won’t run them
  • Or people will still debug for confidence

Test the right thing

  • Some people who are starting with test-driven development will write something like:
public void Sum() {
    int result = calculator.Sum(1, 2);
    Assert.AreEqual(4, result, "bad sum");
}
  • Maybe not the best way to start with a failing test
  • People don’t understand why you want to make the test fail
  • Test needs to test that something in the real world is true: should reflect the required reality
  • Good test fails when it should, passes when it should. Should pass without changing it later. The only way to make the test pass should be changing production code.
  • If you do TDD, do test reviews.
    • Test review won’t show you the fail-first. But you can ask, “So can you show me the test failing?”
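
A version of that Sum test which reflects the required reality might look like this (a sketch; the Calculator class is assumed, and 3 is the value the requirement actually calls for):

```csharp
using NUnit.Framework;

public class Calculator
{
    public int Sum(int a, int b) { return a + b; }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Sum_TwoNumbers_ReturnsTheirTotal()
    {
        Calculator calculator = new Calculator();

        int result = calculator.Sum(1, 2);

        // The expected value comes from the requirement, not from fiddling
        // the assert; the only way to make this pass is correct production code.
        Assert.AreEqual(3, result);
    }
}
```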

Removing/Changing Tests

  • Don’t remove the test as soon as it starts passing
    • If it’s a requirement today, chances are it’ll still be a requirement tomorrow
    • Duplicate tests are OK to remove
    • Can refactor a test: better name, more maintainability
  • When can a test fail?
    • Production bug — right reason (don’t touch test)
    • Test bug (fix test, do something to production code to make the corrected test fail, watch it fail, fix production code and watch it pass)
      • Happens a lot with tests other people wrote (or with tests you don’t remember writing)
    • Semantics of using the class have changed (fix/refactor)
      • E.g., adding an Initialize method that you have to call before you use the class
      • Why did they make that change without refactoring the tests?
        • Make a shared method on the test class that instantiates and Initializes
    • Feature conflict
      • You wrote a new test that’s now passing, but the change made an old test fail
      • Go to the customer and say, “Which of these requirements do you want to keep?”
      • Remove whichever one is now obsolete

Assuring code coverage

  • Maybe unorthodox, but Roy doesn’t like to use code-coverage tools
    • 100% code coverage doesn’t mean anything. Finds the exceptions, but doesn’t prove the logic.
  • Better: change production code and see what happens
  • Make params into consts
  • Remove “if” checks — or make into consts (if (true)). Will a test fail? If not, you don’t have good coverage.
  • Do just enough of these kinds of tweaks to make sure the test is okay.
  • Test reviews are still valuable if you do pair programming, just maybe less often. Good to bring in someone else who didn’t write the code, with an objective eye.
  • Quick test review of yesterday’s code at end of stand-up meeting?
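
These tweaks amount to hand-rolled mutation testing. A sketch of the idea (DiscountCalculator is my own illustration, not from the talk):

```csharp
public class DiscountCalculator
{
    public decimal Apply(decimal price, bool isMember)
    {
        // Coverage check: temporarily change this to "if (true)".
        // If no test fails, the non-member path isn't really covered.
        if (isMember)
        {
            return price * 0.9m;
        }
        return price;
    }
}
```

The point is to make the change, watch the right test fail, and then revert: just enough tweaks to know the tests are actually guarding the logic.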

Avoid test logic

  • No ifs, switches or cases
    • Yes, there are always exceptions, but it should be very rare
    • Probably only in testing infrastructure
    • Most of the time, there are better ways to test it
    • Sometimes people write conditionals when they should be writing two tests
    • Don’t repeat the algorithm you’re testing in the test. That’s overspecifying the test.
  • Only create, configure, act and assert
  • No random numbers, no threads
  • Test logic == test bugs
  • Fail first also assures your test is correct
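
The conditional-versus-two-tests point in a sketch (the AgeValidator type is hypothetical): an "if" in a test usually means two tests crammed into one, so write them separately.

```csharp
using NUnit.Framework;

public class AgeValidator
{
    public bool IsAdult(int age) { return age >= 18; }
}

[TestFixture]
public class AgeValidatorTests
{
    // Instead of one test with "if (age >= 18) ... else ...",
    // each case gets its own straight-line test.
    [Test]
    public void IsAdult_EighteenOrOver_ReturnsTrue()
    {
        Assert.IsTrue(new AgeValidator().IsAdult(18));
    }

    [Test]
    public void IsAdult_UnderEighteen_ReturnsFalse()
    {
        Assert.IsFalse(new AgeValidator().IsAdult(17));
    }
}
```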

Make it easy to run

  • Integration vs. Unit tests
  • Configuration vs. ClickOnce
  • Laziness is key
  • Should be able to check out, run all the unit tests with one click, and have them pass.
  • Might need to do configuration for the integration tests, so separate them out.
  • Never check in with failing tests. If you do, you’re telling people it’s okay to have a failing test.
  • Don’t write a lot of tests to begin with, and have them all failing until you finish everything. If you do that, you can’t check in (see previous point). Write one test at a time, make it pass, check in, repeat.

Creating maintainable tests

  • Avoid testing private/protected members.
    • This makes your test less brittle. You’re more committed to public APIs than private APIs.
    • Testing only publics makes you think about the design and usability of a feature.
    • Publics are probably feature interactions, rather than helpers.
    • Testing privates is overspecification. You’re tying yourself to a specific implementation, so it’s brittle, and makes it hard to change the algorithm later.
    • Sometimes there’s no choice; be pragmatic.
  • Re-use test code (Create, Manipulate, Assert) — most powerful thing you can do to make tests more maintainable
  • Enforce test isolation
  • Avoid multiple asserts

Re-use test code

  • Most common types:
    • make_XX
      • MakeDefaultAnalyzer()
      • May have others: one already initialized, with specific parameters, etc.
    • init_XX
      • Once you’ve already created it, initialize it into a specific state
    • verify_XX
      • May invoke a method, then do an assert on the result. Pulling out common code.
  • Suggestion: by default, the word new should not appear in your test methods.
    • As soon as you have two or more tests that create the same object, you should refactor the new out into a make method.
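
A sketch of the make_XX idea (the Analyzer name comes from the talk's example; the rest is my own):

```csharp
using NUnit.Framework;

public class Analyzer
{
    public Analyzer(string rulesFile) { }
    public int Analyze(string input) { return input.Length; }
}

[TestFixture]
public class AnalyzerTests
{
    // The only place that calls "new Analyzer". If the constructor
    // changes, only this method needs to change, not fifty tests.
    private static Analyzer MakeDefaultAnalyzer()
    {
        return new Analyzer("default.rules");
    }

    [Test]
    public void Analyze_EmptyInput_ReturnsZero()
    {
        Analyzer analyzer = MakeDefaultAnalyzer();
        Assert.AreEqual(0, analyzer.Analyze(""));
    }
}
```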

Suggestion: don’t call the method directly from Assert.AreEqual(...). Introduce a temp variable instead. (This relates back to the 3A test pattern.)

Aside: Test structure

  • One possibility: Each project, e.g. Demo.Logan, has a matching test project, Demo.Logan.Tests. Benefit: they’re next to each other in Solution Explorer.
  • Test files: one file per tested class?
    • That’s a good way to do it. Aim for a convention like MyClassTests so it’s easy to find.
    • If you have multiple test classes for one tested class, make multiple classes.
    • Consider nested classes: making a MyClassTests, and putting nested classes, one per feature. Make the nested classes be the TestFixtures.
      • Be careful of readability, though.
      • Roy said his preference would be to keep one test class per source file, rather than using nested classes to put them all in one file.
      • Decide for yourself whether you’d prefer one class per file, or all tests for one class in one place.

Enforce test isolation

  • No dependency between tests!
  • If you run into an unintended dependency between tests, prepare for a long day or two to track it down
  • Don’t run a test from another test!
  • You should be able to run one test alone…
  • …or all of your tests together…
  • …in any order.
  • Otherwise, leads to nasty “what was that?” bugs
  • Almost like finding a multithreading problem
  • The technical solution (once you find the problem) is easy. That’s why God created SetUp and TearDown. Roll back any state that your test changed.
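
In NUnit, that rollback lives in [SetUp]/[TearDown]. A sketch, assuming some hypothetical shared AppConfig state:

```csharp
using NUnit.Framework;

public static class AppConfig
{
    public static string ServerName = "production";
}

[TestFixture]
public class ConfigDependentTests
{
    private string _originalServer;

    [SetUp]
    public void SetUp()
    {
        // Capture shared state so every test starts from a known baseline
        _originalServer = AppConfig.ServerName;
    }

    [TearDown]
    public void TearDown()
    {
        // Roll back whatever the test changed, so test order can't matter
        AppConfig.ServerName = _originalServer;
    }

    [Test]
    public void UsesTestServer()
    {
        AppConfig.ServerName = "test";
        Assert.AreEqual("test", AppConfig.ServerName);
    }
}
```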

Avoid multiple asserts

  • Like having multiple tests
  • But the first assert that fails kills the rest of the test. With separate tests, if one fails, the others still run; asserts aren’t built that way.
  • Exception: testing one big logical thing. Might have three or four asserts on the same object. That’s possible, and doesn’t necessarily hurt.
  • Consider replacing multiple asserts with comparing two objects (and also overriding ToString so you can see how they’re different when the test fails).
    • My experience: this doesn’t work well when the objects get really big and complicated. Could work well for small objects, though.
  • Hard to name
  • You won’t get the big picture (just some symptoms)
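
The object-comparison alternative looks like this (a sketch; it assumes Equals and ToString are overridden on the type so failures are both detected and readable):

```csharp
using NUnit.Framework;

public class Address
{
    public string City;
    public string Zip;

    public override bool Equals(object obj)
    {
        Address other = obj as Address;
        return other != null && other.City == City && other.Zip == Zip;
    }

    public override int GetHashCode() { return (City + Zip).GetHashCode(); }

    // Overriding ToString makes the failure message show the whole object
    public override string ToString() { return City + " " + Zip; }
}

[TestFixture]
public class AddressTests
{
    [Test]
    public void BuildAddress_ReturnsAllFields()
    {
        Address expected = new Address { City = "Cedar Rapids", Zip = "52401" };
        Address actual = new Address { City = "Cedar Rapids", Zip = "52401" }; // stand-in for the real call under test

        // One assert instead of one per field
        Assert.AreEqual(expected, actual);
    }
}
```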

Don’t over-specify

  • Interaction testing is risky
  • Stubs should be the rule; mocks the exception.
  • Mocks can make you over-specify.
  • “Should I be able to use a stub instead of a mock?”
    • A stub is something you never assert against.
  • There’s only one time when you have to use mocks: when A calls a void method on B, and you have no way to later observe what B was asked to do. Then you have to mock B and verify that it was called with the right parameter.
    • If the other class does return a value, then you can test what your class did with that result. You’re testing your class, after all, not that other object — that’s why you’re faking it.
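
A sketch of that one genuinely-necessary mock: A calls a void method on B, so the only observable effect is the call itself (hand-rolled fake; the names are mine):

```csharp
using NUnit.Framework;

public interface ILogger
{
    void Write(string message); // void: nothing to observe afterward
}

public class OrderProcessor
{
    private readonly ILogger _logger;
    public OrderProcessor(ILogger logger) { _logger = logger; }

    public void Process(string orderId)
    {
        _logger.Write("Processed " + orderId);
    }
}

// A mock records what was done to it, so the test can assert on the call
public class MockLogger : ILogger
{
    public string LastMessage;
    public void Write(string message) { LastMessage = message; }
}

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void Process_Order_LogsTheOrderId()
    {
        MockLogger mock = new MockLogger();
        new OrderProcessor(mock).Process("42");
        Assert.AreEqual("Processed 42", mock.LastMessage);
    }
}
```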


  • If you do another test that tests basically the same thing, but with different parameters, he suggests appending “2” to the end of the test name. But that’s assuming you already have a really good naming convention for the base test name! (Remember the serial killer.)
  • Bad: Assert.AreEqual(1003, calc.Parse("-1"));
  • Better:
int parseResult = Calc.Parse(NEGATIVE_ILLEGAL_NUMBER);
Assert.AreEqual(NEGATIVE_PARSE_RETURN_CODE, parseResult);
  • If you can send any kind of number, and the specific value you pass doesn’t matter, either use a constant, or use the simplest input that could possibly work (e.g. 1).

Separate Assert from Action

  • Previous example
  • Assert call is less cluttered

TechEd 2008 notes: Best Practices with the Microsoft Visual C# 3.0 Language Features

Still catching up on posting my notes from TechEd last week. I probably would’ve gotten this up last night if I hadn’t been in the basement most of the evening for tornado warnings.

Best Practices with the Microsoft Visual C# 3.0 Language Features
Mads Torgersen
Program Manager for the C# Language

He’s the guy who figures out what features go in the next version of the language, to keep us on our toes.

Goals of this talk

  • Show new features
  • Important do’s and don’ts
  • Introduce LINQ

Despite the name of the talk, more time will be given to C# 3 features than to best practices. Best practices are in there, but they’re not the star of the show. If you’re going to be annoyed by that, start being annoyed now, rather than waiting until the end.

C# 3 in a Nutshell

  • Imperative => Declarative
    • Before: modify state in little bits
    • Leads to a lot of detail in describing how you want things done
    • New: say what you want, rather than how you want it done
    • MS has freedom to give us performance and flexibility

  • How => What
  • Make queries first-class

(Incomplete) list of new features

  • Auto properties
  • Implicitly typed locals
  • Object and collection initializers
  • Extension methods
  • Lambda
  • Queries
  • Anonymous types
  • Expression types
  • …a couple not shown in this talk

Automatically Implemented Properties

  • Just sucking up to programmers’ laziness; nothing deep
class Customer {
    public string CustomerID { get; set; }
    public string ContactName { get; set; }
}
  • Simplify common scenario
  • You can see that they’re trivial
  • Limitations
    • No body -> no breakpoints
    • No field -> no default value
  • There can be serialization issues if you change an automatic property to a real property, since the autogenerated field has a magic name that’s stored in your serialized data

Lure of Brevity: Best practices for auto properties

  • Only use this for things that really are simple get/set properties
  • Hold on to your…
    • Get-only and set-only properties
    • Validation logic
  • Private accessors (get; private set;) are usually not the answer — too easy to forget you didn’t intend for them to be set capriciously, and add code a year from now that sets them in an unsafe way
  • Be careful what you make settable:
// Bad
class Customer {
    public string CustomerKey { get; set; }
    // Key really shouldn't be settable
}

Implicitly Typed Locals

  • var keyword, type inference
  • I won’t bother quoting his code snippet, you’ve seen it before
  • Intellisense can show you the actual type — hover over the var
  • Remove redundancy, repetition, clutter
  • Allow focus on code flow
  • Great for experimentation: you can change something’s return type and there’s a much better chance that everything will still compile (Roy would probably say there’s more essence and less ceremony)
  • “Surprisingly liberating experience”

Redundancy is not always bad: best practices for var

  • Explicit types on locals (i.e., not using var) will…
    • Improve readability of complex code, esp. if method name doesn’t make its return type clear
    • Allow typechecking on right-hand side (when you want that)
    • Can be more general than the right-hand side
  • Think: Who is the reader?
  • Find your own compromise between the two extremes

Side note: ObjectDumper class from samples (kind of like .inspect in Ruby)

Object and collection initializers

  • Traditionally very imperative. Start with empty collection, then create an empty Customer, then initialize it, then add it.
  • Lots of intermediate results lying around.
static IEnumerable<Customer> GetCustomers() {
    var custs = new List<Customer> {
        new Customer {
            CustomerID = "MADST",
            ContactName = "Mads Torgersen",
            City = "Redmond"
        }
    };
    return custs;
}
  • Can omit empty parens after new if you use an object initializer
  • Code-result isomorphism
    • Structure of code parallels structure of object you want.
  • Expression-oriented
    • Can be used in expression context
  • Atomic
    • No intermediate results
    • Create object and collection in one fell swoop. Don’t need temporary variables. Don’t expose any intermediate states at all.
  • Compositional
  • May not need as many constructor overloads

Constructors are still good: best practices for object and collection initializers

  • Constructors…
    • Show intent
    • Enforce initialization
    • Initialize get-only data
  • Initializers and constructors compose well
var c = new Customer("MADST") {
    ContactName = ...
};

Extension Methods

  • You’ve seen these demos too (well, maybe not GetLondoners() specifically)
  • Dilemma with very general types: you use them in a specific setting, and sometimes you want a special view on it and wish you could add a couple more methods to the original declaration, just for your use in that setting
  • One really interesting benefit: can add methods to a generic of only certain types, e.g. can have a method on IEnumerable&lt;Customer&gt; that isn’t there on the general IEnumerable&lt;int&gt;. I like this!
  • Declared like static methods, can call like instance methods
  • New functionality on existing types
  • Scoped by using clauses
  • Interfaces and constructed types
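
A sketch of the constructed-type benefit: this method appears on IEnumerable&lt;Customer&gt; but not on IEnumerable&lt;int&gt; (GetLondoners is the demo name mentioned above; the body is my guess at it):

```csharp
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public string City { get; set; }
}

public static class CustomerExtensions
{
    // Only sequences of Customer get this method; IEnumerable<int> doesn't
    public static IEnumerable<Customer> GetLondoners(this IEnumerable<Customer> customers)
    {
        return customers.Where(c => c.City == "London");
    }
}
```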

Cluttering your Namespace: best practices for extension methods

  • Consider making them optional (separate namespace), so people can use your library without necessarily needing your extension methods (extension methods for working with types from MyNamespace.Foo should be in their own namespace, not right in MyNamespace.Foo)
  • Don’t put them on all objects!
  • Make them behave like instance methods.
namespace System {
    public static class MyExtensions {
        // Don't do this
        public static bool IsNull(this object o) {
            return o == null;
        }
    }
}
  • That’s a worst practice. It violates all three of the above guidelines. Don’t do it just because it’s cool.

Lambda Expressions

  • Predicate<T> — function that takes T and returns bool
  • =>: Some call this the “fat arrow”
  • Terse anonymous functions
  • Parameter types inferred from context
  • Closures: capture local state (also true of anonymous methods)

Condensed Power: best practices for lambda expressions

  • Keep them small
    • That’s the point of making them terse
    • Yank them out if they get too big
  • Watch that capture (of local variables, and using them inside the lambda)
    • Can have unexpected results
    • Exposing private state
  • Watch the complexity
    • Functions of functions returning functions…
  • Think: Who executes this lambda, and when?
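
The capture warning deserves a concrete example. Each lambda below captures the loop variable itself, not its value at the time; a classic surprise:

```csharp
using System;
using System.Collections.Generic;

class CaptureDemo
{
    static void Main()
    {
        List<Action> actions = new List<Action>();

        for (int i = 0; i < 3; i++)
        {
            // Captures the single variable i, not a snapshot of its value
            actions.Add(() => Console.WriteLine(i));
        }
        foreach (Action a in actions) a(); // prints 3, 3, 3 (not 0, 1, 2)

        actions.Clear();
        for (int i = 0; i < 3; i++)
        {
            int copy = i; // fresh variable per iteration
            actions.Add(() => Console.WriteLine(copy));
        }
        foreach (Action a in actions) a(); // prints 0, 1, 2
    }
}
```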


Queries

  • Functional: doesn’t mutate the original collection; instead returns a new collection
  • using System.Linq; == “Linq to Objects”
  • Extension methods give you pipelining: customers.Where(...).Select(...)
  • Language integrated — use anywhere! (if you’re using C#)
  • Query expressions for common uses
  • Mix and match query and method syntax
  • Expect deferred execution (can do ToArray)
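
Deferred execution in a minimal sketch: the query runs when enumerated, not when declared.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        List<int> numbers = new List<int> { 1, 2, 3 };
        IEnumerable<int> evens = numbers.Where(n => n % 2 == 0); // nothing runs yet

        numbers.Add(4);

        // The query executes here, so it sees the element added above
        Console.WriteLine(evens.Count()); // 2 (matches 2 and 4)

        // ToArray forces execution now and snapshots the results
        int[] snapshot = evens.ToArray();
        Console.WriteLine(snapshot.Length); // 2
    }
}
```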

Beware monadic complexity hell: best practices for queries

  • Another powerful complexifier
  • Do you need to roll your own query provider?
  • Use query pattern for queries only!
    • Avoid abusing query syntax for other magic
    • Even if you know about monads! (Your users don’t)

Anonymous types

  • select new { Name = c.ContactName, c.City } — smart enough to call the second property City
  • Temporary local results
  • Shallow immutability and value equality
  • Does a nice job on the generated classes
    • Value-based equality
    • Good hashcodes

Keep it local: best practices for anonymous types

  • If you need a type, make one! Don’t use an anonymous type and work around problems. Only use where they don’t limit you.

Expression trees

  • Runtime object model of code
  • Created from lambda expressions
  • Language independent. LINQ to SQL doesn’t know about C#; it just knows about expression trees.
  • Compile back into delegates on demand. .Compile() method — even if you created it with factories instead of a lambda.
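
The round trip in a sketch: assigning a lambda to Expression&lt;...&gt; builds a data structure you can inspect, and Compile() turns it back into a callable delegate.

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // An object model of the code, not compiled code
        Expression<Func<int, int>> doubler = x => x * 2;
        Console.WriteLine(doubler.Body); // prints "(x * 2)"

        // Back into a delegate on demand
        Func<int, int> compiled = doubler.Compile();
        Console.WriteLine(compiled(21)); // 42
    }
}
```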

The Lure of Doing Magic: best practices for expression trees

  • You can interpret expression trees any way you like.
  • Don’t!
    • Stay close to expected semantics
    • Avoid special magic names, etc.

Final words

  • C# 3 and LINQ can change the way you code…
    • Declaratively: more of the what, less of the how
    • Eloquently
    • And with lots of queries. Don’t think of queries as something heavyweight for external data.
  • …but they don’t have to!