Agile and Extreme Programming: A Pragmatic Approach (part 2)

Random note: Wouldn’t it be fun to work on a project code-named “Sisyphus”?


Some agile methodologies:

  • Extreme programming
  • Scrum
  • Crystal methodologies (different ones for different team sizes)


  • ThoughtWorks has decided XP is the best thing they’ve tried. They always use it for flat-bid projects, and refuse projects that can’t be done with some agile process.
  • XP started in the mid-1990s (it predates the Agile Manifesto)
  • Kent Beck and Ward Cunningham were thinking about what made software simple to create, and what made it difficult
  • In March 1996, Kent started a project at DaimlerChrysler using new concepts, called it “Extreme Programming”

The 4 XP dimensions

  • Communication
  • Simplicity
  • Feedback
  • Courage

Can you create a less-scary XP by substituting out the stuff that scares the boss?

  • XP is carefully calibrated for feedback, and relies on feedback to work. (XP is actually a highly coupled process!)
  • Lots of backward arrows in the XP workflow diagrams
  • If you randomly remove some of those backward arrows, you lose feedback, and you harm everything downstream of that point
  • Anything you replace has to provide approximately the same level of feedback
  • You can’t randomly pick and choose – you’ll create something that’s worse than many other methodologies, and you’re sure to fail

The Planning Practices

  • User stories
    • Not requirements lists
    • Narratives that describe the ideal way in which the user plans to use the system for one piece of behavior
    • Trying to capture that rich face-to-face communication through a narrative
    • Used to create time estimates for release planning and acceptance testing
    • Example: “I can add a customer to the system.”
    • Usually about 3 sentences written by the customer, in the customer’s terminology, without techno-speak. You’re not allowed to use words like “database”.
    • Not nearly as detailed as traditional requirements. You discover things like “which database fields do I need?” as you start to write the code. (You’ve got full-time access to a business analyst, right?)
  • Aside: testing
    • Unit tests for developers
    • Acceptance testing to say whether it’s complete
    • There’s no such thing as “80% done” for a single story. A story is either done or it’s not.
  • Acceptance tests:
    • One story may have several acceptance tests (e.g. to cover validation requirements for that new customer, etc.)
    • Business analyst, or end-user, defines the acceptance test – not the programmer!
    • Worst case: acceptance tests in an Excel spreadsheet, and someone runs the tests manually
  • Can you do XP without index cards?
    • Sure
    • Must be very flexible
    • Must be granular
    • “CaliberRM is an outstanding choice for this”
    • Requirements should be as narrative as possible, not just a dry list of facts
    • Caliber is looking at features to make it more agile-friendly
    • Session on Thursday on using Caliber with agile
    • Neither user stories nor requirements should delve into technical details
  • Aside: wikis
    • ThoughtWorks has two systems for shared document management: Lotus Notes and Confluence
    • VeryQuickWiki (Java servlet)
    • Instiki (written in Ruby!) – Three steps: you download it, run it, and there is no third step
  • Release planning
    • Creates the schedule
    • Used to create iteration plans for each iteration
    • Decisions
      • Technical decisions made by technical people
      • Business decisions made by business people
    • Dev team estimates each user story in terms of ideal programming days/weeks (very rough)
    • Do not change estimates just because management is displeased
      • Can’t tell accounting that they have less time to do the taxes this year
      • But they try to do the same thing with developers
    • Project can be quantified by scope, resources, time, quality
      • Management gets to pick three; the developers pick the fourth
      • Management usually picks scope, resources, and time, and leaves quality to the developers – they understand the first three, but don’t know how to quantify quality.
      • Lowering quality may have impact later in the project
    • Quantifying quality
      • Code coverage statistics (AutomatedQA does this for Delphi/Win32)
      • No “I think it’s pretty high quality” or “it’s 80% done” – give actual statistics
      • If you want to let quality slide, I can’t be responsible for bugs – because if I don’t have tests for it, I don’t know whether it works or not.
      • Unit tests don’t guarantee the code does what the business analyst wants – that’s what acceptance tests are for
  • Release Planning Substitutions
    • Goal: consensus between devs, management, and business people
    • Don’t stop until everyone is happy (or at least equally unhappy)
    • Don’t leave out the business stakeholders
    • Don’t leave out the developers (you wouldn’t change accounting practices without accounting there, would you?)
  • Small Releases
    • Frequent small releases to customers (or customer proxies)
    • These are *releases*, not “90% done”. Done has to be binary (either done or not), not a percentage. That’s the only way you can gather meaningful statistics.
    • It’s releasable – and done – when all the acceptance tests pass
    • Do important stuff first. The longer you wait to add an important feature, the less time you’ll have to fix it.
    • Never slip an iteration date. Stuff can hang over (if it hasn’t passed its acceptance tests yet), but the date can’t move.
    • Hard to substitute. Can simulate by doing small releases within your department, but if you slip into “90% done” mode, you’re sunk.
    • Try hard to get User Acceptance Testing in.
    • If management says “Finding bugs is your job”, say “I’ve found all the technical bugs. I need you to find the business bugs.”
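To make the testing ideas above concrete – one story, several business-defined acceptance tests, and binary done-ness – here’s a minimal sketch in Python’s `unittest`, using the “I can add a customer to the system” story from earlier. The `CustomerRegistry` class, its methods, and the validation rules are all hypothetical, invented purely for illustration; in practice the rules would come from the business analyst, not the programmer.

```python
import unittest

# Hypothetical system under test, standing in for the real application.
class CustomerRegistry:
    def __init__(self):
        self._customers = {}

    def add_customer(self, name, email):
        # Validation rules are defined by the business side (illustrative here).
        if not name:
            raise ValueError("customer name is required")
        if "@" not in email:
            raise ValueError("email address looks invalid")
        self._customers[email] = name

    def has_customer(self, email):
        return email in self._customers

# Acceptance tests for the story "I can add a customer to the system."
# One story maps to several acceptance tests: the happy path plus each
# validation rule the business cares about.
class AddCustomerAcceptanceTest(unittest.TestCase):
    def setUp(self):
        self.registry = CustomerRegistry()

    def test_customer_can_be_added(self):
        self.registry.add_customer("Ada Lovelace", "ada@example.com")
        self.assertTrue(self.registry.has_customer("ada@example.com"))

    def test_name_is_required(self):
        with self.assertRaises(ValueError):
            self.registry.add_customer("", "ada@example.com")

    def test_email_must_look_valid(self):
        with self.assertRaises(ValueError):
            self.registry.add_customer("Ada Lovelace", "not-an-email")

if __name__ == "__main__":
    unittest.main(exit=False)
```

The story is “done” exactly when all of these pass – never “80% done” – and an automated suite like this is what turns pass/fail counts into the kind of statistic the talk asks for.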
