Joe White’s Blog

Life, .NET, and Cats


Archive for July, 2006

Agile 2006: Working Effectively with Legacy Code

Monday, July 31st, 2006

Working Effectively with Legacy Code
Michael Feathers

  • Traditional definition of “Legacy Code”: if you need to find your compiler on eBay
  • Feathers’ definition: Legacy code = code without tests
    • How you test around your code influences what you can do with that code
    • If you have no tests, you pay a price for that
    • Hard to modify, hard to refactor

Unit testing

  • Allows rapid change of existing code
  • Meaningful regression
  • Always been around, but we’ve heard a lot more about it lately because of Agile
  • Enables refactoring (safety net)

Testability

  • “I thought I was a good designer until I tried to take arbitrary classes I’d written and place them under test”
  • Easy to talk about cohesion and coupling, but unit testing forces you to deal with them
    • Cohesion = are things that should be together together?
    • Coupling = are things that should be able to vary independently separate?

How testable is your code?

  • Most real-world (and non-unit-tested) code has very tangled dependencies: “glued together”
  • We usually don’t know this until we try to test pieces in isolation

Kinds of Glue

  • Singletons aka Global Variables
    • If there can only be one of an object, you’d better hope that one is good for your tests too
    • Problem: it’s state outside the unit that the unit can touch, and that can affect the unit’s behavior, which makes it hard to test a single unit of code in isolation
  • Internal instantiation (see the sketch after this list)
    • When a class creates an instance of a hard-coded class, you’d better hope that class runs well in tests
    • What if that class modifies global state?
    • Opens a socket? (The Internet isn’t part of a unit.)
    • Watch out for setup and speed
  • Concrete Dependency
    • Depending on a concrete class instead of an interface (see Dependency Inversion Principle, Dependency Injection)
    • When a class uses a concrete class, you’d better hope that class lets you know what’s happening to it.
  • How to fix these kinds of things?
    • Design for tests
    • TDD pushes this
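
A minimal sketch of the first two kinds of glue (all names hypothetical, not from the session):

    public class OrderProcessor {
        public void process(Order order) {
            // Internal instantiation: a hard-coded concrete class
            // that (say) opens a socket when it charges the order.
            PaymentGateway gateway = new PaymentGateway();
            gateway.charge(order.total());

            // Singleton aka global variable: shared state outside the unit
            // that the unit can touch, and that can affect the unit.
            AuditLog.getInstance().record(order);
        }
    }

A test of process() can’t avoid the socket or the shared log; that’s the glue.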

A Set of Unit Testing Rules

  • A test is not a unit test if:
    • It talks to the database
    • It communicates across the network
    • It touches the file system
    • It can’t run at the same time as any of your other unit tests
    • You have to do special things to your environment (like editing configuration files) to run it
    • These things can all happen in integration tests, etc., but keep them out of unit tests as a general rule

Breaking Dependencies

  • Seams: Places where you can plug in different functionality
    • Object seam / virtual function dispatch: use virtual methods
    • Macro expansion seam: can use #define to redefine a function
    • Template instantiation seam
    • Link seam: can link to a different library
    • Some can be varied on a test-by-test basis, some can’t be varied without recompiling

Uninstantiable class #1

  • Constructor depends on a constructor that depends on a constructor that depends on a constructor that depends on a constructor that depends on a…
  • Foo constructor takes a Config, whose constructor takes a Reader, whose constructor takes a FileProvider…

Uninstantiable class #2

  • Constructors that do something nasty that shouldn’t happen in a test
  • Example: Foo constructor creates a database connection
    • Code smell: This violates Single-Responsibility Principle

Uncallable method

  • Parameters you can’t instantiate or things you shouldn’t do in a test

Breaking dependencies

  • Manifest dependencies
    • Dependency which is present at the interface of a class. You can see some parameter of some method being problematic.
    • Extract interface / extract implementer (make the class actually implement the interface)
    • Adapt parameter (add a wrapper)
    • Choice between these two may depend on whether you have the source, and can change the class to implement a new interface
  • Hidden dependencies
    • Things hidden inside the class that you don’t see at the interface
    • Introduce static setter
    • Parameterize constructor / method
    • Extract and override getter
    • Introduce instance delegator

Working with Care

  • “Oh crap, I could be breaking that!”
  • Mitigating practices
    • Lean on the compiler (aka Ask the compiler)
      • Introduce deliberate errors, so you can find places you need to make changes
      • Example: Change one B reference to IB, then compile, and fix everything that breaks: change types, add methods to the interface, etc.
      • Problematic when you have a descendant class that shadows the method from the base class (and even worse in Java, where overrides are problems too)
    • Preserve signatures
      • Copy/paste parameter lists verbatim. Minimal editing.
      • Prevents errors
      • Makes it easy to change calls to X into calls to a delegating method
      • Don’t do too much at once
    • Pair programming

Parameterize Constructor

  • Inject dependencies through a constructor
  • Consider using a default constructor to avoid affecting non-test clients
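
A minimal sketch, using hypothetical MailChecker/MailReceiver names:

    public class MailChecker {
        private final MailReceiver receiver;
        private final int checkPeriodSeconds;

        // Original signature preserved, so non-test clients don't change.
        public MailChecker(int checkPeriodSeconds) {
            this(new MailReceiver(), checkPeriodSeconds);
        }

        // New, parameterized constructor: tests inject a fake MailReceiver here.
        public MailChecker(MailReceiver receiver, int checkPeriodSeconds) {
            this.receiver = receiver;
            this.checkPeriodSeconds = checkPeriodSeconds;
        }
    }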

Encapsulation

  • We’re breaking encapsulation. This object that used to read its own configuration file now takes the config as a parameter. Callers now need to care about how this class is implemented.
  • We’re going to break encapsulation even more as we break dependencies
  • Why is encapsulation important?
    • Encapsulation helps you understand your code
    • So do tests, so it’s not that bad a tradeoff

Parameterize Method

  • Same as Parameterize Constructor: if a method has a hidden dependency on a class because it instantiates it, make a new method with an argument
  • Call it from the other method
  • Steps:
    • Create a new method with the internally created object as an argument
    • Copy all the contents into the new one, deleting the creation code
    • Delete the contents of the old one, replacing with a call to the new one
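
The three steps above, sketched with a hypothetical StatementPrinter dependency:

    // Before: hidden dependency, because the method instantiates the class itself.
    void printStatement() {
        StatementPrinter printer = new StatementPrinter();
        printer.print(transactions);
    }

    // After: the new method takes the internally created object as an argument,
    // and the old method delegates to it with the old default.
    void printStatement() {
        printStatement(new StatementPrinter());
    }

    void printStatement(StatementPrinter printer) {
        printer.print(transactions);    // tests can now pass a fake printer
    }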

Extract and Override Getter

  • If a class creates an object in its constructor and doesn’t use it, you can extract a getter and override it in a testing subclass
  • Make the getter lazy-initialize the field and return it
  • Change everyone to use the getter instead of the field
  • Override the getter in the testing subclass, and do something different
  • Parameterize Constructor is usually preferable
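
Sketched with hypothetical Account/TransactionLog names:

    public class Account {
        private TransactionLog log;             // no longer created in the constructor

        protected TransactionLog getLog() {     // lazy-initializes and returns the field;
            if (log == null) {                  // everyone uses this instead of the field
                log = new TransactionLog();
            }
            return log;
        }
    }

    // The testing subclass overrides the getter and does something different:
    class AccountForTest extends Account {
        @Override
        protected TransactionLog getLog() {
            return new FakeTransactionLog();    // hypothetical fake; no real log is touched
        }
    }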

Extract and Override Call

  • When we have a bad dependency in a method and it is represented as a call
  • Can extract the call to a method and override it in a testing subclass
  • E.g., replace Banking.RecordTransaction(…) with delegating method RecordTransaction(…), with same name and same arguments (Preserve Signatures)
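
Sketching the Banking example from the notes (the surrounding types are assumptions):

    // Before: the bad dependency is a direct call inside the method.
    public void postEntries(List<Entry> entries) {
        Banking.recordTransaction(entries);     // talks to a real back end
    }

    // After: extract the call into a delegating method with the same name and
    // the same arguments (Preserve Signatures); a testing subclass overrides it.
    public void postEntries(List<Entry> entries) {
        recordTransaction(entries);
    }

    protected void recordTransaction(List<Entry> entries) {
        Banking.recordTransaction(entries);
    }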

Expose Static Method

  • If the class is hard to instantiate, you may be able to extract some of the logic into a static method
  • Is making the method static a bad thing?
    • No. The method will still be accessible on instances. Depending on the language, clients of the class might not even know the difference.

Introduce Instance Delegator

  • Static methods are hard to fake because you don’t have a polymorphic call seam (unless you’re using class methods in Delphi)
  • Add an instance method that calls the static method
  • Have clients call the instance method instead
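
A sketch (hypothetical Banking class):

    public class Banking {
        // The static method stays, so existing callers don't break...
        public static void recordTransaction(Entry entry) { /* real work */ }

        // ...but the new instance method gives tests a polymorphic seam:
        // clients hold a Banking reference, and a testing subclass overrides record().
        public void record(Entry entry) {
            recordTransaction(entry);
        }
    }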

Development Speed

  • Mars Spirit Rover
  • Signal takes 7 minutes to get to Mars, 7 minutes to get back
  • In languages like C++, build times can be like this
  • But if you break dependencies so you can unit-test each class in isolation, you can build those unit tests quickly and remove the whole problem

Supersede Instance Variable

  • Supply a setter, instead of passing to the constructor
  • Good way to get past the construction issue
  • Seems horrible from an encapsulation point of view, but becoming more popular among the greenhorns who use Dependency Injection
  • Supply defaults
  • “SupersedeFoo” naming convention, to suggest that it’s not an ordinary set-me-anytime setter
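
A sketch of the convention (hypothetical names):

    public class MessageProcessor {
        // Default supplied, so ordinary construction still works.
        private Dispatcher dispatcher = new ProductionDispatcher();

        // The "supersede" prefix flags this as test plumbing,
        // not an ordinary set-me-anytime setter.
        public void supersedeDispatcher(Dispatcher replacement) {
            this.dispatcher = replacement;
        }
    }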

Proactively Making Design Testable

  • TDD

“The Deep Synergy”

  • Whatever you do to make your code testable invariably makes it better
    • Forces decoupled construction
    • Forces fine-grained factoring (which leads to better cohesion)
    • Global state is painful
    • The list goes on
    • A test is a microcosm

Avoid “iceberg classes”

  • The number/weight of private methods outweighs the public methods. Most of the class’s mass is underwater.
  • Usually (always?) has single-responsibility violations
  • You can /always/ extract a class.
  • Again, encapsulation is good, but it is not an unqualified good.

Rules for designing an API

  • Leave your users an “out”
    • Provide them with interfaces and stubs if you can
  • Make sure you provide sufficient seams
  • Be very conscious of your use of:
    • Non-virtuals
    • Static methods (esp. creational methods)
    • Mutable static data
    • Classes that users must inherit
  • Don’t just write tests for your code; also write tests for code that uses your code

Agile 2006: Refactoring Databases: Evolutionary Database Design

Monday, July 31st, 2006

Refactoring Databases: Evolutionary Database Design
Scott W. Ambler, IBM
Pramod Sadalage, ThoughtWorks

I was interested to note that our shop already has most of the infrastructure for this. But it had never occurred to me to think of it in terms of ordinary refactoring, i.e., rename = add a new thing that delegates to the old, then migrate stuff, and finally remove the old. Makes it a lot less scary (but does require things like triggers to keep the two fields in sync).

Okay, enough retrospective introduction. On to the actual notes:

This stuff works in practice. It is not theory.

Lot of stuff throughout the session about politics between devs and DBAs. Glad we don’t have to deal with that…

If the devs are delivering working software every iteration, so must everyone else, including the data people. Goal is to give the data community the techniques, and eventually the tools, to do this.

What you’ll see

  • Proven techniques for
    • Evolutionary database development
    • DBAs to be effective members of agile teams
    • Improving quality of data assets
    • Organizations to improve their approach to data

Can you rename a column in your production database and safely deploy the change in a single day?

  • Most people have no expectations of their data people being able to accomplish the trivial
  • The trivial should be trivial

The Rename Column Refactoring

  • Renaming the column would break everything. So don’t do that.
  • Instead, refactor it:
    • Define a transition period. (Specify an end date? esp. if internal app)
    • Add the second column, copy the data, add triggers, etc. (sketched after this list). At this point, from the point of view of people who care, the column is already renamed.
    • After the transition period ends, remove the old column and the scaffolding.
    • If people whine about the transition period being too short, e.g. we can’t possibly update our mission-critical app in two years, then how mission-critical is it, really?
    • Can’t put your business at risk, but sometimes you’ve got to motivate people to change
  • What happens if there are apps you don’t know about?
    • You’ll take a hit on this. You’ve gotta take that hit.
    • Someone in the organization has to make the conscious decision to stop screwing around.
    • If people are doing data stuff, you as an IT professional should be supporting them.
  • Change will be easier if you have encapsulation via e.g. stored procs. But what if you want to rename the stored proc?
  • Performance hit?
    • Yes, but you’re already taking a complexity hit, because your apps already have code to clean up the data. By cleaning up the schema, you can get rid of that code.
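
A sketch of the transition-period scaffolding (vendor-specific: the trigger syntax below is MySQL-flavored, and every table/column name is hypothetical):

    import java.sql.Connection;
    import java.sql.Statement;

    public class RenameColumnMigration {
        // Start of the transition period: add the new column, backfill it,
        // and keep old-style writes flowing into it via a trigger.
        // (A full migration also needs an INSERT trigger and sync in both directions.)
        public static void beginTransition(Connection conn) throws Exception {
            try (Statement s = conn.createStatement()) {
                s.execute("ALTER TABLE Customer ADD COLUMN FirstName VARCHAR(40)");
                s.execute("UPDATE Customer SET FirstName = FName");
                s.execute("CREATE TRIGGER SyncFirstName BEFORE UPDATE ON Customer "
                        + "FOR EACH ROW SET NEW.FirstName = NEW.FName");
            }
        }

        // End of the transition period: remove the old column and the scaffolding.
        public static void endTransition(Connection conn) throws Exception {
            try (Statement s = conn.createStatement()) {
                s.execute("DROP TRIGGER SyncFirstName");
                s.execute("ALTER TABLE Customer DROP COLUMN FName");
            }
        }
    }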

The Traditional Database

  • Continuous changes not allowed
    • Tools are not that good
  • Production changes are rare
  • Migration is a huge project, if it’s attempted at all
  • Non-existent change management
  • Non-existent testing
  • Very few companies are doing database testing
    • RDBMSes have been out for what, 30 years, and we still don’t have basic stuff like testing tools?
    • There are a few, like SQLUnit, but they don’t have wide acceptance
    • If this up-front stuff doesn’t work, and let’s face it, we’ve given it 20-30 years and devs still go around the DBAs, then let’s admit it doesn’t work.

The Agile Database

  • Slowly and safely changing database
  • Functionality added in increments
  • Change management
  • Facilitates automated testing
  • Knowledge by developers of the functionality
    • DBAs pair with developers
  • Acknowledged importance of team interaction
  • DBA = Role != Person

Philosophies of Agile Data Method (www.agiledata.org)

  • Data is important, but it’s not the center of the universe.
  • Enterprise issues are important. Our apps have to fit with those of other teams. Follow standards.
  • Enterprise groups need to be flexible. It is the kiss of death to try to have a consistent repeatable process.
    • Repeatable processes have nothing to do with software development success. You want repeatable results.
  • Unique situation. Individuals are unique, therefore teams are unique. Problems are unique.
  • Work together
  • Sweet spot

Everyone should get their own copy of the database. www.agiledata.org/essays/sandboxes.html

First time deployment

  • Should be handled almost exactly the same as deploying code
  • Database master script to create production instance
  • Branch if necessary

Change scripts for later changes, collected by day/week/iteration/etc.

Example

  • New person joins the team. Make it easy.
    • Checks out from source control
    • Use make/rake/ant/etc.
      • “dbcreate” target to create a new, empty database
      • “dbinit” target to create the tables and such
  • The code is under source control; why not the schema?
  • Should be run every time you test your application

Agile Model Driven Development (AMDD)

  • BDUF only works in academic fantasy-land
  • Cycle 0: Initial modeling sessions (days, /maybe/ a week)
    • Initial requirements modeling
    • Initial architectural modeling
    • Shorten the feedback cycle. Get working software sooner.
  • Cycle 1..n:
    • Model storming (minutes)
    • Implementation (ideally test driven) (hours)
  • Nobody should be doing big models up front

Database refactoring

Why DB refactoring is hard

  • Coupling
    • Your app
    • Other apps you know about
    • Other apps you don’t know about
    • Persistence frameworks
    • Other databases
    • Test code
    • File imports
    • File exports

“Is the design of the code the best design possible to allow me to add this new feature?”

Example: change customer number to an alphanumeric customer ID

  • Add CustomerID varchar field
    • But you’ve got to be able to convert back and forth, to keep the old field in sync
    • So you don’t allow alphanumerics during the transition period. The new field could hold them, but the old field can’t.
  • After you are finally able to delete the old field, then you can start using alphanumeric IDs

Would TDD be a good idea for these refactorings?

  • Theoretically, yes.
  • Practically, tools may not be there yet.
  • If I want to thoroughly test the DB…
    • Data going in, data going out: DBUnit works for this
    • What about constraints? What about triggers? Can’t assume that this stuff works.

When you’re trying to remove a field, you can try removing it, then re-creating your entire schema. The “ant dbinit” will fail if you have e.g. a view that still references that field. (Obviously you still need to find code dependencies.)

Two things to do to your SQL scripts when you make a change:

  • Fix the script that creates all schema from scratch
  • Add a script that upgrades from build 635 to 636. Should be hand-coded, and should be done when you make the change (so you still remember what you did).
  • Alternative: don’t do #1. Your “schema from scratch” is your latest released version, and your automated tests create that and apply all the upgrade scripts to it. (Volunteered from the audience.)
  • Change scripts need to be in source control.
  • Ant target “dbupgrade”. Give it a from version and a to version, it gets that version of the “make fresh” script, and runs all the “upgrade” scripts against it.
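
A minimal sketch of that “dbupgrade” logic in Java (the scripts/upgrade-<n>.sql layout and naming are assumptions, not from the session):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.Statement;

    public class DbUpgrade {
        // Applies each hand-coded change script in order, e.g. scripts/upgrade-636.sql
        // takes the schema from build 635 to build 636.
        public static void upgrade(Connection conn, int fromVersion, int toVersion) throws Exception {
            for (int version = fromVersion + 1; version <= toVersion; version++) {
                String script = Files.readString(Path.of("scripts/upgrade-" + version + ".sql"));
                try (Statement statement = conn.createStatement()) {
                    for (String sql : script.split(";")) {
                        if (!sql.isBlank()) {
                            statement.execute(sql);  // naive split; assumes no semicolons in literals
                        }
                    }
                }
            }
        }
    }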

Ruby on Rails has really good database migration support. You define a migration, and you can go forward and backward. Database-agnostic.

Merge Columns

  • Add the new column to be the merged data
  • Remove the old columns

Database things are trivial for the most part, so let’s make them trivial.

If you’re not putting it into source control, why are you creating it? If you’re not testing it, why are you creating it?

Other refactorings

  • Remove View
  • Introduce Surrogate Key
  • Replace One-To-Many with Associative Table

Refactoring is continuous, deployment is controlled (the way they do it, anyway)

Allow developers to experiment

  • Try multiple designs
  • Change the model in their local schema
  • DBA should be involved in design of functionality
  • DBA should help devs design tables, indexes, tune queries for performance
  • DBA and devs should pair all the time

Database threats

  • Putting bad data in / getting bad data out
  • Database schema (what if someone drops a constraint?)
  • Vendor should have a regression testing tool to sell you if not give you

Encapsulate the database

  • Coupling is enemy #1
  • Encapsulation strategies:
    • Brute force (embedded SQL)
    • Data access objects
    • Persistence frameworks
    • Services

Infrastructure

  • Start converting data when replacing legacy app; day 1 of the project
  • If feasible, use a lightweight database, e.g., MySQL, or an in-memory database
  • Write a data generator, when needed

Thinking forward

  • New releases can be acceptance tested against current production data
  • Migration is a snap, so why not deploy weekly?

Resources

Agile 2006: Crushing Fear under the Iron Heel of Action

Monday, July 31st, 2006

Crushing Fear under the Iron Heel of Action
Ron Jeffries & Chet Hendrickson
Wednesday afternoon

“As far as we know, a workshop on fear has never been done by us before.”

Fear and what we can do about it. In most cases you can do something about it, if only run away.

Brainstorm some topics (mostly work-focused, e.g., we’re not looking for things like “tigers”) that cause us to sense fear, or to act in ways we normally wouldn’t

  • Deadlines, because they’re too close and I won’t get done
  • Big requirements
  • Legacy code
  • Crappy code
  • Layoffs
  • Politics
  • Failure
  • Unsupportive customer / manager
  • Quality
  • Change
  • Writing code
  • Losing control
  • Too much responsibility
  • Abusive people
  • System crashes
  • Ambiguity or something like that
  • Unknown
  • Vague requirements
  • New technologies
  • Old technologies
  • Technology
  • Integration
  • Offshoring
  • Success
  • Security holes
  • Remote teams
  • Performance
  • Scalability
  • Usability
  • Sabotage
  • Pair Programming / working with other people
  • Communication
  • Inadequacy
  • Budget / cost overruns
  • Project cancellation
  • Rejection by peers

Rejection by peers / Pair programming / Inadequacy

  • New team
  • Senior people working with junior people
    • Fear of exposure: I get paid more, but I’m not as good as this junior guy
    • Fear of being obsolete
  • How am I really doing?
  • Unrealistic expectation of perfection

Group example:

  • “Dunce cap”
  • You’re stuck
  • Feel like you shouldn’t be stuck, because any fool in the room but you could do this
  • Don’t feel like standing up and saying “I need some help over here”

What you can do about it, part 1: Jedi mind tricks

  • Psyching yourself up
  • One fix: Remember a case where a teacher asked for help
  • Another (partial) fix: Make a rule that if someone asks for help, they must get it
  • Only way you learn is by screwing things up
  • Recognize that others stumble too. They might have the answer to yours, you might have the answer to theirs later.
  • Remember that you know something, just maybe not this.
  • Swap someone in from another pair, and if he doesn’t know, make *him* stand up and ask for help
  • When you’re stuck, you might as well be doing anything else
  • Write an article on your Web site and get people to tell you how to fix it
  • Get used to being an idiot
  • What’s the worst that could possibly happen? (Be careful if you have a really good imagination)
  • Open workspace
  • Jedi mind trick on ourselves: “This is not the problem you are worried about.”

What you can do about it, part 2: Action

  • Promiscuous Pairing and Beginner’s Mind
    • Relates to asking for help
    • Maybe could say whenever someone stands up and says “Tired”, you switch pairs
  • Ask one particular person instead of the whole room (private question)
  • Practice. Ask for help even when you don’t need it
  • Conversation. Talk about the system, even if you think you understand it. Someone may surprise you.
  • Kent Beck: Whenever a conversation goes on for more than ten minutes (or as soon after that as you notice you’re stuck), you must settle the issue by doing an experiment.
    • Two people involved go off and pair, and try both solutions.
    • Usually, whichever one you start on seems to be OK.
    • Often you’re going to the same place via a different route.
    • Or, often your concerns about the other guy’s solution turn out not to be a big deal.
  • Divide & conquer.
    • Maybe start by working on the part you do know…
    • maybe start by working on the part you don’t understand (the part you know you don’t know).
  • Wheels on chairs
  • Interrupt yourself / have someone else interrupt you
    • When a pair is silent, they’re probably stuck.
    • Go talk to them
    • Toss a Nerf ball at them (twisting the “throw a Nerf ball when someone is being too loud” rule)
    • Pull off a member of the stuck pair
  • Turn it into a competition to see who can ask the dumbest question of the day
    • “Hi, idiot here.”

There followed an exercise where we broke into groups. Each group picked one or more fears, and brainstormed what we could do about them.

What’s remarkable, as Ron said during his sum-up of the session (more eloquently than I’m doing here), is that we presumably all picked fears we were personally interested in, and in every group, in every case, we came up with things we could actually do about those fears.

Agile 2006: Coaching Software Development Teams

Thursday, July 27th, 2006

Coaching Software Development Teams
Michael Feathers
Wednesday morning, second half

Not a Grand Unified Theory of Coaching. Just observations.

Coaching and Change

  • Coach
    • Team advocate: thinks about what’s good for the team
    • Coaches aid change and provoke change
    • Coaching has ethical responsibilities

Coaching

  • All coaching relies on some model of human behavior
    • It’s hard to talk about human behavior frankly without offending people
    • Have to live with the fact that there are “Things You Can’t Say” — Paul Graham
  • Pitfalls
    • Objectifying people
    • Labeling — disrespectful, barrier to communication

The Dilemma of Work

  • Asks self: Would I like to work in this team?
  • The necessary evil? The necessary good?
    • If you didn’t need the money, would you work as you do now?
    • Why? Why not? What about your coworkers?
      • Some people become passionate about the work they have to do
      • Others just see it as a job
      • Can’t assume that everyone will be passionate about what they do
    • Who are we at work?
      • What matters to this person? What motivates them? Can I get them tied into the passion of doing this stuff?

Organizational and Personal Values

  • In any social setting, there can be conflicts between our personal values and those of our surrounding organization
    • Company can assume that you’re available to the company all the time
    • These conflicts determine the character of organizations and how well they function
    • They are hard to talk about
    • Can be a core issue with morale

Introducing “Pat”

  • Emergent personality
  • From a teacher: “Every class has its own personality”
  • Backronym: Personal Anthropomorphization of the Team
  • When we’re together, we’re something different than what we are separately
  • Super-Organism theory:
    • “A group is its own worst enemy” — Clay Shirky
    • The Lucifer Principle — Howard Bloom
  • Is Pat happy today? Is Pat sad? Apprehensive?
    • Individual people might be happy, but the group might be depressed
  • “Pat” does not embody organizational goals; “Pat” is an amalgam of the team
  • People outside your department do view your department as a single entity

Organizational Stance

  • Who does “Pat” work for?
    • Core values (inferred from action)
    • Typical behaviors
    • Reasonableness of Expectations
      • E.g., overtime, blogging and smoking

Personal Stance

  • Who we are (individual members of the team):
    • Our personal lifecycle, our personal experience (how old? have a family? what kind of commitments outside work?)
      • Can make educated guesses, but really need to talk to people
    • Our history in the organization
    • Our level of commitment
      • No judgment here
      • In many organizations, default assumption is that everybody is totally committed to the organization
      • But if people are doing what they need to do and aren’t getting in the way, they can be effective even if they aren’t totally committed

Anatomy of Learning

  • Learning involves a Tension/Release Cycle
    • Needing to know something, not knowing what to do: tension
    • Learning how to do it: release
    • As a coach, need to surf on that wave
    • There’s usually a source of tension you can work with
  • The thing we work for, we remember
    • Self-investment
    • If the team wants to work to get something done, let them
    • If necessary, show them how to fix it. They won’t be as invested in the result, but you do still get that release.
    • Pay attention to the mood of the room. Find a tense moment to help introduce something, and help them learn.
  • The job of a coach is to find teachable moments, and help team members release the tension productively
    • Places where people are receptive
  • Chaos aids learning
    • Can be more effective to tell someone to eat right when they’re in the emergency room after a heart attack

Conflict Identification

  • One of our jobs as coaches is to identify the conflicts / problems
  • Think about the team, think about its health, and identify problems that may not yet be recognized
    • When you identify, you can
      • Address and fix it, or
      • Lead people to find and fix it themselves
      • To know which, ask “Pat”
  • Page on Ward’s Wiki, has “Genius” in the title. Points toward this: you may have a solution, but it’s better if you can lead others to find it themselves.

Go Sideways

  • Recognizing when people are stuck, and offer alternatives
  • Keep some distance, but watch progress
  • Problems are enticing and captivating, but every moment you spend captivated is wasted if you’re going down the wrong path
    • When problems don’t yield to pressure, help people switch gears, to try a similar but smaller problem
    • Often this is enough to make the original problem yield
  • Help people step back; sometimes some distance will solve the problem

Go Home

  • Well, not really, but…
  • Pay attention to progress on problems.
  • Cultivate a sense of when people or problems are overloaded.
  • “Guys, you can’t have fifteen people working on this.”
  • Sometimes you have too many people working on too few things. Help redirect them to other things.

“Antennae Up”

  • When you’re a coach, you have to develop a sensitivity to what’s actually happening.
    • Who’s working with whom, how they’re interacting
    • What work is being avoided
    • Just be aware of these things
    • Can be draining to do this as well as work on software

Pair Coaching

  • Have someone else who knows the team available to bounce ideas off of
    • You might not have the right interpretation
    • In consulting, this works very well
    • Internal coaches can use peers on other teams, or trusted members of the team

Ask the Room

  • When the team adopts a new rule, ask them to call a huddle when tempted to break it, to see if there are alternatives
    • Leverages the whole team and builds a sense of how the team works

Make It Physical

  • A key coaching technique is to take the abstract and make it tangible
    • Information Radiators
    • Design: CRC cards, etc.

Active Listening

  • Single most powerful thing you can do as a coach
  • It is hard
  • Listening with minimal judgment is harder
    • Keep it centered on work
    • Balancing judgment (“Pat” vs. Person)
    • Pitfalls
  • When you listen and it isn’t recognized, you identify resistance
  • Listening is deep respect.

Advance / Retreat

  • Work with someone initially on some task, but selectively withdraw support
  • When you know how to do something, it’s easy to just do it. Resist temptation, especially when you’re trying to teach practices.
  • A way of gauging engagement and aiding initiative
  • By letting them take the lead, you’re helping them develop initiative in other situations

Tending “Pat”

  • Imagining what Pat is like right now. Visualization. Is s/he tired? Scared, relaxed? What’s the feel of the room?
  • What is Pat afraid of?
    • Losing job?
    • Extra work?
    • Is Pat nervous?
  • May miss things if you’re looking at individuals

Personal Encouragement / Discouragement

  • Most coaching work is one-on-one
  • You can’t address “Pat” directly
  • Coach has to be able to address things no one wants to address
    • Active Listening and Respect are your tools
      • Know the person
      • Know the feeling
      • Feel it first
      • Address

Name It

  • Often the first step in solving a problem is naming it
  • May not even have to say what needs to be done in response
  • If it’s an “elephant in the living room”, this is doubly true
  • Name problems and coach others to name them
  • This teaches the team reflection
  • Don’t always have to have a solution right away

“The Flounce”

  • Identifying the “Elephant in the Living Room”
  • Obeys “tension / release”
    • Pointed questions, soliciting comments… ending with silence. (tension) Then stark honest assessment of the problem, usually with emotional gravitas. (release)
  • There’s some drama to doing this

Team Surgery

  • The most effective way to change “Pat” is surgery
    • Most companies loathe the idea of moving people from one team to another
  • Team surgery is hard: politics, fiefdoms
    • Internal sub-teaming
  • The surgery that changes “Pat” the most is “Add Person”
    • Removing people doesn’t force as many differences in relation
  • Unfortunately, results are unpredictable
    • Try moving someone to a different task for a while, see what happens to the team

Push in the Water

  • Coach has to be able to ask people to go beyond their limits
    • Part of “Tending Pat”

Self-Disintermediation

  • If coach knows more than the rest of the team, team can rely on coach for answers. Pull yourself out of the loop.
  • “I think Sara knows about that, check with her.”
  • Can work well with a tension/release cycle
    • “If only we’d spoken to Sara first”
  • Be aware of your desire to be in all loops
  • When the work gets done without drama and you didn’t do any coaching, that’s success
  • Most important when you’re dealing with a bunch of individuals, as opposed to a team

Cheerleading

  • Doesn’t have to be very overt
  • Part of being a coach is identifying what has gone well
  • There are successes on all teams in all situations. Tie back to goals.
  • Before cheerleading, decide whether it’s appropriate. Think about how it’s going to be received. Don’t come across as a Pollyanna.

Cultivate Respect

  • Especially in a dysfunctional team
  • People on teams will objectify each other (build a picture of another person based on a few experiences)
  • They will attempt to develop intimacy by complaining to you about others on the team
  • Your reaction is *important*.
    • Make it clear (without pushing them away) that you can’t look at people that way
    • Have to see them as a person too. Where are they coming from?

Ethical Questions in Coaching

  • When is “Pat” asking for too much?
  • When is an organization asking for too much for a team?
    • Fall back on honesty
    • Worst thing a coach can do is accept an organizational goal they don’t believe in, and go back to the team as if they believe it
    • Say you think it’s problematic, but we still have to do the best we can
  • Manipulation
    • Not every intervention has to be completely above board
    • Advance-Retreat.
    • Don’t always need full disclosure about why you’re doing a particular thing, when it’s in the best interest of the team.

Team Health

  • Intimacy and Unguarded Moments
    • Do team members know each other’s kids’ names? Do they talk about what they’re doing on the weekend?
    • When people know a bit more, things are a bit more at ease at work
  • No Emotional Steady-State
    • In a healthy team, the emotional state will change
    • If the emotional state is stuck, something’s wrong
  • Goal Achievement Record. How often do they meet their commitments?
  • Have people personalized their space?
  • Do people solve problems, or wait for someone to come save them?
  • The Role of Rules
    • Zero tolerance for zero tolerance

Dealing with Resistance

  • Sometimes people will act counter to the team’s interest
  • Advance/Retreat
  • Ignore. Some things will sort themselves out. Takes a lot of judgment.

Dealing with Personality Conflicts

  • Levels of relatedness:
    1. could hang out and talk about things outside work
    2. can work with for a couple of hours a day
    3. No, get me away from this guy/gal.
  • What is the relationship like from the other point of view?
  • Does everyone else have the same problem with that person?
  • Remedies:
    • Team Surgery
    • Align on communication boundaries
    • Let them go
  • Hiring is the most important decision an organization ever makes

Politics and Cliques

  • “Us versus Them” is natural, get used to it.
  • Generally shouldn’t happen inside the team.
  • Doesn’t have to be pathological.
    • Can be galvanizing
    • If the surrounding organization is dysfunctional, might not be all bad
    • Can be dissolved at times (team surgery)

Agile 2006: Agile Estimation Techniques

Thursday, July 27th, 2006

Agile Estimation Techniques
Owen Rogers
Wednesday morning, first half

Distilling experiences of estimation on agile projects. Specifically estimating stories.

  • Mike Cohn’s session was “fabulous” yesterday.
  • Mike has a book, “Agile Estimating and Planning”

Disclaimers:

  • This session contains statistical jargon, charts with curvy lines, and gross generalizations.
  • Worked for me. YMMV.
  • If this stuff contradicts what you’ve heard from other people at ThoughtWorks, don’t hold it against them.

Accuracy: How close to target
Precision: How big the target is
Decrease precision = increase accuracy; the two vary inversely.
So, one way to increase accuracy is to go less granular.

Anatomy of an estimate

  • What you know
  • What you think you know, but don’t
    • E.g., Integration, customer’s intentions
  • What you know you don’t know.
    • E.g., sick time, new technology, changes in requirements.
  • What you don’t know you don’t know
  • Estimate what you know, derive what you don’t. The first 3 are what you know.
  • Pick an “average” story, call it a 3.
  • Everyone holds up some number of fingers. 2 = slightly easier than average. 5 = there’s sufficient uncertainty that I really can’t estimate; we need to split the story.
  • Relative, not necessarily linear. If you split a 5 into smaller stories, they won’t necessarily be a 3 and a 2.
  • Interdependencies? Up to team to decide. If there are interdependencies, make sure they’re communicated to the customer.
    • Possibility: estimate as if they’re not interdependent.
    • Possibility: Contingent estimates. If this, then that.
  • Estimate size. This includes both time and complexity.
  • We’re talking about how much work, not how long it would take a particular person to do it.

Handout: Suggested way to do a planning game.

  • Primary goal: produce estimates for a set of work.
  • Secondary goal: understand what that work entails.
  • Factor out things that don’t contribute to those goals, e.g. technical discussions.
  • Customer introduces a story.
  • Estimators ask business-related questions
  • When ready to estimate, hold out fist on top of other flat palm. (Don’t bias others with your estimate)
  • If everyone picks same number, the estimate stands. Move on to next story.
  • If there’s disagreement, have a 2-minute time-boxed discussion.
  • If we still don’t agree, pessimist wins.
  • If a story can’t be estimated in 10 minutes, drop it from the iteration, and add a spike card focused on splitting the story.
  • MeSoLonely: on-line dating service
    • As a suitor, I want to register for a new account (username, password, and email address). Estimate: 3.
    • As a suitor, I want to change physical details (height, weight, hair color, build, tastes). Our estimate: 3.
    • As a suitor, I want to search for a match by physical details. Our estimate: 4.
    • As a pharmaceutical agent, I want to spam users with ads. Our estimate: 1?
    • As a suitor, I want to upload my photo
    • As a suitor, I want the system to send an e-mail message to my match

Easy to get people comfortable producing estimates, even if they don’t know much about the system. With only five possibilities, you can’t be too far off. You also build coordination skills.

Can’t necessarily compare estimates made far in advance with those made just before you do the work. Different amounts of uncertainty within the project.

What are story points?

  • Uncertainty -> probability distribution. Actually a lognormal distribution (like a normal distribution, but with a long tail).
    • Mike’s book has an explanation for why it’s lognormal.
    • Paper by Todd Little: confirmed that it’s lognormal. Don’t know the paper’s title, but it was in last month’s IEEE Software.
  • Mode (highest point) and mean are different (see the note after this list).
  • Story points inherently encompass this distribution. Each point value is its own probability distribution; the larger ones have a wider variance, with the mode and mean farther apart.
  • 4 + 4 != 8 — Beware of the scaling fallacy. A 4 isn’t necessarily 30% larger than a 3. That’s why we limit the upper bound to 5 — so the estimates don’t get too large.
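
For reference, the standard lognormal facts behind the mode/mean point above (textbook results, not from the session): if $\ln X \sim N(\mu, \sigma^2)$, then $\operatorname{mode}(X) = e^{\mu - \sigma^2}$ and $\operatorname{mean}(X) = e^{\mu + \sigma^2/2}$, so the mean always sits to the right of the mode, and the gap widens as the uncertainty $\sigma$ grows.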

Could drop precision, and say that all stories are the same size. If stories are all about the same order of magnitude, this actually works fairly well.

  • The most important estimate is the 5. The others don’t matter as much.
  • But, wouldn’t recommend doing this right off the bat.
  • When splitting, do it collaboratively with the customer. That way, they’ll learn how to do it.

Minimizing what you think you know, but don’t

  • System = overlap between Business Domain and Technical Domain.
  • The more we understand about both, the less likely we are to have misunderstandings
  • Become a generalizing specialist. Don’t just be a DBA.
  • Leverage the Wisdom of Crowds. Let the testers estimate. Let the customer estimate.
    • When the customer estimates, that communicates what they expect the size to be. Can lead to interesting discussions. If the customer thinks it’s a 4 and the developers think it’s a 2, the customer goes away happy. If it’s the reverse, maybe the developers don’t understand the whole story.

Who estimates? The more people with different perspectives you can involve, the better your information will be. So by all means, include QA, documentation, support.

Histograms

  • Mostly 3s (i.e., mostly a normal distribution of estimates): Stories fairly well-understood. Lower risk.
  • Want to be able to assume the mean, because the mean is what provides you with safety.

Create a histogram of your release plan. Split stories until you end up with an approximately normal distribution, so you get more safety. Can do this over the course of the release.

  • Consistency is more important than accuracy.
  • Another argument for splitting: More stories = more consistent velocity.

Agile 2006: xUnit Test Patterns and Smells

Thursday, July 27th, 2006

xUnit Test Patterns and Smells
Gerard Meszaros and Greg Cook
Tuesday afternoon

Book: “XUnit Test Patterns” — currently in second-draft review. Hopefully coming out this fall.

xunitpatterns.com

This session is going to be hands-on with a fair number of exercises.

Terminology

  • Test vs SUT vs DOC
    • Test, which verifies the
    • System Under Test, which may use
    • Depended-on Component(s)
  • Unit vs Component vs Customer Testing
    • Unit testing: single class
    • Component: aggregate of classes
    • Customer: entire application
  • Black Box vs White Box Testing
    • Black box: know what it should do. xUnit tests are usually black-box tests of very small boxes.
    • White box: know how it is built inside

What does it take to be successful?

  • Programming experience
  • + xUnit experience
  • + Testing experience (What test should I write?)
  • != Robust Automated Tests

It’s crucial to make tests simple, and easy to write

Expect to have at least as much test code as production code!

  • Can we afford to have that much test code?
  • Can we afford not to?
  • Challenge: How to prevent doubling the cost of software maintenance?

Testware must be easier to maintain than the production code. The effort to maintain should be less than the effort saved by having tests.

Goals of automated tests:

  • Before code is written
    • Tests as Specification
  • After code is written
    • Documentation
    • Safety net
    • Defect localization (minimize debugging)
  • Minimizing cost of running tests
    • Fully automated
    • Repeatable
    • Robust: should work today, tomorrow, next month, should work in leap years, etc. Not brittle. Shouldn’t have to revisit tests unless the code it tests has changed.

What’s a “Test Smell”?

  • Set of symptoms of an underlying problem in test code
  • Smells must pass the “sniff test”
    • Should be obvious that it’s there (not necessarily obvious why — trust your gut)
    • Should “grab you by the nose”
    • Symptom that may lead you to the root cause
  • Common kinds of smells:
    • Code smells — visible problems in code
    • Behavior smells — test behaves badly
    • Project smells — project-level, visible to Project Manager
  • Code Smells can cause Behavior Smells can cause Project Smells

Patterns: Recurring solutions to recurring problems

  • Criterion: Must have been invented by three independent sources
  • Patterns exist whether or not they’ve been written up in “pattern form”

Examples of test smells:

  • Hard to understand
  • Coding errors that result in missed bugs or erratic tests
  • Difficult or impossible to write
    • No test API
    • Cannot control initial state
    • Cannot observe final state
  • Sniff test: Problem is visible (in your face)
  • Conditional test logic (if statements in tests)
  • Hard to code
  • Obscure
  • Duplication
  • Obtuse Assertion, e.g. AssertTrue(False) instead of Fail().
  • Hard-wired test data. Can lead to fragile tests.
  • Conditional test logic. Ridiculous example: if (condition) { … } else Assert.Fail(); Why not just do Assert(condition)?
  • Most unit tests are single-digit lines of code.
  • Smell: Obscure Test
    • Verbose
    • Eager (several tests in one method)
    • General fixture (setting up stuff you don’t need for every test)
    • Obtuse assertion
    • Hard-coded test data
    • Indirect testing when there’s a simpler way
    • Mystery Guest
  • Conditional Test Logic
  • Test Code Duplication

Patterns used so far:

  • Expected Objects: compare entire objects, rather than each individual property
  • Guard Assertions: if you’ve got test code that won’t work under certain cases, assert those cases (see the sketch after this list)
  • Custom Asserts
    • Improve readability
    • Simplify troubleshooting
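
A sketch of a Guard Assertion replacing conditional test logic (JUnit 4; the Itinerary/Flight types are hypothetical):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ItineraryTest {
        @Test
        public void addedFlightIsRetrievable() {
            Itinerary itinerary = new Itinerary();
            Flight flight = new Flight("OA123");
            itinerary.add(flight);

            assertEquals(1, itinerary.size());       // guard assertion: fails loudly where an
            assertEquals(flight, itinerary.get(0));  // 'if (itinerary.size() == 1)' would skip silently
        }
    }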

Get the conditional code out of the test method where you can’t test it

Finally blocks and delete/free/etc. inside a test method: is our test really about testing the destructors? (CppUnit, at one point, checked for leaks, so this might matter.) Housekeeping code doesn’t add value to the test: we don’t know whether it works, it creates maintenance work, and it couples the test to the implementation details of the objects it creates. Housekeeping code is a test smell.

Naive solution: Move housekeeping code to teardown. Does mean you have to move everything to fields. Don’t do this just out of habit.

Automated Fixture Teardown: AddTestObject() and DeleteAllTestObjects(), which frees everything (or deletes test data from the database, or whatever) even if there are exceptions.

Transaction Rollback Teardown: Another way to do cleanup. Start a database transaction in SetUp, roll it back in TearDown. But make sure the system under test doesn’t commit. (And only do this if you’re already using a database. Don’t use database in your test just to use this pattern!)
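
A minimal JUnit 4 sketch of Transaction Rollback Teardown (TestDatabase is a hypothetical helper):

    import java.sql.Connection;
    import org.junit.After;
    import org.junit.Before;

    public class CustomerPersistenceTest {
        private Connection connection;

        @Before
        public void setUp() throws Exception {
            connection = TestDatabase.open();   // hypothetical: opens the test database
            connection.setAutoCommit(false);    // begin the transaction
        }

        @After
        public void tearDown() throws Exception {
            connection.rollback();              // everything the test wrote disappears
            connection.close();
        }
    }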

  • Complex Undo Logic
    • Complex fixture teardown code
    • More likely to leave test environment corrupted, leading to Erratic Tests
  • Patterns used: Inline Teardown, Implicit Teardown (hand-coded), Automated Teardown, Transaction Rollback Teardown

Hard-coded test data / Obscure test. Creating objects that are only there because you have to pass them to other objects. Can also cause unrepeatable tests, especially if you’re inserting a customer into the database to run your test.

If the values don’t matter to the test, you can just generate them. Call GetUniqueString(), etc. This tells the person reading the test that it doesn’t matter.

But if it’s irrelevant, why even have it there? Don’t create an Address and pass values to it, just make a CreateAnonymousAddress(), and then a CreateAnonymousCustomer(), etc.

If it’s not important to the test, it’s important that it not be in the test.
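
A sketch of generated values plus anonymous creation methods, as helpers on the test case class (Customer/Address are hypothetical):

    private static int counter = 0;

    // The name tells the reader: this value doesn't matter to the test.
    static String getUniqueString() {
        return "anon-" + (++counter);
    }

    static Address createAnonymousAddress() {
        return new Address(getUniqueString(), getUniqueString());
    }

    static Customer createAnonymousCustomer() {
        return new Customer(getUniqueString(), createAnonymousAddress());
    }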

Smells:

  • Obscure test because of irrelevant information.
  • Patterns: Generated values, creation method (anonymous and parameterized), testcase class per feature, custom assertions

Suggestion: Call a method “AssertFoo” if it just asserts, “VerifyFoo” if it does some setup and then asserts.

Hard to Test Code

  • Too closely coupled to other software
  • No interface provided to set state, observe state
  • Only asynchronous interfaces provided. E.g., GUI.
  • Root cause is lack of design for testability
  • Temporary workaround: Test Hook. E.g., if (IsTesting) { … } else { … }

Test Double Patterns (different things we replace real code with when we’re testing)

  • Kinds of Test Doubles
    • Test Stubs return test-specific values
    • Mock Objects also verify method calls and arguments
    • Fake Objects provide (apparently) same services in a “lighter” way, e.g., in-memory database for speed
  • Need to be “installed”
    • Dependency Injection
    • Dependency Lookup
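
A Test Stub installed via Dependency Injection, sketched with hypothetical names (RateService, InvoiceCalculator):

    import org.junit.Test;

    interface RateService {
        double currentRate(String currency);
    }

    class StubRateService implements RateService {
        @Override
        public double currentRate(String currency) {
            return 1.25;    // canned, test-specific value
        }
    }

    class InvoiceCalculatorTest {
        @Test
        public void usesTheInjectedRate() {
            // Dependency Injection: the test passes the stub where
            // production code would pass the real rate service.
            InvoiceCalculator calculator = new InvoiceCalculator(new StubRateService());
            // ...assert on calculator's behavior here
        }
    }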

Testability Patterns

  • Humble Object
    • Objects closely coupled to the environment should not do very much
    • Should delegate real work to a context-independent testable object
    • Example: Humble Dialog. Don’t test the dialog itself; instead have it immediately create a helper that has the logic (sketched after this list).
  • Dependency Injection: client passes depended-on objects
  • Dependency Lookup: code asks another object for its dependencies. Service Locator, Object Factory, Component Registry.
  • Test-specific subclass. Descend from the real object, and override the stuff you want to fake.
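
A Humble Dialog sketch (names hypothetical): the dialog immediately delegates to a helper that holds the logic and is testable without a GUI:

    class RenamePresenter {
        // Context-independent logic: unit-test this class directly.
        String validate(String newName) {
            return newName.isBlank() ? "Name required" : null;
        }
    }

    // The humble part: too coupled to the GUI to test, so it does almost nothing.
    class RenameDialog {
        private final RenamePresenter presenter = new RenamePresenter();

        void okClicked(String newName) {
            String error = presenter.validate(newName);
            if (error != null) {
                showError(error);   // GUI plumbing stays in the dialog
            }
        }

        void showError(String message) { /* message box */ }
    }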

Test Logic in Production

  • if (Testing) { … }
  • Test code gets compiled in because production code depends on it

Behavior Smells

  • We don’t have to look for them. They come knocking. Usually at the most inopportune time.
  • Tests fail when they should pass (or pass when they should fail)
  • Problem is with how tests are coded, not a problem with the code under test
  • Occasionally compile-time, usually test-time

Slow Tests

  • Tests must run “fast enough”.
  • Impact
    • Impacts productivity.
    • Lost quality due to running tests less frequently.
  • Causes
    • Using slow components (e.g., database)
    • Asynchronous
    • Building a general fixture (that sets up too much stuff)

Brainstormed ways to avoid slow tests

  • Share the fixture between tests
  • Avoid DB / external service access
  • Fake Objects
  • Eliminate redundant tests (fewer tests)
  • Run slow tests less often
  • Make individual tests run faster
  • Faster hardware
  • Multi-threaded execution
  • Smaller datasets (fixtures or inputs)
  • Faster production code
  • Run fewer tests (subset)
  • Test logic more directly
  • Faster test execution (high-performance language)
  • Run less production code in each test (more focused)

Shared Test Fixture

  • Not recommended
  • The theory: Improves test run times by reducing setup overhead
  • Forces a standard test environment that applies to all tests
    • which probably forces the fixture to be bigger
  • Bad smell alert: Erratic tests

Smell: Erratic Tests

  • Interacting tests (depend on side effects of earlier tests)
  • Unrepeatable tests (later test changes the initial state that an earlier test depends on, so you get different behavior when you run all the tests twice)
  • Test Run War (if e.g. everyone uses the same database: random test failures when multiple people run tests at once)
  • Non-Deterministic Test (passes, then run it ten minutes later and it fails)
  • Resource Optimism (all pass on my PC, but fail on build machine)

Persistent Fresh Fixture

  • Rebuild fixture for each test and tear it down
    • At end of this test
    • At start of next test that uses it (just in time)
  • Build different fixture for each test (e.g., different key value)
  • Give each developer their own database sandbox
  • Don’t change shared fixture
    • Immutable Shared Fixture
    • What constitutes a “change” to a fixture? Which objects need to be immutable? If you add a flight, is that a change to the airport?
  • Build a new Shared Fixture each time you run the test suite

Fragile Tests

  • Stops working after a while
  • Interface Sensitivity
    • Every time you change the code, tests won’t compile or start failing
    • Need to modify lots of tests to get things green again
    • Greatly increases cost of maintaining the system
  • Behavior sensitivity
    • Behavior of code changes but should not affect test outcome
    • Caused by depending on too much of the code’s behavior
  • Centralize object creation, so when you change the constructor signature, you only have to fix it once
  • Data sensitivity (aka Fragile Fixture)
    • If your test depends on what data is in the database, then someone else can change it and your tests will fail
  • Context Sensitivity
    • Something changes outside the code (date/time, contents of another app)
    • If current date is this, then expected result is this; else …
  • Use stable interfaces
  • Bypass GUI
  • Encapsulate API from yourself (creation methods, etc.)
  • Minimal fresh fixture
    • Custom design for each test
  • Test stubs

Assertion Roulette

  • Failure shows up in output, and you can’t tell which assertion failed
  • When you can’t reproduce in your IDE, you have no idea what’s going wrong
  • Solution: Add assertion messages
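
For example (JUnit’s message-first asserts; names hypothetical):

    assertEquals("flight count after add", 1, itinerary.size());
    assertEquals("first flight in itinerary", flight, itinerary.get(0));
    // A failure now reads "flight count after add expected:<1> but was:<0>"
    // instead of an anonymous "expected:<1> but was:<0>".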

Agile 2006: Delivering Flawless Tested Software Every Iteration

Thursday, July 27th, 2006

Delivering Flawless Tested Software Every Iteration
Alex Pukinskis
Tuesday morning

Wow. Standing-room only. (Well, sitting-on-the-floor-at-the-sides-of-the-room room only, anyway.)

Recommended reading:

Random notes from throughout the session:

  • If the devs finish a story and the customer says “Oh, but I wanted it to do this” about something that wasn’t part of the initial conversation: if that additional change won’t fit into the current iteration, it should probably become another story.
  • Under Agile, we experience the same old problems with quality, just more often. Instead of once or twice a year, now the problems surface once a week.
  • Common organizational problem: Not enough testers for the size of the development group. If this is a problem under agile, it was probably a problem before; but again, agile makes it visible more often.

Core principles of Agile testing

  • When do we make sure the system really runs?
    • Traditional: end of project (wait to address the risk until late in the project)
    • Transitional: A few key milestones
    • Agile: The system always runs.
      • If it’s not running, we don’t know how long it will take to get it running again. Solution: never put it into a non-running state to begin with.
  • When are tests written and executed?
    • Traditional: After development, at the end of the project
    • Transitional: After development, each iteration (may write tests concurrently with development, but run them after)
    • Agile: No code is written except in response to a failing test. This is supposed to mean 100% code coverage. (And then there are those of us in the real world… but it’s a good goal to aspire to.)
  • How do we manage defects?
    • Traditional: Identify during the testing phase; log in a bug tracking tool
    • Transitional: Identified during development; logged to fix later
    • Agile: (Want) zero post-iteration bugs; new features are never accepted with defects
  • Question put to audience: do we think “zero post-iteration bugs” is possible? Why not?
    • Some answers: Tests might be badly coded, or might be testing the wrong thing; test authors might not know what all to test.
  • Ideal: Don’t sign off on the story if there are any bugs, even low-severity. Slippery slope.
  • If a story has a bug, fix it right away. If it’s the end of the iteration, the customer needs to say either “fix it first thing next iteration” or “forget it — back out the whole story”. Yes, the customer can bail on features if they get too expensive.

Communicate

  • Even in an Agile project, analysis is still going on much like it does in Waterfall. It’s just communicated face-to-face instead of being put into a binder.
  • Agile development depends on disciplined communication around requirements and testing.

Commit

  • If a team doesn’t commit to delivering flawless software every iteration… they won’t deliver flawless software every iteration!
  • At the end of the planning game, ask the developers how willing they are to commit to the plan: come hell or high water, we’re getting this story set done, running, tested, and bug-free.
  • Use the “fist of five”: everyone holds up 1-5 fingers
    • 5 fingers: Great idea! I’m going to send an e-mail out to everyone to say how great we did on this iteration plan!
    • 4 fingers: Good idea.
    • 3 fingers: I can live with and support this idea.
    • 2 fingers: I have some reservations.
    • 1 finger: I can’t support this. (BTW, use the index finger, please!)
  • Walking out of the iteration plan, you want everyone to be at 3+, saying that they’ll make this happen, putting in overtime if necessary to get it done with quality.
  • Alter scope if needed to get everyone to a 3+.

Automate

  • Without significant automated-test coverage, it’s too hard to know whether the system is running.
  • Problems:
    • Not enough information to write tests while the coders are coding
    • Brittle tests
    • Testing the GUI
    • Waiting for someone else to write the automation
    • Infrastructure
    • Legacy code
    • People who would rather make more features
      • If you give in to this temptation, you will spend all your time debugging

Untested code has no value

  • …because we don’t know what’s wrong with it, so we don’t know how bad it is, so we don’t know if we can ship it
  • Ron’s “Running Tested Features” metric
  • If you spend time writing code with no tests, you have tied money up in features we can’t get value from yet. Compare to lean production.

Traditional balance of tests:

  • Many manual GUI acceptance tests. Easy to create, familiar, but slow.
  • Some automated GUI tests. Need specialists to create.
  • Few unit tests

Agile: Mike Cohn’s Testing Pyramid

  • Few GUI acceptance tests, most of them automated.
  • Some FitNesse tests, driving development and acceptance.
  • Many unit tests. Use Test-Driven Development.

Acceptance vs. Developer tests

  • Acceptance: Does it do what the customer expects? Build the right code.
  • Developer: Does it do what the developer expects? Build the code right.
  • We won’t talk about developer tests or TDD today. It’s a separate topic, it’s hard, and besides, you can’t leverage your TDD success if you’re writing the wrong code.

Acceptance Tests define the business requirements

  • Also known as Customer Tests, Functional Tests
  • Can include Integration, Smoke, Performance, and Load tests
  • Defines how the customer wants the system to behave
  • Executable requirements documents
  • Can be run any time by anyone
  • Specify requirements, not design

Creating acceptance tests is collaborative

  • There’s “acceptance criteria” and then there’s “acceptance tests”. Criteria are things written down for people to look at. Tests are things in executable test frameworks for the computer to run.
  • Acceptance criteria specified by Customer at start of iteration
  • If you’re serious about lean, acceptance tests will be written just before the developers implement the feature
  • When possible, the Customer writes the acceptance tests
  • Shift in thinking about how projects run
  • Many Customers don’t know how to write acceptance tests
  • May require more information than we have when we’re estimating the story
  • Involves the testers earlier in the process

Testing can improve gradually

  • First, work on getting good stories…
  • Then, work on getting good acceptance criteria (in advance)…
  • Then work on getting acceptance tests in advance.
  • Want to have acceptance criteria by the end of the planning meeting.

Stories drive the development process.

  • Tasks aren’t worth accepting
  • Use Cases are too large
  • Use Case Scenarios are OK
  • User stories are just right

What is a User Story?

  • Small piece of business value that can be delivered in an iteration
  • As ________, I want to ________ so that I ________.
  • 3 parts:
    • Card: placeholder so we remember to build this feature
    • Conversation: get details before starting coding. What does this mean?
    • Confirmation: tests to confirm we did it right

Guidelines

  • Start with Goal stories (As ________, I…)
  • Write in “cake slices” (vertical slices)
  • Good stories fit the INVEST criteria
    • Independent
    • Negotiable
    • Valuable
    • Estimatable
    • Small
    • Testable

Common Blunders

  • Too large / small
  • Interdependent
  • Look like requirements docs
    • Should be a conversation starter, not a requirement
    • When you start specifying details, the coder might assume you’ve specified all the details, and not build what’s not in the spec
  • Implementation details
  • Technical stories (lose the ability to prioritize)

Each story includes acceptance criteria

  • Customer makes first pass at defining criteria before iteration planning
  • During the plan, discuss the criteria
  • Negotiate between implementers and customer
  • Should be short, easy to understand

Question: Why do this at the iteration planning meeting, rather than waiting and doing it just-in-time before the developers start coding? Answer: knowing the criteria will clarify the scope, so we can give better estimates; and it will help catch big misunderstandings sooner.

How to document acceptance criteria?

  • Can write on the back of the card
  • Can use a wiki, etc. (We could probably use Trac.)

Don’t make the iteration longer to accommodate tests. If you do, you’ll just get feedback less often.

  • If you used to get 5 stories done each iteration, and in your first all-out-on-testing iteration you only get 2 done, that’s fairly normal. It will speed up over time.

Story (apocryphal?) about a manufacturing plant that used to be owned by GM. GM closed the plant; Toyota bought it and hired all the same people. Quality went up.

There was a wire people could pull that would stop the entire production line. Under GM, people were told “never pull that wire”. So if someone didn’t have time to get something done, they’d just do a quick patch — if there’s no time to put on a bolt, then leave it off, or do a spot-weld instead — since QA would catch it, and would redo it properly.

Under Toyota, people were told “pull that wire any time something isn’t right”. It took them a month to get that first car out the door.

But later, once they were up to full speed again, someone was touring the Toyota plant, and said, “Your production line is only taking up about a quarter of the space in this building. What’s the extra for?” The answer: “Oh, that’s where QA used to be.”


Getting acceptance criteria during planning

  • If the Customer can’t answer the devs’ questions on “how do we know when we’re done?”, then we can’t commit to building it.
  • These discussions take time, especially at first when the Customer doesn’t know how to write acceptance criteria at the right level of detail.
  • As we identify the acceptance criteria, they may give us a way to break stories down.

FIT

  • Can test whatever you want; you just need to write a fixture that does what you want, and FIT calls it
  • Tests can be hooked in at any level
    • Testing from outside the UI is possible, but brittle
    • Great to test just under the UI level (test what the controller calls)
    • Test at service interfaces
    • If you’ve hooked your fixtures in at the wrong place, it’s easy to change without invalidating the tests
    • Acceptance tests should test that the whole system works (minimize use of mocking)
  • Column Fixture for calculations. A column for each input, a column for each expected output. Multiple rows = multiple tests. (Sketched just after this list.)
  • Row Fixture to test lists.
    • Verifies that search / query results are as expected
    • Check that group, list, set are present
    • Order can be important or unimportant
  • Action Fixture tests a sequence of actions
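
Here’s my attempt at a minimal Column Fixture in Java, based on the FIT docs rather than anything demoed in the session. The DiscountFixture name, the table, and the 5%-over-$100 rule are all invented; the parts that are FIT’s are the fit.ColumnFixture base class and the binding conventions (public fields for input columns, public methods for “()” output columns).

    // Hypothetical wiki table this fixture would back:
    //   |DiscountFixture|
    //   |orderTotal|discount()|
    //   |50.00|0.00|
    //   |200.00|10.00|

    import fit.ColumnFixture;

    public class DiscountFixture extends ColumnFixture {
        // Input columns bind to public fields by name.
        public double orderTotal;

        // Output columns (headers ending in "()") bind to public methods;
        // FIT calls the method and checks the result against the cell.
        public double discount() {
            // A real fixture would delegate to the system under test.
            return orderTotal > 100 ? orderTotal * 0.05 : 0.0;
        }
    }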

Bringing it all together

  • Using sequences of tables
  • Build / Operate / Check pattern (fitnesse.org; sketched just after this list)
    • One or more tables to build the test data (ColumnFixture)
    • Use a table to operate on the data
    • Use a ColumnFixture or RowFixture to verify the results
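
If I’ve understood the pattern, a Build / Operate / Check page is just a sequence of tables shaped like the sketch below. Every fixture, column, and account name here is invented; the point is the shape: one table builds the data, one acts on it, one verifies the result.

    |BuildAccounts|
    |owner|balance|
    |alice|100.00|
    |bob|250.00|

    |TransferMoney|
    |from|to|amount|succeeded()|
    |alice|bob|25.00|true|

    |CheckBalances|
    |owner|balance|
    |alice|75.00|
    |bob|275.00|

BuildAccounts and TransferMoney would presumably be ColumnFixtures, and CheckBalances a RowFixture that queries the resulting balances.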

FitLibrary provides additional fixtures

  • Common fixture patterns abstracted into a framework
  • Allows more sophisticated tests
    • DoFixture — testing actions on domain objects (sketched just after this list)
    • SetupFixture — repetitive data entry at start of test
    • CalculateFixture — alternative to ColumnFixture
    • ArrayFixture — ordered lists handled automatically
    • SubsetFixture — test specific elements of a list
  • Can operate directly on domain objects without writing custom fixtures
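
I haven’t written one of these, but from the FitLibrary docs a DoFixture sketch might look like the Java below. The whole banking domain (Bank, the account names, the amounts) is invented; the part that’s FitLibrary’s is the naming convention, where the odd-numbered cells of a row concatenate into a camel-cased method name.

    import fitlibrary.DoFixture;

    // Invented in-memory stand-in for the real system under test.
    class Bank {
        private final java.util.Map<String, Double> accounts =
                new java.util.HashMap<String, Double>();
        void open(String owner, double balance) { accounts.put(owner, balance); }
        boolean transfer(String from, String to, double amount) {
            Double f = accounts.get(from);
            if (f == null || f < amount) return false;
            accounts.put(from, f - amount);
            Double t = accounts.get(to);
            accounts.put(to, (t == null ? 0.0 : t) + amount);
            return true;
        }
        double balance(String owner) {
            Double b = accounts.get(owner);
            return b == null ? 0.0 : b;
        }
    }

    public class BankingFixture extends DoFixture {
        private final Bank bank = new Bank();

        // |account|alice|with balance|100.00|
        public void accountWithBalance(String owner, double balance) {
            bank.open(owner, balance);
        }

        // |transfer|25.00|from|alice|to|bob|
        public boolean transferFromTo(double amount, String from, String to) {
            return bank.transfer(from, to, amount);
        }

        // |check|balance of|alice|75.00|
        public double balanceOf(String owner) {
            return bank.balance(owner);
        }
    }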

Writing Fixtures

  • Fixture code should be as simple as possible
  • Simple objects inherit from appropriate fixtures (see the Row Fixture sketch just after this list)
  • Public member variables and methods (AAAGH!?!)
    • Yes, the “AAAGH!?!” was in the original slide
  • Develop System Under Test using TDD
  • When development is done, wire up the fixtures and run the FIT tests
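
The public members are less alarming in context: a Row Fixture only needs the two methods fit.RowFixture asks for, plus a simple result class whose public fields match the column headers. OpenOrdersFixture, Order, OrderStore, and the data below are all invented for illustration.

    import fit.RowFixture;

    public class OpenOrdersFixture extends RowFixture {
        // Return the actual objects; FIT diffs them against the table rows.
        public Object[] query() throws Exception {
            return OrderStore.openOrders();
        }

        // Tells FIT which class the column headers bind to.
        public Class getTargetClass() {
            return Order.class;
        }
    }

    // The "simple object": public fields, so columns |id|quantity| just work.
    class Order {
        public String id;
        public int quantity;
    }

    // Invented stand-in for the system under test.
    class OrderStore {
        static Object[] openOrders() {
            Order o = new Order();
            o.id = "A-17";
            o.quantity = 3;
            return new Object[] { o };
        }
    }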

Organizing FIT tests

  • Note: “Suites” in FitNesse are basically a way to include other tests (or other suites).
  • Maintain a suite of regression tests (called something like RegressionSuite) from past iterations. These should always pass.
  • Run regression suite with the build.
  • Maintain a suite of “in progress” tests for the current iteration
    • Begin the iteration with all tests failing
    • End the iteration with most tests passing
  • At the end of the iteration, move newly passing tests into the regression suite
    • Beware the FitNesse “Refactor / Move” command. At one point in time, it was horribly broken. Unknown whether it’s fixed.

FitNesse Configuration Tricks

  • (Note: I know very little about FIT, and the material from this point on didn’t get demoed during the session, so I can’t vouch for the details.)
  • Use root page for classpath settings (sketched just after this list)
  • On developer machines, set root page to import its subpages from the central wiki (http://localhost:8080/?properties)
  • Manually edit XML files
  • Use variables for hardcoded filenames
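
Again, I haven’t tried this, but from the FitNesse docs the root-page settings look roughly like this (the paths and the TEST_DATA variable are invented):

    !path fitnesse.jar
    !path build/classes

    !define TEST_DATA {testdata}

Test tables would then say something like ${TEST_DATA}/customers.xml instead of hardcoding the path, which I take to be the point of the “variables for hardcoded filenames” bullet.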

Day-to-day with FitNesse

  1. Product owners / testers / analysts create tests on the central wiki
  2. Developers verify that table design makes sense and fixtures can be written
  3. Developers update their local replicas of FIT tests
  4. Developers use TDD to implement features
  5. Developers try to run FIT tests locally
  6. Developers work with testers to get FIT tests passing locally (negotiate if needed)
  7. Developers integrate changes
  8. Continuous build verifies that changes work centrally

Other Agile testing tools

  • Open source Web UI testing tools
    • WATiR
    • Selenium
    • Canoo Web Test
  • Go the “last mile” to verify things fit together
  • Tests written and maintained incrementally
  • Tend to be more brittle

Meta-Agile 2006: How much detail do you guys want to see?

Tuesday, July 25th, 2006

So you’ve probably noticed that some of my posts from the con are fairly focused, just touching on the most important points from the session, while others are massive brain-dumps of just about every word said.

My question to you: Which is better?

I tend to think that there’s value in both, in certain circumstances. The braindumps have the benefit of being comprehensive, but the downside is that no sane person is going to sit down and read one of those braindump posts from beginning to end. At best they’re going to skim, but probably not even that.

I like the summaries because they do give me a chance to pick the points I think are most valuable, and the points I’m most interested in taking back to our team. But they take considerably more time, and they lose a lot of the detail.

What do you guys think? More braindumps? More summaries? How about all the detail of a braindump, but broken apart into bite-sized posts? What would you find most useful?

Promiscuous Pairing and Beginner’s Mind

Tuesday, July 25th, 2006

I had heard that someone had done some research on optimal pairing times. Last night I found the original research paper, entitled “Promiscuous Pairing and Beginner’s Mind” [PDF]. I found it because another paper is being presented this year, by a team that tried the same thing and is presenting their own results.

The original team found they achieved the highest velocity when they:

  • Assigned tasks to the least-qualified person
  • Swapped people out as the tasks went on, i.e., there was no one person who stayed with the task from start to finish
  • Switched pairs every 90 minutes

New people ramped up very quickly. A new hire, after three weeks, was expert enough to mentor another new hire. Wow.

The devs reported that they felt frustrated with the 90-minute pair switching, as if they weren’t getting much done, but in fact they got the highest velocity that way.

One of the things that I found really interesting was that, when they were switching pairs every 90 minutes, they didn’t get tired. But when one person on their (then-small) team went on vacation, they couldn’t swap, so they just stuck with the same pair all day. And they couldn’t sustain it, because it was just exhausting.

I can attest that pairing all day can be very draining. Would shorter pair cycles help us keep the energy up? Have to talk to our team about it when we get back.

This year, there’s an Experience Report from a team at Microsoft who tried it. It reads a bit like “We Tried Baseball and It Didn’t Work”, but they ended on a positive note, saying they think it would’ve worked if outside issues hadn’t forced them to stop early. But they did say they thought it was a 400- or 500-level technique, not something to try unless your team is already really good at pairing. They compared it to a roller-coaster ride.

Our team has been doing pairing for about a year and a half now. Would this stuff work for us? Definitely going to talk to the team when we get back, and weigh the pros and cons. If we do try it, I’ll definitely let y’all know how it works.

Agile 2006: The Lego XP Game

Monday, July 24th, 2006

They put us in a small room, with only 6 tables. And the room isn’t full. Probably only about 30 people in here. Tough to believe that you can give a thousand geeks a chance to play with LEGOs, and this is all that shows up. [Later I found out that they asked for a small room on purpose, so the game and the acoustics would be manageable.]

The setup

So here’s how the game worked:

  • You work for AnimalCo…
  • …and are helping develop a brand new animal!
  • The presenters are your customers.
  • You are Agile!

We went in 10-minute iterations (those are 10 ideal minutes, mind you — there was actually a good bit of talking between each stage, but hey):

  • Story estimation — 3 minutes. We acted as developers, estimating how much time / how hard we thought each story would be. Each story got rated as Easy, Medium, or Hard.
  • Story signup (planning game) — 3 minutes. In this stage, we acted as the business. Each card conveniently had a business value (in dollars) written on it. Based on the value and the difficulty, we decided which cards we would commit to for the iteration. (Yes, in real life, this is supposed to be done by the customer.)
  • Development — 4 minutes. To your LEGO!

It was a bit of an odd situation, since we were picking which stories should go in, but we did have customers defining the acceptance criteria. Usually those would be combined, but I’m sure they did it on purpose, so we’d get the flavor of everything.

How it went

In the first iteration, there were stories for things like “Give the animal two legs”, “Give the animal wings”, “Give the animal a head”, “Give it at least two eyes”, etc. In later iterations, we got interesting monkeywrenches thrown in, like “Make the animal striped” (which was one of the ones at the maximum business value of $500). Imagine taking an existing LEGO animal and changing its color, especially when you’re in a room with five other teams, and the LEGO bricks are a limited resource. One team did an incredible job of it. Ours negotiated with the customer: “Just how much of the animal needs to be striped? Can we just do the legs and torso?”

(Yeah, our group ended up with a torso. We originally just attached the head directly to the legs, which the customer was fine with, but later we added a torso for the “make the animal at least 10 cm high” story. When the “give the animal a second head” story came along, we didn’t have a convenient place to attach it, so it wound up with a second head growing out of its feet. The customer was OK with this too, although he didn’t sign off on the card the first time because the second head didn’t have a recognizable chin.)

It took us a while to get the hang of the “customer” role they had in mind. Some debate ensued, for example, when Team A did not attach the legs to anything, since there was no story for adding a body — so they just made legs. (The customer did not accept that story as done. I’m pretty sure they got it in the second iteration.) It also turned out that they had expected us to get the customers’ signoff during the iteration, rather than waiting until the end. (In four minutes they want us to build and get signoff? When each customer’s time is divided between two tables? But then we do know the story’s going to get signed off on.)

Thing was, it worked. Again, I’m pretty sure that it was on purpose that they didn’t tell us everything in advance, because it made it a lot more memorable. If they had told us to get the customer’s signoff as soon as each story was done, then we would have done it that way. But as it was, not only did we find it out, and not only did we find it out in a more memorable way than having it buried in “here’s the umpteen things we want you to do”, but we also found out why it matters: because the customer might want those wings to actually support the animal’s weight, and might pick it up by the wings to test it. (It did pass, just barely.)

Retrospectives

We did three iterations, and we did a retrospective at the end of each one. At the retrospectives, we didn’t just talk about random things we noticed; they gave us a couple of tools for structuring the retrospective. These are definitely take-homes. I have no idea whether they would or wouldn’t work for our team, but we should definitely try them.

Tool #1: Draw two columns, and fill in:

  • What’s going well?
  • What do you want to change?

(As opposed to “what’s going badly”. Keep it positive.) We used this tool at the end of the first iteration. The “going-well” list included things like simplicity, teamwork, early feedback from the customer (of course, some groups did better at that than others). The “change” list included explicit sign-off, common understanding, simplicity again. They included the customers in the retrospective as well, and one of them said (jokingly, I think) that he wanted to change his whole team (grin). I think that was the team that didn’t attach the legs…

Tool #2: Draw five spokes, to divide the writing area into five pie slices. Label them

  • What do we want to start doing?
  • What do we want to do more of?
  • What do we want to keep doing?
  • What do we want to do less of?
  • What do we want to stop doing?

I only jotted down one of the things we came up with this time: for “what do we want to stop doing?”, one group said “antagonizing the customer”. (I think it was a different group this time.)

One interesting idea they suggested was what I think they called “retrospective acceptance tests”. For things you say you want to do differently (or that you want to keep doing, for that matter), ask “When we revisit this, how can we tell whether it worked?” That’s going to be an interesting question to consider.

Other stuff

Big Visible Charts — “Information Radiators”

  • Being surrounded by information is part of XP.
  • We should always know how this iteration is going, and how well the release is going.
  • Big Visible Charts are a simple and easy way to report progress.

Iteration Burn-Up Chart: Target vs. Progress. How much would we expect to have finished by the end of the day on Wednesday? How much did we have finished?

Watch the velocity. If it dips, either stories are getting harder, or you need to refactor.

Agile methodologies are not prescriptive. Most agile failures involve people reading an agile book and taking it as a rulebook, and applying it rigidly. Not so much the point of being agile. (Back to practices being easier than principles.)

Best way to introduce agile into a big waterfall shop: Use it on pilot projects. Gain their trust.

Here’s an interesting suggestion for something to keep an eye on while estimating: a RAID log. RAID stands for Risks, Assumptions, Issues, and Dependencies. Capture assumptions during estimation, escalate (to the customer) those that we think need it. Same with risks. When does a risk become an issue? Make this stuff big and visible. Put it on a wall.

When you’re blocked waiting for the customer’s input: Defer decisions to the last responsible moment. Making the wrong decision amounts to wasting the customer’s money, but so does doing nothing. If it becomes necessary, make a decision, and confirm with the customer as soon as possible.

Conclusions

  • Agile methods like XP should not be overly prescriptive. Again, this is coming back to the whole “practices are easy, principles are hard” thing. One page I happened across today referred to cargo-culting, which I think is an extreme manifestation of the same thing.
  • The team itself, not a book, should define how best it should work
    • But knowing many agile methodologies will help
  • Agile = empowering the team to constantly strive for improvement, and creating an environment in which constant improvement is possible

