Agile 2006: Delivering Flawless Tested Software Every Iteration

Alex Pukinskis
Tuesday morning

Wow. Standing-room only. (Well, sitting-on-the-floor-at-the-sides-of-the-room room only, anyway.)

Recommended reading:

Random notes from throughout the session:

  • If the devs finish a story and the customer says “Oh, but I wanted it to do this” about something that wasn’t part of the initial conversation: if that additional change won’t fit into the current iteration, it should probably become a separate story.
  • Under Agile, we experience the same old problems with quality, just more often. Instead of once or twice a year, now the problems surface once a week.
  • Common organizational problem: Not enough testers for the size of the development group. If this is a problem under agile, it was probably a problem before; but again, agile makes it visible more often.

Core principles of Agile testing

  • When do we make sure the system really runs?
    • Traditional: end of project (wait to address the risk until late in the project)
    • Transitional: A few key milestones
    • Agile: The system always runs.
      • If it’s not running, we don’t know how long it will take to get it running again. Solution: never put it into a non-running state to begin with.
  • When are tests written and executed?
    • Traditional: After development, at the end of the project
    • Transitional: After development, each iteration (may write tests concurrently with development, but run them after)
    • Agile: No code is written except in response to a failing test. This is supposed to mean 100% code coverage. (And then there are those of us in the real world… but it’s a good goal to aspire to.)
  • How do we manage defects?
    • Traditional: Identify during the testing phase; log in a bug tracking tool
    • Transitional: Identified during development; logged to fix later
    • Agile: (Want) zero post-iteration bugs; new features are never accepted with defects
  • Question put to audience: do we think “zero post-iteration bugs” is possible? Why not?
    • Some answers: Tests might be badly coded, or might be testing the wrong thing; test authors might not know what all to test.
  • Ideal: Don’t sign off on the story if there are any bugs, even low-severity ones. Accepting “just one little bug” is a slippery slope.
  • If a story has a bug, fix it right away. If it’s the end of the iteration, the customer needs to say either “fix it first thing next iteration” or “forget it — back out the whole story”. Yes, the customer can bail on features if they get too expensive.

Communicate

  • Even in an Agile project, analysis is still going on much like it does in Waterfall. It’s just communicated face-to-face instead of being put into a binder.
  • Agile development depends on disciplined communication around requirements and testing.

Commit

  • If a team doesn’t commit to delivering flawless software every iteration… they won’t deliver flawless software every iteration!
  • At the end of the planning game, ask the developers how willing they are to commit to the plan: come hell or high water, we’re getting this story set done, running, tested, and bug-free.
  • Use the “fist of five”: everyone holds up 1-5 fingers
    • 5 fingers: Great idea! I’m going to send an e-mail out to everyone to say how great we did on this iteration plan!
    • 4 fingers: Good idea.
    • 3 fingers: I can live with and support this idea.
    • 2 fingers: I have some reservations.
    • 1 finger: I can’t support this. (BTW, use the index finger, please!)
  • Walking out of the iteration plan, you want everyone to be at 3+, saying that they’ll make this happen, putting in overtime if necessary to get it done with quality.
  • Alter scope if needed to get everyone to a 3+.

Automate

  • Without significant automated-test coverage, it’s too hard to know whether the system is running.
  • Problems:
    • Not enough information to write tests while the coders are coding
    • Brittle tests
    • Testing the GUI
    • Waiting for someone else to write the automation
    • Infrastructure
    • Legacy code
    • People who would rather make more features
      • If you give in to this temptation, you will spend all your time debugging

Untested code has no value

  • …because we don’t know what’s wrong with it, so we don’t know how bad it is, so we don’t know if we can ship it
  • Ron Jeffries’ “Running Tested Features” metric
  • If you spend time writing code with no tests, you have tied money up in features you can’t get value from yet. Compare to work-in-progress inventory in lean production.

Traditional balance of tests:

  • Many manual GUI acceptance tests. Easy to create, familiar, but slow.
  • Some automated GUI tests. Need specialists to create.
  • Few unit tests

Agile: Mike Cohn’s Testing Pyramid

  • Few GUI acceptance tests. Small number of these, most of them automated.
  • Some FitNesse tests, driving development and acceptance.
  • Many unit tests. Use Test-Driven Development.

Acceptance vs. Developer tests

  • Acceptance: Does it do what the customer expects? Build the right code.
  • Developer: Does it do what the developer expects? Build the code right.
  • We won’t talk about developer tests or TDD today. It’s a separate topic, it’s hard, and besides, you can’t leverage your TDD success if you’re writing the wrong code.

Acceptance Tests define the business requirements

  • Also known as Customer Tests, Functional Tests
  • Can include Integration, Smoke, Performance, and Load tests
  • Defines how the customer wants the system to behave
  • Executable requirements documents
  • Can be run any time by anyone
  • Specify requirements, not design

Creating acceptance tests is collaborative

  • There’s “acceptance criteria” and then there’s “acceptance tests”. Criteria are things written down for people to look at. Tests are things in executable test frameworks for the computer to run.
  • Acceptance criteria specified by Customer at start of iteration
  • If you’re serious about lean, acceptance tests will be written just before the developers implement the feature
  • When possible, the Customer writes the acceptance tests
  • Shift in thinking about how projects run
  • Many Customers don’t know how to write acceptance tests
  • May require more information than we have when we’re estimating the story
  • Involves the testers earlier in the process

Testing can improve gradually

  • First, work on getting good stories…
  • Then, work on getting good acceptance criteria (in advance)…
  • Then work on getting acceptance tests in advance.
  • Want to have acceptance criteria by the end of the planning meeting.

Stories drive the development process.

  • Tasks aren’t worth accepting
  • Use Cases are too large
  • Use Case Scenarios are OK
  • User stories are just right

What is a User Story?

  • Small piece of business value that can be delivered in an iteration
  • As ________, I want to ________ so that I ________. (For example: “As a frequent flyer, I want to rebook a past trip so that I save time.”)
  • 3 parts:
    • Card: placeholder so we remember to build this feature
    • Conversation: get details before starting coding. What does this mean?
    • Confirmation: tests to confirm we did it right

Guidelines

  • Start with Goal stories (As ________, I…)
  • Write in “cake slices” (vertical slices)
  • Good stories fit the INVEST criteria
    • Independent
    • Negotiable
    • Valuable
    • Estimatable
    • Small
    • Testable

Common Blunders

  • Too large / small
  • Interdependent
  • Look like requirements docs
    • Should be a conversation starter, not a requirement
    • When you start specifying details, the coder might assume you’ve specified all the details, and not build what’s not in the spec
  • Implementation details
  • Technical stories (lose the ability to prioritize)

Each story includes acceptance criteria

  • Customer makes first pass at defining criteria before iteration planning
  • During the plan, discuss the criteria
  • Negotiate between implementers and customer
  • Should be short, easy to understand (e.g., for a password-reset story: “reset link expires after 24 hours”; “user gets a confirmation email”)

Question: Why do this at the iteration planning, rather than waiting and doing it just-in-time before the developers start coding? Answer: knowing the criteria will clarify the scope, so we can give better estimates; and it will help catch big misunderstandings sooner.

How to document acceptance criteria?

  • Can write on the back of the card
  • Can use a wiki, etc. (We could probably use Trac.)

Don’t make the iteration longer to accommodate tests. If you do, you’ll just get feedback less often.

  • If you used to get 5 stories done each iteration, and the first iteration you go all-out on testing you only get 2 done, that’s fairly normal. It will speed up over time.

Story (apocryphal?) about a manufacturing plant that used to be owned by GM, then GM closed the plant and Toyota bought it and hired all the same people. Quality went up.

There was a wire people could pull that would stop the entire production line. Under GM, people were told “never pull that wire”. So if someone didn’t have time to get something done, they’d just do a quick patch — if there’s no time to put on a bolt, then leave it off, or do a spot-weld instead — since QA would catch it, and would redo it properly.

Under Toyota, people were told “pull that wire any time something isn’t right”. It took them a month to get that first car out the door.

But later, once they were up to full speed again, someone was touring the Toyota plant, and said, “Your production line is only taking up about a quarter of the space in this building. What’s the extra for?” The answer: “Oh, that’s where QA used to be.”


Getting acceptance criteria during planning

  • If the Customer can’t answer the devs’ questions on “how do we know when we’re done?”, then we can’t commit to building it.
  • These discussions take time, especially at first when the Customer doesn’t know how to write acceptance criteria at the right level of detail.
  • As we identify the acceptance criteria, they may give us a way to break stories down.

FIT

  • Can test whatever you want; you just need to write a fixture that does what you want, and FIT calls it
  • Tests can be hooked in at any level
    • Testing from outside the UI is possible, but brittle
    • Great to test just under the UI level (test what the controller calls)
    • Test at service interfaces
    • If you’ve hooked your fixtures in at the wrong place, it’s easy to change without invalidating the tests
    • Acceptance tests should test that the whole system works (minimize use of mocking)
  • Column Fixture for calculations. A column for each input, a column for each expected output. Multiple rows = multiple tests. (See the sketch after this list.)
  • Row Fixture to test lists.
    • Verifies that search / query results are as expected
    • Check that group, list, set are present
    • Order can be important or unimportant
  • Action Fixture tests a sequence of actions
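
The ColumnFixture sketch promised above. The package, class, column names, and discount rule are all invented for illustration. The wiki table might look like:

    |example.DiscountFixture|
    |order total|discount()|
    |800.00|0.00|
    |1200.00|60.00|

FIT binds each “order total” cell to a public field and checks the “discount()” column against the return value of a public method:

    package example;

    import fit.ColumnFixture;

    public class DiscountFixture extends ColumnFixture {
        public double orderTotal;    // input column "order total"

        // Output column "discount()": a made-up 5%-over-$1000 rule,
        // standing in for a call into the real system under test.
        public double discount() {
            return orderTotal >= 1000 ? orderTotal * 0.05 : 0.0;
        }
    }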

Bringing it all together

  • Using sequences of tables
  • Build / Operate / Check pattern (fitnesse.org; see the sketch after this list)
    • One or more tables to build the test data (ColumnFixture)
    • Use a table to operate on the data
    • Use a ColumnFixture or RowFixture to verify the results
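
A sketch of what one Build / Operate / Check page might look like. Every fixture, column, and value below is invented; the point is the shape of the sequence:

    Build (a table to enter the test data):
    |example.SetUpAccounts|
    |owner|balance|added?|
    |fred|500|true|
    |wilma|750|true|

    Operate (a table that acts on the data):
    |example.TransferMoney|
    |from|to|amount|completed?|
    |fred|wilma|200|true|

    Check (a RowFixture to verify the results):
    |example.AccountBalances|
    |owner|balance|
    |fred|300|
    |wilma|950|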

FitLibrary provides additional fixtures

  • Common fixture patterns abstracted into a framework
  • Allows more sophisticated tests
    • DoFixture — testing actions on domain objects (sketch after this list)
    • SetupFixture — repetitive data entry at start of test
    • CalculateFixture — alternative to ColumnFixture
    • ArrayFixture — ordered lists handled automatically
    • SubsetFixture — test specific elements of a list
  • Can operate directly on domain objects without writing custom fixtures
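
A taste of the DoFixture style mentioned above: in an action row, the odd cells are glued together into a method name and the even cells become arguments; the “check” keyword compares the last cell against the method’s return value. Everything below is an invented sketch, with a HashMap standing in for the real system under test:

    |example.BankingDo|

    |deposit|200|into|fred|
    |check|balance of|fred|200|

    package example;

    import java.util.HashMap;
    import java.util.Map;

    import fitlibrary.DoFixture;

    public class BankingDo extends DoFixture {
        // Toy stand-in for the domain; a real fixture would call the application.
        private final Map<String, Integer> balances = new HashMap<String, Integer>();

        // |deposit|200|into|fred| -> depositInto(200, "fred")
        public void depositInto(int amount, String owner) {
            Integer current = balances.get(owner);
            balances.put(owner, (current == null ? 0 : current) + amount);
        }

        // |check|balance of|fred|200| -> balanceOf("fred") compared to 200
        public int balanceOf(String owner) {
            Integer current = balances.get(owner);
            return current == null ? 0 : current;
        }
    }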

Writing Fixtures

  • Fixture code should be as simple as possible (see the sketch after this list)
  • Simple objects inherit from appropriate fixtures
  • Public member variables and methods (AAAGH!?!)
    • Yes, the “AAAGH!?!” was in the original slide
  • Develop System Under Test using TDD
  • When development is done, wire up the fixtures and run the FIT tests
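
The sketch promised above: a RowFixture subclass stays tiny because it only has to say what collection to compare and what type defines the columns. All names here are invented:

    |example.OpenOrdersFixture|
    |id|total|
    |A-1|500.0|
    |A-2|750.0|

    package example;

    import fit.RowFixture;

    public class OpenOrdersFixture extends RowFixture {
        // Tiny stand-in for a domain class; its public fields are the columns.
        public static class Order {
            public String id;
            public double total;
            public Order(String id, double total) {
                this.id = id;
                this.total = total;
            }
        }

        // Rows returned here get compared against the rows of the wiki table.
        public Object[] query() throws Exception {
            // A real fixture would query the system under test; hardcoded here.
            return new Object[] { new Order("A-1", 500.0), new Order("A-2", 750.0) };
        }

        public Class getTargetClass() {
            return Order.class;
        }
    }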

Organizing FIT tests

  • Note: “Suites” in FitNesse are basically a way to include other tests (or other suites).
  • Maintain a suite of regression tests (called something like RegressionSuite) from past iterations. These should always pass. (See the page-tree sketch after this list.)
  • Run regression suite with the build.
  • Maintain a suite of “in progress” tests for the current iteration
    • Begin the iteration with all tests failing
    • End the iteration with most tests passing
  • At the end of the iteration, move newly passing tests into the regression suite
    • Beware the FitNesse “Refactor / Move” command. At one point in time, it was horribly broken. Unknown whether it’s fixed.
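
The page-tree sketch mentioned above. The page names are made up, but the shape is the idea:

    FrontPage
      RegressionSuite        (Suite page: runs in the build, should always pass)
        LoginStories         (Test pages moved here from past iterations)
        DiscountStories
      InProgressSuite        (Suite page: the current iteration’s tests)
        RefundStories        (starts the iteration red, mostly green by the end)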

FitNesse Configuration Tricks

  • (note: I know very little about FIT, and stuff from this point on didn’t get demoed during the session; so I don’t know any details about this stuff.)
  • Use root page for classpath settings
  • On developer machines, set root page to import its subpages from the central wiki (http://localhost:8080/?properties)
  • Manually edit XML files
  • Use variables for hardcoded filenames
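
For instance, the root page might carry the shared settings. !path and !define are standard FitNesse wiki commands; the specific jar, directory, and variable names are invented:

    !path fitnesse.jar
    !path classes
    !define TEST_DATA {/home/build/testdata}

Test pages can then refer to ${TEST_DATA} instead of hardcoding the directory, so moving the data means editing one page.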

Day-to-day with FitNesse

  1. Product owners / testers / analysts create tests on the central wiki
  2. Developers verify that table design makes sense and fixtures can be written
  3. Developers update their local replicas of FIT tests
  4. Developers use TDD to implement features
  5. Developers try to run FIT tests locally
  6. Developers work with testers to get FIT tests passing locally (negotiate if needed)
  7. Developers integrate changes
  8. Continuous build verifies that changes work centrally
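
For step 8, FitNesse ships a command-line TestRunner that a continuous build can point at the central wiki; the invocation is something along these lines (host, port, and suite name are whatever your setup uses, and options vary by FitNesse version):

    java -cp fitnesse.jar fitnesse.runner.TestRunner localhost 8080 RegressionSuite

As I understand it, the runner’s exit status reflects test failures, which is what lets the build go red.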

Other Agile testing tools

  • Open source Web UI testing tools
    • WATiR
    • Selenium
    • Canoo Web Test
  • Go the “last mile” to verify things fit together
  • Tests written and maintained incrementally
  • Tend to be more brittle
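
To make the “last mile” concrete, here is a minimal Selenium RC sketch in Java. It assumes a Selenium server running on localhost:4444 and an application to test; the URL, locators, and expected text are all invented:

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;

    public class LoginSmokeTest {
        public static void main(String[] args) {
            // Drive a real browser through the running application.
            Selenium selenium = new DefaultSelenium(
                    "localhost", 4444, "*firefox", "http://localhost:8080/");
            selenium.start();
            selenium.open("/login");
            selenium.type("username", "fred");      // hypothetical locators
            selenium.type("password", "secret");
            selenium.click("submit");
            selenium.waitForPageToLoad("30000");
            if (!selenium.isTextPresent("Welcome")) {
                throw new AssertionError("Login flow failed: no welcome text");
            }
            selenium.stop();
        }
    }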
