TechEd 2008 notes: “Design Patterns” in Dynamic Languages

These notes are probably fairly coherent. They’re definitely worth the read if you’re interested in design patterns and know something about Ruby, although I don’t know whether my notes would stand up very well if you don’t already know Ruby.

“Design Patterns” in Dynamic Languages
Neal Ford
Software Architect / Meme Wrangler
ThoughtWorks

Patterns that require elaborate solutions in weaker languages have more elegant solutions in dynamic languages

Ruby on Rails is running on IronRuby (as of RailsConf last week)

“Design Patterns: Making C++ Suck Less” by GoF

  • Really good for two things
    • Good definition for common kinds of problems we encounter
    • Insomnia cure
  • Examples in C++ and Smalltalk
  • People mistakenly thought it was a recipe book

Patterns define common problems. Dynamic languages give you better tools to solve those problems.

Not “static vs. dynamic”. It’s “ceremony vs. essence”. Statically-typed languages like C# require a lot of ceremony to get things done; long distance between intent and result. Dynamic languages have less distance.

ITERATOR

Side note: Neal isn’t a big fan of UML, because it’s not technical enough for technical people, and too technical for non-technical people. This is the only talk where you’ll ever see him use UML.

Provide a way to access the elements of an aggregate sequentially without exposing its details.

container.each {|i| puts i}

Built-in iterator: each. Can also make an iterator object. C# has very similar things: IEnumerable/IEnumerator and foreach
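A quick sketch of both styles. The internal iterator is the talk's `each`; the external iterator object uses the fact that, in Ruby 1.9 and later, calling `each` with no block returns an Enumerator, which behaves much like C#'s IEnumerator:

```ruby
container = [1, 2, 3]

# Internal iterator: the collection drives the loop.
container.each { |i| puts i }

# External iterator: an Enumerator hands out elements on demand,
# much like IEnumerator.MoveNext/Current in C#.
it = container.each   # no block given, so each returns an Enumerator
it.next               # => 1
it.next               # => 2
```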

COMMAND

Encapsulates a request as an object.

count = 0
commands = []
(1..10).each do |i|
  commands << proc { count += i }
end

Any language with closures already has Command built-in. There's a trend here: languages are adding built-in patterns. C# has this too.

But: Command pattern may also support undoable operations.

class Command
  def initialize(do_proc, undo_proc)

Dynamic languages don't preclude structure, but they let you delay it until necessary.
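A sketch of where that truncated Command class was presumably headed: a pair of closures, one to do and one to undo. The method names (`execute`/`unexecute`) are my guesses, not the talk's code:

```ruby
# Hedged sketch: an undoable Command holding a pair of closures.
class Command
  def initialize(do_proc, undo_proc)
    @do_proc, @undo_proc = do_proc, undo_proc
  end

  def execute
    @do_proc.call
  end

  def unexecute
    @undo_proc.call
  end
end

count = 0
add_one = Command.new(proc { count += 1 }, proc { count -= 1 })
add_one.execute    # count is now 1
add_one.unexecute  # count is back to 0
```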

See blog post / rant: "Execution in the Kingdom of Nouns". Command was essentially designed to give you a way to pass verbs around.

BUILDER

Separate construction process so the same process can create different representations.

Dynamicize with ad hoc combinations: method_missing. So instead of calling several methods in a row to set up different features, you can do things like parsing the method name: add_cd_and_dvd_and_turbo ("And" is a "bubble word" in DSL terms: it's there for readability only.)

Used by Rails for find methods: find_by_column1_and_column2_...
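A minimal sketch of the method_missing trick: parse the feature names out of the method name, treating "and" as the bubble word. The `CarBuilder` class and the feature names are illustrative, not the talk's actual code:

```ruby
# Hedged sketch of an ad hoc builder via method_missing.
class CarBuilder
  attr_reader :features

  def initialize
    @features = []
  end

  def method_missing(name, *args)
    if name.to_s =~ /^add_(.*)$/
      # split "cd_and_dvd_and_turbo" on the bubble word "and"
      @features += $1.split("_and_")
    else
      super
    end
  end
end

car = CarBuilder.new
car.add_cd_and_dvd_and_turbo
car.features  # => ["cd", "dvd", "turbo"]
```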

INTERPRETER

Given a language, define a grammar and interpreter. (Tough to use as a recipe!)

Admission that the language you're writing your app in isn't expressive enough to get the job done.

Greenspun's Tenth Rule:

Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

GoF book assumes you're writing a parser, but an increasingly common solution is an internal DSL (e.g. LINQ).

recipe = Recipe.new "Spicy bread"
recipe.add 200.grams.of "Flour"
recipe.add 1.lb.of "Nutmeg"

Need to be able to add methods to numbers. No problem. Open class. More flexible than extension methods in C#, because you can change or remove methods.

class Numeric
  def gram
    self
  end
  alias_method :grams, :gram
  def pound
    self * 453.59237
  end
  alias_method :pounds, :pound
  alias_method :lb, :pound
  alias_method :lbs, :pound
  def of ingredient
    if ingredient.kind_of? String
      ingredient = Ingredient.new(ingredient)
    end
    ingredient.quantity = self
    ingredient
  end
end

Side note: I might have done of by making a to_ingredient method on both String and Ingredient, and calling that. I wonder whether that would actually be better, though it would avoid the kind_of? call.

Semi-external DSL:

ingredient "flour" has Protein=11.5, Lipid=1.45, Sugars=1.12, Calcium=20, Sodium=0

Do a couple of substitutions to change that into a legal line of Ruby code, and then do instance_eval.
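A sketch of that substitution-plus-instance_eval trick. The context class and the specific rewrite rules here are my assumptions about how it might work, not the talk's code:

```ruby
# Hedged sketch: rewrite a semi-external DSL line into legal Ruby,
# then instance_eval it against a context object.
class RecipeContext
  attr_reader :ingredients

  def initialize
    @ingredients = {}
  end

  def ingredient(name, properties)
    @ingredients[name] = properties
  end
end

line = 'ingredient "flour" has Protein=11.5, Sodium=0'

# Rewrite 'has a=b, c=d' into a hash argument: ', :a => b, :c => d'
code = line.sub(/has\s+/, ", ").gsub(/(\w+)=([\d.]+)/) { ":#{$1} => #{$2}" }

ctx = RecipeContext.new
ctx.instance_eval(code)
ctx.ingredients["flour"]  # => {:Protein => 11.5, :Sodium => 0}
```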

Internal DSL == embedded interpreter. Meets the intent of the GoF interpreter pattern better than lex/yacc, especially since it's less prohibitive. Ruby on Rails is essentially a collection of internal DSLs.

FACTORY

Define interface for creating an object; let subclasses decide which class to instantiate.

Trivial in Ruby:

def create_from_factory(factory)
  factory.new
end
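Usage is just as trivial: the factory argument can be any class, or anything else that responds to new, because duck typing does the work the Factory Method interface would do in a static language.

```ruby
# Any object responding to new works as the factory.
def create_from_factory(factory)
  factory.new
end

create_from_factory(Array)   # => []
create_from_factory(String)  # => ""
```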

DECORATOR

Attach additional responsibilities dynamically. Flexible alternative to subclassing.

module Decorator
  def initialize(decorated)
    @decorated = decorated
  end
  def method_missing(method, *args, &block)
    @decorated.send(method, *args, &block)
  end
end

Whip.new(Coffee.new).cost

Or, make the decorations modules:

module Whipped
  def cost
    super + 0.2
  end
end

x = Coffee.new
x.extend Sprinkles
x.extend Whipped

Or make it even nicer:

Coffee.with Sprinkles, Whip
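One way Coffee.with might be implemented: a class method that creates a fresh instance and extends it with each decoration module in turn. The class and module names follow the talk; the prices (in cents, to sidestep floating-point noise) are mine:

```ruby
# Hedged sketch of a "with" class method for module-based decorators.
class Coffee
  def cost
    100  # price in cents
  end

  def self.with(*decorations)
    decorations.inject(new) { |coffee, mod| coffee.extend(mod) }
  end
end

module Sprinkles
  def cost
    super + 10
  end
end

module Whipped
  def cost
    super + 20
  end
end

Coffee.with(Sprinkles, Whipped).cost  # => 130
```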

RECORDER

class Recorder
  def initialize
    @messages = []
  end
  def method_missing(method, *args, &block)
    @messages << [method, args, block]
  end
  def play_back_to(obj)
    @messages.each do |method, args, block|
      obj.send(method, *args, &block)
    end
  end
end
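A usage sketch (repeating the Recorder definition so the example stands alone): record messages against a stand-in now, replay them against a real object later.

```ruby
# Recorder captures any message sent to it, then replays the lot.
class Recorder
  def initialize
    @messages = []
  end

  def method_missing(method, *args, &block)
    @messages << [method, args, block]
  end

  def play_back_to(obj)
    @messages.each { |method, args, block| obj.send(method, *args, &block) }
  end
end

rec = Recorder.new
rec.push 1
rec.push 2
rec.reverse!

list = []
rec.play_back_to(list)
list  # => [2, 1]
```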

Interesting nuance: what about methods that are already defined on Recorder, e.g. to_s? Anticipated in Ruby 1.8: there's a class called BlankSlate.

ADAPTER

Convert one interface to another. Round hole / square peg.

class SquarePegAdapter
  def initialize(peg)
    @peg = peg
  end
  def radius
    Math.sqrt(((@peg.width / 2) ** 2) * 2)
  end
end

That's the traditional adapter: a wrapper. But in Ruby, you could solve it a different way:

class SquarePeg
  def radius
    Math.sqrt(((width / 2) ** 2) * 2)
  end
end

Can also add methods to a single instance instead of the entire class.
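The per-instance version might look like this: define radius on just one peg's singleton, leaving the class and every other instance untouched. SquarePeg's shape here is my assumption:

```ruby
class SquarePeg
  attr_reader :width
  def initialize(width)
    @width = width
  end
end

peg = SquarePeg.new(10)

# Singleton method: only this one peg gets a radius.
def peg.radius
  Math.sqrt(((width / 2) ** 2) * 2)
end

peg.radius                              # about 7.07 for a width-10 peg
SquarePeg.new(10).respond_to?(:radius)  # => false
```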

What if SquarePeg already had a radius method?

class SquarePeg
  include InterfaceSwitching
  def width
    @width
  end
  def_interface :normal, :width

  def width
    @width / 3
  end
  def_interface :holes, :width

  def initialize(width)
    set_interface :normal
    @width = width
  end
end

Then you can use a with_interface method to adapt the class to a particular interface, yield self, then put itself back. Short-lived adapter.

DYNAMIC LANGUAGE PATTERNS

ARIDIFIER

Pragmatic Programmer: "Don't Repeat Yourself."

Ceremonious languages create floods. They create repetition you don't even see because you're so used to it. Essence languages allow aridification.

class Grade
  class << self
    def for_score_of(grade)
      case grade
        when 90..100: 'A'
        when 80..90 : 'B'
        when 70..80 : 'C'
        when 60..70 : 'D'
        when Integer: 'F'
        when /[A-D]/, /[F]/ : grade
        else raise "Not a grade: #{grade}"
      end
    end
  end
end

def test_numerical_grades
  for g in 90..100
    assert_equal "A", Grade.for_score_of(g)
  end
  for g in 80...90
    assert_equal "B", Grade.for_score_of(g)
  end
end

Lots of repetition. Better:

TestGrades.class_eval do
  grade_range = {
    'A' => 90..100,
    ...

  grade_range.each do |k, v|
    method_name = ("test_" + k + "_letter_grade").to_sym
    define_method method_name do
      ...

No duplication. You write the loop once, inside that call to define_method.
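Filling in the elided slide with a self-contained sketch (no test framework, and Grade simplified): a grade table, and define_method stamping out one generated test method per letter.

```ruby
# Simplified stand-in for the talk's Grade class.
class Grade
  def self.for_score_of(score)
    case score
      when 90..100 then 'A'
      when 80...90 then 'B'
      when 70...80 then 'C'
      when 60...70 then 'D'
      else 'F'
    end
  end
end

class GradeTests
  GRADE_RANGES = { 'A' => 90..100, 'B' => 80...90,
                   'C' => 70...80, 'D' => 60...70 }

  GRADE_RANGES.each do |letter, range|
    # Generates test_A_letter_grade, test_B_letter_grade, etc.
    define_method("test_#{letter}_letter_grade") do
      range.all? { |score| Grade.for_score_of(score) == letter }
    end
  end
end

t = GradeTests.new
t.test_A_letter_grade  # => true
t.test_D_letter_grade  # => true
```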

STATE

Door with states: Open and Closed. Model the states as mixins, and extend them. Problem: old methods don't go away.

Solution: Mixology Ruby gem that adds unmix and mixin methods.
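A sketch of the problem Mixology solves, in plain Ruby: extend piles the state modules onto the object, and once a newer module shadows an older one, re-extending the older one is a no-op, so the stale method keeps winning.

```ruby
module Open
  def status; "open"; end
end

module Closed
  def status; "closed"; end
end

door = Object.new
door.extend Open
door.status       # => "open"
door.extend Closed
door.status       # => "closed"
door.extend Open  # no effect: Open is already in the ancestry, below Closed
door.status       # still "closed" -- the old method doesn't go away
```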

Dynamic languages use language facilities to create simpler solution. GoF solutions are all structural; that's why they all have UML diagrams: the way to solve problems is to add structure. You can still do that in dynamic languages, but often don't need to.

Understand patterns for what they are: descriptions of common problems. Don't get caught up in implementation details. Implement solutions that are more elegant and take advantage of your tools.

"Simplexity": taking complicated code and moving it somewhere else; that may add complexity in one place, but it simplifies the rest of the code. This works best in languages with strong metaprogramming.

Q&A

Which of these patterns are implementable in IronRuby? -- They're running IronRuby against the Ruby language specs. Virtually all of these should run now.

Looks like cleverness for cleverness' sake. -- Compare to AOP or LINQ. Not everyone on your team has to understand how it works internally. But don't go hog-wild with this stuff. Metaprogramming was 3.5 - 5% of his team's Ruby code. Use metaprogramming surgically, not as a sledgehammer.

Some programmers don't understand e.g. C++ templates. Is metaprogramming the same kind of problem? -- No, because in C++, templates are in your face all the time; the complexity is hard to hide. In Ruby, it just looks like you're adding keywords to the language.

Why not just write your own language on the DLR? -- Some people probably will, but writing a good, expressive language is really hard, way harder than making a good framework.

At a company that doesn't use any dynamic languages, how to decide which one to use? -- Start using one to do something you already need to do; you'll get a flavor for how it feels and how its concepts map to your view of the world. Start with infrastructure stuff. Start using rake, or writing tests with IronRuby, or do scripting.

Aren't these languages dangerous because you don't have static typing? -- Yes. But: a compiler is a good verifier, but only for a very small surface area. Unit testing, with dynamic languages, is not optional. (Arguably, it's not optional with static languages either.)

Can you create unit tests that cover all possible scenarios? -- Sure. They go for 100% code coverage.

Does he know whether RSpec works in IronRuby? -- Doesn't know, but if Rails works, RSpec likely does.

TechEd 2008 notes: Bill Gates’ keynote

Bill Gates keynote

This is Bill’s last public appearance as a full-time Microsoft employee.

Video: Bill Gates’ Last Full Day

Technology Megatrends

  • Hardware performance: Moore’s Law, multi-core
  • Ubiquitous broadband, RPC
  • Unlimited storage
  • Mobility & new devices
  • Natural user interface: touch screen, speech recognition
  • High fidelity displays

Opportunities for Developers

  • Presentation
  • Business Logic
  • Data
  • Services

PRESENTATION

Silverlight demo (S. Somasegar)

  • Crossfader: social networking media sharing site
  • Silverlight is a true subset of WPF, so Silverlight controls can be reused in WPF
  • Silverlight 2 Beta 2 should be delivered by the end of this week, with a Go-Live license

BUSINESS LOGIC

Modeling demo (Brian Harry)

  • View > Architecture Explorer: class diagram (design as it actually is)
  • Architecture Layer diagram (design as it was meant to be: diagram of layers; can be validated against actual code by right-clicking in designer; can be a check-in policy)
  • Upcoming release of VSTS for Database Professionals: can maintain DB2 databases from Visual Studio
  • Database refactorings: rename field in database, renames in code (available today)
  • Codename “Oslo”: modeling tools, modeling language, model repository

DATA

SQL Server 2008 demo (Dave Campbell)

  • Spatial data
  • File integration: VARBINARY has an attribute saying “store it as a file stream”; still gets backed up
  • Full timezone support: can store timestamps along with the timezone
  • Available in the next month or two

SERVICES

FUTURE OF APPLICATION DEVELOPMENT

  • Design
    • Analysis & design tools
    • Next generation declarative languages
  • Execution
    • Process aware engines/platforms
    • Self-aware system management

Microsoft Robotics demo (Tandy Trower)

  • Robotics are expensive and require specialty maintainers
  • Just emerging at the personal level — about where PCs were in the early 1980s
  • Microsoft Robotics Developer Studio: development for robots, including simulation
  • Self-balancing robot with dextrous arms and a picture of Steve Ballmer’s face
  • RoboChamps competition: program simulated robots, win real robots (www.robochamps.com)

TechEd 2008 notes: Visual Studio Team System seminar

Today was the Team System preconference seminar.

Brief summary: Team System might be a good solution for continuous builds, especially if you don’t do pair programming. But if you’ve already got Subversion and CruiseControl, or if you do promiscuous pairing, you would lose some features by moving to Team System. Up to you to decide whether that’s a good trade.

Here’s a dump of the notes I took during the session, haphazard, disorganized, and undigested. There were a lot of questions and side notes along the way, which I jotted down as they came up, so there’s not a whole lot of structure here. I also didn’t bother taking notes on the stuff I’m not interested in, like Web application testing (which was a good chunk of the afternoon).

I’ll do more blog posts later to add some sensible commentary and more of my own thoughts, including some elaboration on the brief summary above. If you don’t want to wade through pages of bullet points, or if you want some explanation on how I reached my conclusions, you might want to wait for those later posts.

Improve Software Quality with Visual Studio Team System (undigested notes)

Steven Borg
Jeff Levinson
Shad Timm
Northwest Cadence
www.nwcadence.com
blog.nwcadence.com

Process (SDLC)

  • Release: deploy
  • Analysis: gather requirements
  • Architecture: design
  • Construction: build
  • Testing: verify quality
  • Repeat

WHAT CAN GO WRONG

Analysis

  • Poor requirements
    • No mind-reading tools available… yet
  • Lack of traceability
    • What pieces of code relate to a given requirement
    • “Oh yeah, that’ll just take me a couple hours to add”
  • How do you know when a requirement is finished?
    • Not knowing allows for stealthy and subtle scope creep

Construction

  • Requirements are unknown
  • Requirements change w/o devs knowing
  • Poor code quality
  • Poor code maintainability
    • Dev A: “It’s so complex I don’t recognize the code.”
    • Dev B: “But you wrote it, didn’t you?”
  • Code changes not linked to specific requirements or bugs
  • Builds break all the time
  • “It works on my machine!”
  • Devs constantly have to provide status to Project Managers
    • How do you tell how a project is going?

Testing

  • Integration quality (e.g. build breaks)
  • Knowing if the requirement has been met
  • Communication problems between testers and devs
    • Test cases and bugs “thrown over the wall”
    • Testers can’t adequately document test failures for devs
    • Devs can’t find the code that broke (see “it works on my machine”)

Release

  • Not knowing what code is in the release
    • Happens more often than it should
    • Will not pass a SOX audit…
  • Regression bugs
    • Dev: “I fixed Bug #265,357 yesterday”
    • User: “Bug #265,358 is the same as #265,357”
  • Inability to handle hot fixes or maintenance releases in the source code branch structure

DEMO

Sample site: “Kigg” (digg clone)
Built on ASP.NET MVC framework
Many demo demons… all on purpose

Side note: right-click solution, “Check in”
Can associate checkin with a work item
Conflict resolution looks a lot more complicated than with SVN, and takes a lot of time and a lot of dialog boxes

Side note: TFS 2008 has a feature to automatically get latest version of a file when you start typing… but just that one file. How stupid is that? (It’s optional, and off by default. The feature is called “Get Latest on checkout”. Don’t turn it on.)

TFS, like VSS, operates on a check-out system (pessimistic locking). TFS 2005, when you check out, just makes the file writable (may be a historical version).

Always do a Get Latest (on the entire solution) before you start editing.

WHAT IS QUALITY?

How do you measure it?

  • Lack of bugs
  • Determined by the customer
    • Business value
  • How well does it meet the requirements?
  • Requirements document determines quality
    • But… “That’s not what I meant”
  • Relying on requirements is “kind of a deathtrap”
    • Note: session on Thursday on change management will go into this more

Side note: sounds like they may have something to say about bugfixes in a branch, and how to make sure they get merged back into the main line?

THE BIG DEMO

What do you need?

  • Tests
  • Automated build
  • Functional tests

Demo: Starting with code you inherited… how do you do all this?

IMPLEMENTING AN AUTOMATED BUILD

.vsmdi file — list of tests
Add new test list
Types:

  • Unit tests (can be split into customer vs. developer)
  • Functional / Web tests
  • Load tests
  • Manual tests
  • VSTS has no built-in knowledge of these types. It’s just suggested to make different lists.

Create a build
Walk through a wizard
Select one or more solutions… select debug, release, or both
Select which test lists you want to run
Retention policy… how many do you want to keep? How many failed, how many succeeded, etc. Automatic cleanup.
Select a “stage” directory: there’s a build server, a TFS server, and a user. You tell TFS, “Give me a build”, and it delegates that to the build machine, which has to be told where you want to drop it. That’s the “Builds will be staged to the following share” option.
Select what triggers the build. “None” / “Build each check-in” (queued) / “Accumulate until prior build finishes (fewer builds)” / “Every week on the following days”
Can have as many build servers as you like
Default implementation: one build script will run on one build machine; if you want one script to run across all, you need a third-party tool
We would do our daily / fast / medium / slow builds much the same way as today, with multiple build machines

Can manually queue a build

  • Can specify MSBuild command-line parameters
  • Can select a “build agent” (which build machine you want it to run on)
  • After you queue it, you can monitor its progress
  • Your build will seldom work its first time. Wrong security settings, wrong workspace settings, problems with references, etc. You’ll have to spend some time looking at the log file.
    • Build log can be thousands of lines. There are some hyperlinks that will take you to just a small piece of the log, but it may not always give you all the context you need.
    • References are going to be a frequent problem. Build server wipes out the workspace and rebuilds. References to DLLs that you installed somewhere (e.g. MVC framework) are stored with relative paths that probably won’t work on the build PC, so add those DLLs to source control.
    • Need VS installed on build server if you’re going to run unit tests.
  • TFS has something called “workspace mappings”, which sounds like it’s kind of like an SVN checkout but not really. They got a build error because, as I gather, they tried to make their automated build use a workspace that was for the wrong user, or something.
  • Best practice: Start with an automated build. And start small!
  • Best practice: Put everything in version control. NUnit, MVC framework, etc. Not only does this help with the build, it helps when you need to set up a new development PC.
    • What about dependencies on other projects that other devs are working on? – ideally everything goes into a solution. Solution file doesn’t have to be in version control. Can mix and match branches, etc. Also want a master solution in revision control, which is used for building. Every developer should have the same workspace structure (relative paths).

Build Automation is not F5.
Is…

  • Collecting
  • Assembling
  • Validating
  • Auditing

Helps you avoid integration errors
Team Build 2008

  • VSTS 2008 Team Foundation Server Build
  • Core feature of TFS – not upsell
  • Build automation in Team System
  • Provides “F5” experience

Key features in Team Build 2005 (abbreviated)

  • Reports
  • Warehouse support for historical trends
  • Multiple build types

New in 2008 (abbreviated)

  • Continuous integration
  • Build queuing
  • Scheduled builds
  • Build initiated from TFS
  • Prepare build agent and create build number
  • Sync sources (blow everything away and check out again – by default, can customize e.g. for continuous build)
  • Compile and Analyze (can be anything you want – MSBuild, Perl script, whatever)
  • Tests
  • Update work items
  • Calculate code metrics
  • Build report
  • Copies to drop location
  • Publishes results to TFS
  • Notify subscribers

Team Build uses MSBuild
Build agent loads .proj file

Customizing the build

  • Core build defined in TfsBuild.proj
  • If you want to do anything other than compile, you’ll have to edit that file

Desktop builds

  • Can run the same build that will run against the server, but locally. F5 compiles and runs, but the build does extra stuff that I want to test.
  • If you don’t have Team Build Agent locally, you can just do “msbuild TfsBuild.proj”.
  • Doesn’t blow away your workspace.
  • Uses your changes; doesn’t get other changes from the server.
  • Can be a false sense of security (e.g. prerequisites).

Team Build has a managed API. You can queue builds, etc.

Side note: builds with VS 2005 only use one processor, but VS 2008 uses all your processors

Can set policy to not allow checkins on broken CI build. You can override it to fix the build.

Can double-click the failed build, scroll down, and see a list of commits and view the diffs.

“Annotate” = SVN Blame, though a little nicer. The revision numbers in the gutter are hyperlinks, and pop up the list of all the other changes that were made in that commit.

VSTS does have atomic commits. (That means it might actually be a usable option!)

Retention

  • Multiple folders: BuildName_CurrentDate_SequentialNumber
  • Want to keep a few failed builds so people can figure out why and when they failed
  • Probably don’t want to keep any stopped builds
  • “Partial builds” = built, but tests failed. Treat them like failed builds.
  • There’s an option to assign a “quality” to one of these builds, and then mark it as “retain”, where it’ll be kept forever.
  • Note: retention policy is not stored under version control. It’s stored in a database.
    • But you can use the API to create a build.

HOW TO DO UNIT TESTING WITH TFS

[TestMethod]
public void PaycheckTest() // Bi-monthly
{
    Worker target = new Worker("John Smith");
    target.Salary = 52000;
    double actual = target.GetPaycheck();
    double expected = 2000;
    Assert.AreEqual(actual, expected, "Wrong paycheck amount");
}
(oops – logic is wrong)

ExpectedException attribute
No Assert.Throws
Each test should only test one thing, so Assert.Throws isn’t too necessary (I would dispute this a bit)

TDD:

  • Never write a single line of code unless you have a failing automated test
  • Eliminate duplication

The test represents the requirement

  • If there is no requirement (test), then there’s no code to write

Duplicate code == bad design (DRY)

Side note: in MVC, model tests == programmer tests; controller tests bleed into customer tests

Purpose of programmer test is to define only what is needed. Code should be enough to make the test pass, no more, no less.

Test List

  • When starting a task, make a list of tests you think you’ll need.
    • Make sure it’s complete, but as simple as possible.
  • Interesting idea: Write the test in a way the customer can understand.

Order of execution

  • [AssemblyInitialize]
  • [ClassInitialize]
  • [TestInitialize]
  • [TestMethod]
  • [TestCleanup]
  • [ClassCleanup]
  • [AssemblyCleanup]

Right-BICEP

  • Are the results right?
  • Boundary conditions
  • Inverse relationships
  • Cross-check results using other means
  • Error conditions
  • Performance characteristics

CORRECT boundary

  • Conformance
  • Ordering
  • Range
  • Reference – does it reference anything external not under control of the code itself?
  • Existence
  • Cardinality
  • Time

TDD problems

  • Requires training
  • Writing the wrong test
  • Requires developer to take a functional requirement and put it into code
    • Business functionality = many requirements
    • May need to test a long chain of responsibility. Mock objects help, but they’re limited and you can still run into issues when you get to functional testing.
  • Unit test as much as possible. Can’t really test threading. UI testing is hard.
  • 100% code coverage isn’t good enough.

Side note: VSTS test framework lets you write tests for private methods. Generates reflection code for you.

Can see a list of all tests that are not in a test list yet. Automatically finds unit tests you add and shows them here. So then you decide which test list to add them to. You only get this if you have the Tester edition, though.

Side note: forthcoming version of VSTS has “aftershocks”. When you modify code, it’ll mark tests that are affected by that change.

Side note: they do, now, have a menu item to run all the tests in the solution. You can also right-click inside a method body and select “Run test”, or right-click outside the method to run all the tests in the class.

Can set an option to generate code-coverage info. You have to select which test run you want to see code coverage for; looks like a pain.

Right-click a method, select “Add unit test”. Generates a method name, e.g. “MyMethodTest”, “MyMethodTest1”, etc.

Can write a data-driven test that reads its data from e.g. an Excel spreadsheet. (FIT?)

Adding a new test does not add it to the test list. Eww.

Code coverage metrics will not be gathered if you target “Any CPU”. You have to set the build type to “x86” or “x64” or something specific. On the other hand, if you want to deploy a Web site, you have to set the build type to “Any CPU”.

If a build fails, you can set it up to automatically create a work item, and attach info about which test failed. If tests failed, it adds a file under the issue’s “Links” tab that you can double-click to see details about the test failures. Make sure to set it up to happen on partially failed builds, too.

Bad: By default, Team System allows you to commit without a comment. You need to install a PowerToy to change this. Even if you do change it, anyone can override (though you can hook up an alert e-mail to the whole team if they do override).

Build notifications: red spinning light, vacuum cleaner (to tell you your code sucks).

Side note: Power tools give you an app to listen for build events. Pops up toast alerts for whatever kinds of notifications you want to see.

QA shouldn’t be finding programmers’ bugs; programmers should be doing that. QA should be making sure the app meets the requirements, and finding deep logic problems. QA, in general, shouldn’t be writing unit tests; they should be doing functional, acceptance, and regression testing. Regression testing might use some of the same tools as unit testing, but acceptance and functional testing probably won’t.

I asked whether MS’s testing framework lets you build hierarchical test suites like DUnit, and like is possible with recent versions of NUnit. The answer is no. You can do some stuff like that with test lists, but you can’t do a test-list setup and test-list teardown. (I assume you could get partway there with ordered test lists.)

Related Content (abbreviated)

  • Interactive Theater Sessions
    • DVP05-TLC: Q&A on VSTS Best Practices (Tuesday)
    • DVP09-TLC: Applying Feature Driven Development Techniques to VSTS (Wednesday)
    • DVP01-TLC: Testing using Mock Objects (Wednesday)
    • DVP03-TLC: VSTS Worst Practices (Thursday)
    • DVP07-TLC: Software Configuration Management – Branching Done Right (Thursday)
  • Breakout Sessions
    • ARC303: Designing for Testability (Wed)
    • DVP318: Agile Talk on Agility (Wed)
    • TLA324: How I Became a Team Build Muscleman (Wed)
    • DVP313: Understanding Branching and Merging with VSTS
    • DVP301: Making Your Test Lab Obsolete with Team Test and Virtualization (Thu)
    • DVP204: End to End Software Configuration and Change Management (Thu)

CODE METRICS / CODE ANALYSIS

Static Code Analysis: Analyzing code for patterns / bugs (FxCop)

  • Start small on things that are important to you — don’t check everything at once. Perhaps start with design rules.

Side note: Can set check-in policy that doesn’t let you check in if it violates the policy

  • Some only available with Power Tools, e.g. requiring check-in comment, forbidden patterns, etc.
  • Testing policy: must run test list before checking in
  • Work item: must associate with a work item
  • Work item query: must associate with a work item that’s part of a specified query
  • Custom path policy: only applies policies to a particular folder in the source control tree, e.g. a particular branch
  • Always have option to override policy failure and continue checkin. You’re prompted to type in a reason.

Dynamic Code Analysis: Analyze your code while it’s running to find performance bottlenecks

  • Sampling
    • Can’t do in VPC
    • Stops your code every given interval, figures out what method you’re in, then resumes
  • Instrumentation
    • Inserts probes in your code: one instruction before and one instruction after each line of code
  • Good to start off with sampling, because it doesn’t (significantly) affect your app’s performance. Especially on a running system.
  • Instrumentation gives you better monitoring.
  • For Web site performance testing, start with instrumentation instead. 95% of the time is actually IIS, so sampling likely won’t even stop while your app is running.

Hotpath (new feature in VSTS2008)

  • Automatically finds slowest path (the bottleneck) through your app
  • Denoted by a flame in the calltree view
  • Only for instrumentation

Using results to improve execution time

  • Look for methods taking the longest time (summary page)
  • Examine call stack to find which lines are taking the longest
    • Usu. loops
    • Try to remove boxing / unboxing
  • If memory issues, enable .NET Object Allocation
    • See which objects survive to Gen2 and determine if those can be released – GC is expensive
    • Must be enabled in Performance Session properties

Side note: can performance test your unit tests. Right-click unit tests and select “Create performance section”. Only with Developer Edition.

CODE METRICS

  • Class Coupling. Higher is bad.
  • Class Inheritance. How deep is your inheritance chain? Too high is bad.
  • Lines of Code.
  • Cyclomatic Complexity. How many paths are there through your code? Good indication of how many unit tests you need to write to get full coverage. Higher is bad.
  • Maintainability Index. Overall indication of complexity based on lines of code, cyclomatic complexity, and computational complexity. 0-100, where 60 is horrible and higher is good. Has to be extraordinarily low before MS’s hard-coded thresholds will turn yellow or red.

WORK ITEMS

  • Title
  • Assigned To
  • Description
  • “Scenarios” and “Tasks” – Add a scenario, then you can add a task that’s linked to it

Failing build creates a work item — mark it as resolved, and then when the tests pass, it’s noted that it really is fixed

DATABOUND TESTS

  • Can select a database or a CSV
  • Bind form post parameters to a column in the CSV
  • Specify a property to say “run once for each row in the CSV”
  • Can convert the Web test to code
    • Databinding becomes attributes and Context lookups, so you can do it for unit tests as well

ASIDE: VSTS AND PROMISCUOUS PAIRING
Commits use your Windows login; there’s no way to get prompted for your credentials at commit time. The command line does allow “commit as” another user, but your login has to be granted permissions to commit on that other person’s behalf, which is probably a superset of the permissions to commit yourself (not verified).

Subversion does this nicely, once you get everything set up just so, but VSTS’s automated builds don’t work with third-party revision-control systems.

Side note: when you check in and associate with a work item, it defaults to resolving that work item. You can change that (to just associate with the work item, without resolving it), but it seems like the wrong default to me.

Note: When you add a bug, you can specify which automated build it applies to. When a new build runs and the test passes, “Resolved in build” is automatically filled in.

Side note: Test .proj file is not automatically checked out when you edit. So when you go to save, it’s still read-only. You can choose to overwrite the read-only file, but then it won’t get checked in, because it’s not checked out. (HUGELY deficient compared to SVN.)

“Database Professionals” (part of Team Suite): Can create projects so you can put MSSQL schema in version control, and can create unit tests for stored procedures. Stored-procedure unit tests are code written in Transact-SQL, plus test-success conditions selected from a list in the GUI. Doesn’t quite work with VSTS yet, though (doesn’t store relative paths the same way as everything else).

Tools exist for importing history. Their recommendation is, if you don’t need history, then just import a snapshot into VSTS and keep your old system in case you need history; but if you do want history in VSTS (and you’re not moving from VSS), you’ll need to do some coding. Issues can just be imported as an Excel file.

BRANCHING

There is no “one size fits all” branching strategy; it depends on release frequency and dependencies. Pick a pattern that fits your company’s business needs.

  • Release by Manifest
    • Avoid if you can
  • Branch by Release
    • Good for maintaining multiple versions
  • Branch by Quality
    • Good for maintaining single versions
  • Branch by Feature
    • Good for long-running features and uncertain inclusion

Release by Manifest (aka Release by Label)

  • Time-consuming
  • Single release at a time
  • Doesn’t support SOX compliance
  • Typically requires some custom code
  • Label the code you want to release, then start working on release 2
  • Bug comes in on release 1… oops.
  • Check out code from release 1 on one machine, make the change, build on that machine, and release. Then manually merge it into release 2. Lots of hoping and wishing.

Branch by Release

  • Allows for easily maintaining multiple versions
  • Simple to roll fixes forward from earlier versions
  • Simple pattern — least amount of work to maintain
  • Can be used in combination with other patterns
  • Con: Depending on the application, may require separate testing environments
  • His sample slide shows branching the new release, rather than having a trunk and branching the old release. That’s because of a limitation in VSTS that only makes merges work well between related branches.
  • In VSTS, when you branch, you LOSE YOUR HISTORY. (Well, it’s still there, but the tools hide it by default.) Improved in next version. But that’s another reason not to do trunk with VSTS.
  • Trunk still might be good if you’re not maintaining multiple versions, e.g. if you’re really doing agile and you release every two weeks, and don’t maintain old versions. It depends on your release frequency.

Branch by Quality

  • Microsoft-recommended, but not a good pattern if you maintain more than one version
  • Prod -> QA -> Dev. Dev has many changes, QA and Prod only get changed through merges.
  • Merge down, copy up.
  • Test cycles (1)
    • Code has not been released to production
    • Bug is found in code under test
    • Development on next release has begun
    • Release 1 is done in Dev, so you label it and merge up to QA.
    • Start working on release 2 in Dev.
    • QA finds a bug. Can’t fix it in Dev branch. Can’t fix it in QA branch because QA is immutable.
    • Make a hotfix branch off of QA. Fix there. Merge back into QA, re-label, re-test. If test succeeds, merge into R2 branch in Dev.
    • Gets really complicated when you have multiple releases and a lot of bugfix branches. Need to really know what you’re doing with labels, merging, etc.
    • At some point, you have to deal with merge conflicts, which you can’t run your continuous build on. Resolve them on one PC and do all your testing there, and only commit and merge when you know everything works.
    • All QA and release builds come out of QA branch. You don’t build out of the production branch; it’s just there for safekeeping.

Side note: don’t use VSTS labels for auditing. They’re not immutable, and changes to the label aren’t tracked.

Branch by Feature

  • Isolate features for various reasons
    • Longer time to develop – won’t fit in an iteration
    • Independent of other features
  • Allows for release of features as ready
    • No dependency on when other features are ready
  • Always combined with another pattern for release strategy, e.g. Branch by Release or Branch by Quality
  • When you’re ready to merge into Dev, you first merge all the latest changes from Dev into the Feature1 branch, test, and then copy back into Dev
  • Test cycles are entirely independent of the feature branches
  • When a feature is complete, it’s merged back to dev and the branch is “closed”
  • What happens when you want to modify common code? Not insurmountable by any means, but it’s a question you need to answer.

Side note: viewing branch history is at the folder level. If you branch at a sub-folder, you won’t see that on the Branches tab for the parent folder. So be consistent about where you branch.

Determine Your Needs (suggested)

  • How many versions do you need to support?
    • 1 = Branch by Quality or Release by Manifest
    • 2+ = Branch by Release
  • Second, how long are your iterations? Or are you doing waterfall?
    • Waterfall = Branch by Quality (assumes only one release and maintenance)
    • Iterations = Branch by Release. Feature branches may be needed depending on length of release.
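The suggested checklist boils down to two inputs, so it can be encoded as a toy decision function (the function name and parameters are mine, not the speaker’s):

```python
def pick_branching_strategy(versions_supported: int, waterfall: bool) -> str:
    """Toy encoding of the 'Determine Your Needs' checklist above."""
    # First question: how many versions must you support at once?
    if versions_supported >= 2:
        return "Branch by Release"
    # Supporting a single version -- second question: waterfall or iterations?
    if waterfall:
        return "Branch by Quality"  # assumes one release plus maintenance
    return "Branch by Release"      # short iterations; add feature branches as needed

print(pick_branching_strategy(3, waterfall=False))  # Branch by Release
print(pick_branching_strategy(1, waterfall=True))   # Branch by Quality
```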

Final Thoughts

  • These are the main patterns. Tailor them as necessary.
  • Goal: Least amount of work to meet your needs
  • Remember the impact of schedule on your time frames.
    • Sometimes it’s better to freeze development.
    • Or use shelving (aka make a patch) at opportune times to reduce complexity.

Off to TechEd

I’m headed to Microsoft TechEd, and am planning to blog the sessions while I’m there.

(Note to DelphiFeeds readers: you’ll only see things I think would interest Delphi users. If you want to read the C# stuff too, you’ll need to come to my blog, or just follow my conference posts.)

It looks like there’ll be plenty to keep me busy. I’m going to a preconference seminar on Monday called “Improve Software Quality with Visual Studio Team System”, which is all about Microsoft’s revision-control, automated-test, and continuous-integration software. It’ll be interesting to see if they have anything to offer beyond what Subversion, NUnit, and CruiseControl (or CruiseControl.NET) have to offer; I’ll keep you posted.

The rest of the week, I’ll be going to whatever sessions look interesting. No special plan there, just learn things. And hang out at Universal Studios.

Any of you guys going to be there? Let’s get in touch!

[This blog post was prerecorded. Theoretically, my plane is supposed to be leaving the ground right when this post goes live.]