Tuesday, 11 November 2008

Day 2 - 13:30 : Sense and Testability session (ARC307)

Roy Osherove from Typemock presented this informative session about testability and design and how they relate to each other - specifically how we might design software to make it inherently testable. To be honest there was nothing particularly new about the concepts presented here, but it was nice to have these principles confirmed and reinforced.

What makes a unit testable system?
A unit-testable system is a system where, for each piece of coded logic, a unit test can be written easily enough to verify that it works as expected whilst keeping the PC/COF rules, which are:

Partial runs are always possible - you can run all tests or just one; tests are not dependent upon each other.

Configuration is not needed - or at least, isolate tests that need configuration from tests that don't so that it is clear.

Consistent pass/fail result. Ensure your test can be trusted - it produces the same result time and again until the code is changed.

Order does not matter - running the tests in different orders won't change the results.

Fast - unit tests should execute quickly so that they remain useful.

A problem posed
Given the following method, how could you write a unit test to check both a positive and negative outcome?

public bool IsRetired(int age)
{
    RulesEngine engine = new RulesEngine();
    if( age >= engine.RetirementAge )
        return true;

    return false;
}

Quite simply you can't. The tight coupling between the IsRetired method and the rules engine prevents us from testing a positive and negative outcome without having knowledge of the values that will be returned from the rules engine itself.

Interface based design decision
So, we change our design to support testability - using interface based design we can de-couple and replace the testable parts of the code as follows;

public interface IRulesEngine
{
    int RetirementAge { get; }
}

public class MyClass
{
    private IRulesEngine _engine;

    public IRulesEngine Engine
    {
        get
        {
            // Lazily create the production implementation unless a
            // test-specific engine has been injected.
            if( _engine == null )
                _engine = new RulesEngine();
            return _engine;
        }
        set
        {
            _engine = value;
        }
    }

    public bool IsRetired(int age)
    {
        // >= to match the original, tightly-coupled version.
        if( age >= Engine.RetirementAge )
            return true;

        return false;
    }
}

We can now have a test specific version of IRulesEngine and have this present predictable results to our class and therefore test both positive and negative responses. By having dependencies on interfaces rather than concrete classes, it becomes easy to de-couple the parts and replace functionality with test specific instances.
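As a sketch of how a test might exploit this (the stub class, values and Main are hypothetical, and MyClass is condensed to an auto-property rather than the lazy-initialising version above):

```csharp
using System;

// IRulesEngine and MyClass repeated in condensed form so the sketch
// stands alone.
public interface IRulesEngine
{
    int RetirementAge { get; }
}

public class MyClass
{
    public IRulesEngine Engine { get; set; }

    public bool IsRetired(int age)
    {
        return age >= Engine.RetirementAge;
    }
}

// The stub returns whatever retirement age the test dictates,
// giving us predictable positive and negative outcomes.
public class StubRulesEngine : IRulesEngine
{
    public int RetirementAge { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var subject = new MyClass { Engine = new StubRulesEngine { RetirementAge = 65 } };
        Console.WriteLine(subject.IsRetired(70)); // True
        Console.WriteLine(subject.IsRetired(60)); // False
    }
}
```

In a real suite the two calls would live in separate test methods under NUnit or MSTest, but the mechanism is the same: the stub makes the outcome predictable.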

This in itself leads to problems however. The over-simplistic example above is itself quite cumbersome, so imagine how things would be if the class depended on 5 interfaces and each of those depended on 5 interfaces and so on. Resolving these dependencies could become prohibitive very quickly!

For this reason we use dependency injection.

Given an application-centric registry of concrete object types keyed against interface types, we can easily implement a locator or factory class that, asked for an interface, quickly creates an instance of the registered concrete class. In turn, it examines the dependencies of that concrete class and resolves them from the same registry, allowing complex dependency graphs to be resolved just by asking the locator/factory for an instance of an interface.
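A minimal sketch of such a locator (entirely hypothetical - real containers do far more, such as life-cycle management and cycle detection):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public interface IRulesEngine { int RetirementAge { get; } }

public class RulesEngine : IRulesEngine
{
    public int RetirementAge { get { return 65; } }
}

// A class whose dependency the locator resolves for us.
public class PensionCalculator
{
    public IRulesEngine Engine { get; private set; }
    public PensionCalculator(IRulesEngine engine) { Engine = engine; }
}

public static class Locator
{
    private static readonly Dictionary<Type, Type> Registry = new Dictionary<Type, Type>();

    public static void Register<TInterface, TConcrete>() where TConcrete : TInterface
    {
        Registry[typeof(TInterface)] = typeof(TConcrete);
    }

    public static T Resolve<T>() { return (T)Resolve(typeof(T)); }

    private static object Resolve(Type type)
    {
        // Map interface to concrete type, then recursively resolve the
        // constructor's own dependencies from the same registry.
        Type concrete = Registry.ContainsKey(type) ? Registry[type] : type;
        ConstructorInfo ctor = concrete.GetConstructors()
            .OrderByDescending(c => c.GetParameters().Length)
            .First();
        object[] args = ctor.GetParameters()
            .Select(p => Resolve(p.ParameterType))
            .ToArray();
        return ctor.Invoke(args);
    }
}

public static class Program
{
    public static void Main()
    {
        Locator.Register<IRulesEngine, RulesEngine>();
        PensionCalculator calc = Locator.Resolve<PensionCalculator>();
        Console.WriteLine(calc.Engine.RetirementAge); // 65
    }
}
```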

This of course is classic dependency injection. He recommended the use of constructor dependency injection, where dependencies are passed as interface references on the constructor of an object, for non-optional dependencies, and using property setter injection where optional dependencies exist. Eg: I may or may not have a logger depending on my scenario.
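A hypothetical illustration of the two styles - the mandatory dependency arrives through the constructor, the optional logger through a property:

```csharp
using System;
using System.Collections.Generic;

public interface IRulesEngine { int RetirementAge { get; } }
public interface ILogger { void Log(string message); }

public class PensionService
{
    private readonly IRulesEngine _engine;

    // Constructor injection: the service cannot work without a rules engine.
    public PensionService(IRulesEngine engine)
    {
        if (engine == null) throw new ArgumentNullException("engine");
        _engine = engine;
    }

    // Property (setter) injection: logging is optional.
    public ILogger Logger { get; set; }

    public bool IsRetired(int age)
    {
        bool retired = age >= _engine.RetirementAge;
        if (Logger != null)
            Logger.Log("IsRetired(" + age + ") = " + retired);
        return retired;
    }
}

// Simple implementations used for the demonstration below.
public class FixedRulesEngine : IRulesEngine
{
    private readonly int _age;
    public FixedRulesEngine(int age) { _age = age; }
    public int RetirementAge { get { return _age; } }
}

public class ListLogger : ILogger
{
    public readonly List<string> Messages = new List<string>();
    public void Log(string message) { Messages.Add(message); }
}

public static class Program
{
    public static void Main()
    {
        var service = new PensionService(new FixedRulesEngine(65));
        Console.WriteLine(service.IsRetired(70)); // True - works with no logger at all

        service.Logger = new ListLogger();        // optionally attach a logger
        Console.WriteLine(service.IsRetired(60)); // False
    }
}
```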

Inversion of control (IoC) containers such as Spring.NET, Castle Windsor, StructureMap and the Microsoft Unity Application Block provide these dependency injection facilities and more, out of the box and in an easy to use form, along with support for defining objects as having different life cycles (singleton, for example).

To summarise, design decisions made to support testing so far in the presentation included the use of interfaces to be able to replace objects and the use of a locator/container to help resolve dependencies automatically.

Next, he spoke about testing static methods, properties and other things that haven't been implemented specifically to be testable. Eg, how do you test the paths in a method that uses DateTime.Now?

public bool IsRetired(DateTime dateOfBirth)
{
    // "60 years" made concrete: retired once the 60th birthday has passed.
    if( DateTime.Now >= dateOfBirth.AddYears(60) )
        return true;

    return false;
}

You could solve it with an interface, say IClock, and have everywhere that needs to know the time use the IoC container to get an instance of the clock. This would clutter the code in many places, though, with lots of constructor parameters in lots of classes that need the IClock.
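A sketch of that interface-based alternative (names hypothetical):

```csharp
using System;

// The abstraction every time-dependent class would depend upon...
public interface IClock
{
    DateTime Now { get; }
}

// ...the production implementation registered with the container...
public class RealClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

// ...and a test double that returns a fixed instant.
public class FakeClock : IClock
{
    private readonly DateTime _fixedTime;
    public FakeClock(DateTime fixedTime) { _fixedTime = fixedTime; }
    public DateTime Now { get { return _fixedTime; } }
}
```

Every consumer would then take an IClock on its constructor - which is exactly the clutter being objected to here.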

Instead, it would be easier to design the system to wrap the DateTime object with a new static class that returns either the value of DateTime.Now or, when set, the value of a nullable DateTime property within it.

// note: not thread safe....
public static class SystemClock
{
    private static DateTime? _forcedDateTime;

    public static void ForceDateTime( DateTime newDateTime )
    {
        _forcedDateTime = newDateTime;
    }

    // Allows a test to restore the real clock when it is done.
    public static void ResetDateTime()
    {
        _forcedDateTime = null;
    }

    public static DateTime Now
    {
        get
        {
            if( _forcedDateTime.HasValue ) return _forcedDateTime.Value;
            return DateTime.Now;
        }
    }
}

This approach just means you now have to enforce the policy to ensure developers always use SystemClock.Now instead of DateTime.Now.

This abstraction allows us to control fake results for the purposes of testing, but sacrifices encapsulation - normally you wouldn't have the ability to force the date/time in a design not specifically intent on providing testability.
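Putting the wrapper to work might look like this (SystemClock is repeated in condensed form so the sketch stands alone; the rule class and dates are hypothetical):

```csharp
using System;

// Condensed version of the wrapper above. note: not thread safe....
public static class SystemClock
{
    private static DateTime? _forcedDateTime;

    public static void ForceDateTime(DateTime newDateTime) { _forcedDateTime = newDateTime; }
    public static void ResetDateTime() { _forcedDateTime = null; }

    public static DateTime Now
    {
        get { return _forcedDateTime ?? DateTime.Now; }
    }
}

public static class RetirementRules
{
    public static bool IsRetired(DateTime dateOfBirth)
    {
        // Retired once the 60th birthday has passed.
        return SystemClock.Now >= dateOfBirth.AddYears(60);
    }
}

public static class Program
{
    public static void Main()
    {
        // Freeze "now" so both paths can be tested deterministically.
        SystemClock.ForceDateTime(new DateTime(2008, 11, 11));
        Console.WriteLine(RetirementRules.IsRetired(new DateTime(1940, 1, 1))); // True  (68 years old)
        Console.WriteLine(RetirementRules.IsRetired(new DateTime(1960, 1, 1))); // False (48 years old)
        SystemClock.ResetDateTime(); // restore the real clock
    }
}
```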

General design for testability principles
The following were some general principles that Roy advocated;

Avoid big design up front. This doesn't mean avoid all design up front - but don't be too prescriptive. Design the purpose of a component and list the tests that must be satisfied.

Use interface based designs.

Avoid singletons, let a container specify transient/singleton life cycles of objects it creates. Where singletons are needed, use a static wrapper that creates a singleton instance of another class, but allow the wrapped class to still be constructed.

Use IOC containers to resolve dependencies and specify life cycle.

Avoid GOD methods - huge do-it-all methods. These hinder maintenance and are almost impossible to test. Avoid them by design - keep to the single responsibility principle with calls to small methods.

Have methods virtual by default.

Don't use X = new X(); instead use Factory.MakeX() or Container.Resolve&lt;IX&gt;().

Ensure a single responsibility for classes and methods.

Overall, follow S.O.L.I.D. principles (For more information, see http://butunclebob.com/Articles.UncleBob.PrinciplesOfOod)

SRP - single responsibility principle

OCP - open closed principle - ability to extend without modifying

LSP - Liskov substitution principle - derived classes must be substitutable for their base classes

ISP - The interface segregation principle - make fine grained interfaces that are client specific

DIP - Dependency inversion principle - depend on abstractions not on concretions.

To close out, Roy finished with a catchy song.....about bad design. Obviously not that catchy though as I can't remember any of it.
