Dependency Injection (DI) & Registration Madness

I recently saw an application that registers many of its services with a DI container, as shown below.

container.RegisterType<ICustomerSearchService, CustomerSearchService>();
container.RegisterType(typeof(IWriteRepository<>), typeof(WriteRepository<>));
container.RegisterType(typeof(IReadRepositories), typeof(ReadRepositories));
container.RegisterType(typeof(ICustomerSearchRepository), typeof(CustomerSearchRepository));

These are just a few; there are hundreds of registrations, and the list continues to grow…


One of the developers took a new approach and split the registration across multiple classes. But there are still loads of type registrations.

This is the registration madness.

This is where Convention Based Auto Registration comes into the picture. Basically, the team agrees on a naming convention and uses that convention to auto-wire the registrations, as below.

public static void AutoWire(IUnityContainer container)
{
    var assemblies = AppDomain.CurrentDomain.GetAssemblies()
        .Where(s => s.FullName.StartsWith("YourAssemblyNameStartingWith"))
        .ToList();

    var types = from assembly in assemblies
                from exptype in assembly.ExportedTypes
                where exptype.Name.StartsWith("TypeNameStartWith")
                select exptype;

    container.RegisterTypes(types, t => t.GetInterfaces(), WithName.Default);
}

There are two conventions here – the assembly name and the names of the types that need to be registered.
Both must start with an agreed prefix. Once the convention is agreed and well defined, the container is almost invisible and, importantly, the registration code is much more maintainable.

The above example uses the Microsoft Unity container; however, similar techniques can be applied with other DI libraries.
Note that RegisterTypes is only available in Unity version 3 and higher.
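
For completeness, here is a minimal usage sketch (the bootstrap arrangement is an assumption for illustration): wire the container once at the composition root, then resolve services registered purely by convention.

var container = new UnityContainer();
AutoWire(container); // the AutoWire method shown above

// No explicit RegisterType call was needed for this service, provided its
// implementation follows the agreed naming convention.
var searchService = container.Resolve<ICustomerSearchService>();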

Let's look at the practicality of the convention-based approach.
While it provides a great way to reduce configuration verbosity, it might not fit all purposes. There may be situations where we still need a non-convention-based approach, such as programmatic registrations or even configuration-based registrations (in rare cases, to support late binding). There is no rule that says we can only use one approach. I think it is best to employ convention-based registration first, then programmatic registration if required.

What if someone forgets to use the naming convention?
It would result in a dependency resolution exception. Let's assume there are no code reviews, and someone on the team who had no idea about the convention-based registration included a non-convention-based/explicit registration as below:

container.RegisterType<TFrom, TTo>();

Again, this is just the start of the same problem we saw before and wanted to avoid in the first place.

So how can we avoid it?
I thought it would be nice to create a custom extension method on the container to verify that the conventions have been followed.

container.VerifyRegistrationConventions();

This extension method looks at the types that are already registered and checks each registered type against the convention we expect.

public static class UnityExt
{
    public static void VerifyRegistrationConventions(this IUnityContainer container)
    {
        // Find registered types that do not follow the agreed naming convention
        // (the prefix "App" here), ignoring the container's own registration.
        var registrations = container.Registrations
            .Where(x => !x.MappedToType.Name.StartsWith("App") &&
                        !x.MappedToType.Name.StartsWith("IUnityContainer"))
            .ToList();

        if (registrations.Any())
        {
            var builder = new StringBuilder();
            builder.AppendLine("Registrations Convention Mismatch: ");
            foreach (var registration in registrations)
            {
                builder.AppendLine(registration.MappedToType.Name);
            }
            builder.AppendLine("Registrations with the container expect type names to start with 'App'.");

            throw new ContainerConventionViolationException(builder.ToString());
        }
    }
}

This extension assumes that all registrations are meant to follow the convention-based registration.

The thrown exception message will contain the types that do not adhere to the specified naming convention.

Where do we invoke this extension?
There are two possible places:
a. The last possible place where your application uses the DI container for object composition. In other words, this extension can be invoked just after the last execution routine within the composition root.
b. The first possible place just after execution has exited the composition root. This place may not be as obvious as 'a', therefore I prefer 'a' over 'b'.

In both cases, the developer has had every opportunity to construct the object graph, and the above extension method will verify the conventions.

If we make the DI container accessible outside the composition root, then we cannot guarantee that all types are composed within the composition root. In that case we cannot use this extension reliably and, more importantly, we are not using the DI container correctly.

.NET Design Patterns – A Fresh Look

Note: This post is also available in .NET Curry Magazine 16th Issue and Site.

.NET Design Patterns…. yes, there are several books and resources written on this topic. When it comes to software development, Design Patterns promote consistency across the code base and allow us to develop more maintainable software.

There are many Design Patterns in Software Development. Some of these patterns are very popular. It is almost true to say that most patterns can be embraced irrespective of the programming language we choose.


Based on the type of application, we may use one or more Design Patterns, and sometimes we may mix and match them. It is very important to remember that we primarily want to use these patterns as effective communication tools. This may not be obvious at this stage; however, during this article we will look at how Design Patterns are utilised in various applications.


In this article we will not just focus on a set of Design Patterns. We will take a fresh view of some of the existing Design Patterns and see how we can go about using them with real-world dilemmas and concerns.

Design Patterns – A Bit of Background
It is a fact that some developers hate Design Patterns. This is mainly because analysing, agreeing on, and implementing a particular Design Pattern can be a headache. I'm pretty sure we have all come across situations where developers spend countless hours, if not days, discussing the type of pattern to use, not to mention the best approach and the way to go about implementing it. This is a very poor way to develop software.

This dilemma is often caused by developers thinking their code must fit into a set of Design Patterns, which is an incredibly hard thing to do. But if we think of Design Patterns as a set of tools that allow us to make good decisions, and as effective communication tools, then we are approaching them the right way.

This article mainly focuses on .NET Design Patterns, using C# as the programming language. Some patterns can be applied to non-.NET based programming languages as well; repeating what I said earlier, most patterns can be embraced irrespective of the programming language we choose.

Abstract Factory Pattern
Wikipedia definition: “The abstract factory pattern provides a way to encapsulate a group of individual factories that have a common theme without specifying their concrete classes.”

While this definition is true, the real usage of this pattern varies based on the real-life concerns and problems at hand. In its simplest form, we create related objects without having to specify their concrete implementations. Please refer to the following example:

public class Book
{
    public string Title { get; set; }
    public int Pages { get; set; }

    public override string ToString()
    {
        return string.Format("Book {0} - {1}", Title, Pages);
    }
}

public class Program
{
    public static void Main(string[] args)
    {
        Console.WriteLine(CreateInstance("ConsoleApplication1.Book", new Dictionary<string, object>()
        {
            {"Title", "Gulliver's Travels"},
            {"Pages", 10},
        }));
    }

    private static object CreateInstance(string className, Dictionary<string, object> values)
    {
        Type type = Type.GetType(className);
        object instance = Activator.CreateInstance(type);

        foreach (var entry in values)
        {
            type.GetProperty(entry.Key).SetValue(instance, entry.Value, null);
        }

        return instance;
    }
}

As per the above example, the creation of instances is delegated to a routine called CreateInstance().

This routine takes a class name and the property values as arguments.

At first glance, this seems like a lot of code just to create an instance and set some property values. But the approach becomes very powerful when we want to create instances dynamically based on parameters – for example, creating instances at runtime based on user input. This is also very central to Dependency Injection (DI). The above example just demos the fundamentals of this pattern; the best way to appreciate the Abstract Factory pattern is to look at some real-world examples. It would be redundant to reproduce something already out there, so if you are interested, please see this Stack Overflow question, which has some great information.

Additional note: Activator.CreateInstance is not central to the Abstract Factory pattern; it just allows us to create instances in a convenient way based on a type parameter. In some cases we would just create instances by newing them up (i.e. new Book()) and still use the Abstract Factory pattern. It all depends on the use case.
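
For contrast, here is a minimal sketch of the classic shape of the pattern, reusing the Book class above; the factory names are mine, not from the article. A group of related factories sits behind a common interface, and the client never names concrete classes.

// A hypothetical family of factories with a common theme.
public interface IDocumentFactory
{
    Book CreateBook();
}

public class NovelFactory : IDocumentFactory
{
    public Book CreateBook()
    {
        return new Book { Title = "Untitled Novel", Pages = 300 };
    }
}

public class PamphletFactory : IDocumentFactory
{
    public Book CreateBook()
    {
        return new Book { Title = "Untitled Pamphlet", Pages = 12 };
    }
}

// The client depends only on IDocumentFactory:
// Book book = factory.CreateBook();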

Cascade Pattern
I’m sure we often see code patterns like the following:

public class MailManager
{
    public void To(string address) {  Console.WriteLine("To");}
    public void From(string address) { Console.WriteLine("From"); }
    public void Subject(string subject) { Console.WriteLine("Subject"); }
    public void Body(string body) { Console.WriteLine("Body"); }
    public void Send() { Console.WriteLine("Sent!"); }
}

public class Program
{
    public static void Main(string[] args)
    {
        var mailManager = new MailManager();
        mailManager.From("alan@developer.com");
        mailManager.To("jonsmith@developer.com");
        mailManager.Subject("Code sample");
        mailManager.Body("This is the email body!");
        mailManager.Send();
    }
}

This is a pretty trivial code sample, but let's concentrate on the client of the MailManager class, which is the Program class. It creates an instance of MailManager and invokes routines such as .To(), .From(), .Body(), and .Send().

If we take a good look at the code, there are a couple of issues with writing it this way.

a. Notice the variable "mailManager". It is repeated a number of times, so we end up writing redundant code.

b. What if there is another mail we want to send out? Should we create a new instance of MailManager, or should we reuse the existing "mailManager" instance? The reason we have these questions in the first place is that the API (Application Programming Interface) is not clear to the consumer.

Let’s look at a better way to represent this code.

First, we make a small change to the MailManager class, as shown here: we modify each method to return the current instance of MailManager instead of void.

Notice that the Send() method still does not return the MailManager. I will explain why we did this in the next section.

Modified code is shown here.

public class MailManager
{
  public MailManager To(string address) { Console.WriteLine("To"); return this; }
  public MailManager From(string address) { Console.WriteLine("From"); return this; }
  public MailManager Subject(string subject) { Console.WriteLine("Subject"); return this;}
  public MailManager Body(string body) { Console.WriteLine("Body"); return this; }
  public void Send() { Console.WriteLine("Sent!"); }
}

In order to consume the new MailManager implementation, we will modify the Program as below.

public static void Main(string[] args)
{
    new MailManager()
        .From("alan@developer.com")
        .To("jonsmith@developer.com")
        .Subject("Code sample")
        .Body("This is the email body!")
        .Send();
}

The duplication and the verbosity of the code have been removed, and we have introduced a nice fluent-style API. We refer to this as the Cascade pattern. You have probably seen this pattern in many popular frameworks such as FluentValidation. One of my favourites is NBuilder.

Builder<Book>.CreateNew().With(x => x.Title = "some title").Build();

Cascade-Lambda pattern
This is where we start to add some flavour to the Cascade Pattern. Let’s extend this example a bit more. Based on the previous example, here is the code we ended up writing.

new MailManager()
    .From("alan@developer.com")
    .To("jonsmith@developer.com")
    .Subject("Code sample")
    .Body("This is the email body!")
    .Send();

Notice that the Send() method is invoked on an instance of MailManager. It is the last routine in the method chain, therefore it does not need to return an instance. This also means the API implicitly indicates that if we want to send another mail, we have to create a new MailManager instance. However, it is not explicitly clear to the user what to do after the call to .Send().

This is where we can take advantage of lambda expressions and make the intention explicit to the consumer of this API.

First we convert the Send() method to a static method and change its signature to accept an Action<MailManager> delegate, which takes the MailManager as a parameter. We invoke this action within the Send() method, as shown here:

public class MailManager
{
    public MailManager To(string address) { Console.WriteLine("To"); return this; }
    public MailManager From(string address) { Console.WriteLine("From"); return this; }
    public MailManager Subject(string subject) { Console.WriteLine("Subject"); return this; }
    public MailManager Body(string body) { Console.WriteLine("Body"); return this; }

    public static void Send(Action<MailManager> action)
    {
        action(new MailManager());
        Console.WriteLine("Sent!");
    }
}

In order to consume the MailManager class, we can change the Program as seen here:

MailManager.Send(mail => mail
            .From("alan@developer.com")
            .To("jonsmith@developer.com")
            .Subject("Code sample")
            .Body("This is the email body!"));

As we can see in the code sample, the delegate passed as an argument to the Send() method clearly indicates that the action is about constructing a mail, which is then sent out by the Send() method. This approach is much more elegant, as it removes the confusion around the Send() method described earlier.

Pluggable pattern
The best way to describe the pluggable behaviour is to use an example. The following code sample calculates the total of a given array of numbers.

public class Program
{
    public static void Main(string[] args)
    {
        Console.WriteLine(GetTotal(new [] {1, 2, 3, 4, 5, 6}));
        Console.Read();
    }

    public static int GetTotal(int[] numbers)
    {
        int total = 0;

        foreach (int n in numbers)
        {
            total += n;
        }

        return total;
    }
}

Let's say we have a new requirement: without changing the GetTotal() method, we would like to calculate the total of only the even numbers. Most of us would add another method, say GetTotalEvenNumbers, as shown here.

public class Program
{
    public static int GetTotal(int[] numbers)
    {
        int total = 0;

        foreach (int n in numbers)
        {
            total += n;
        }

        return total;
    }

    public static int GetTotalEvenNumbers(int[] numbers)
    {
        int total = 0;

        foreach (int n in numbers)
        {
            if (n%2 == 0)
            {
                total += n;
            }
        }

        return total;
    }

    public static void Main(string[] args)
    {
        Console.WriteLine(GetTotal(new [] {1, 2, 3, 4, 5, 6}));
        Console.WriteLine(GetTotalEvenNumbers(new[] { 1, 2, 3, 4, 5, 6 }));
        Console.Read();
    }
}

We just copied/pasted the existing function and added the condition that selects only the even numbers. How easy is that! Assuming there is another requirement to calculate the total of the odd numbers, it is again as simple as copying/pasting one of the earlier methods and modifying it slightly.

public static int GetTotalOddNumbers(int[] numbers)
{
    int total = 0;

    foreach (int n in numbers)
    {
        if (n % 2 != 0)
        {
            total += n;
        }
    }

    return total;
}

At this stage you probably realise that this is not the way we should write software: it is copy-paste, unmaintainable code. Why unmaintainable? Say we have to change the way we calculate the total – we would have to make that change in three different methods.

If we carefully analyse all three methods, they are very similar in their implementation; the only difference is the if condition.

In order to remove the code duplication, we can introduce Pluggable Behaviour.

We externalise the difference and inject it into the method. This way, the consumer of the API has control over what is passed into the method. This is called Pluggable Behaviour.

public class Program
{
    public static int GetTotal(int[] numbers, Predicate<int> selector)
    {
        int total = 0;

        foreach (int n in numbers)
        {
            if (selector(n))
            {
                total += n;
            }
        }

        return total;
    }

    public static void Main(string[] args)
    {
        Console.WriteLine(GetTotal(new [] {1, 2, 3, 4, 5, 6}, i => true));
        Console.WriteLine(GetTotal(new[] { 1, 2, 3, 4, 5, 6 }, i => i % 2 == 0));
        Console.WriteLine(GetTotal(new[] { 1, 2, 3, 4, 5, 6 }, i => i % 2 != 0));
        Console.Read();
    }
}

As we can see in the above example, a Predicate has been injected into the method. This allows us to externalise the selection criteria. The code duplication has been removed, and we have much more maintainable code.

In addition to this, let's say we want to extend the behaviour of the selector – for instance, so the selection is based on multiple parameters. For this, we can utilise a Func delegate: you can specify multiple parameters to the selector and return the result you desire, as sketched below.
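
As a sketch of that idea (the two-parameter selector is my own example, not from the article), the selector below receives both the value and its position:

public static int GetTotal(int[] numbers, Func<int, int, bool> selector)
{
    int total = 0;

    // Pass both the value and its index to the selector.
    for (int index = 0; index < numbers.Length; index++)
    {
        if (selector(numbers[index], index))
        {
            total += numbers[index];
        }
    }

    return total;
}

// Usage: total of the values at even positions.
// Console.WriteLine(GetTotal(new[] { 1, 2, 3, 4, 5, 6 }, (n, i) => i % 2 == 0));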

Execute Around Pattern with Lambda Expressions
This pattern allows us to execute a block of code using a lambda expression. That sounds very simple – it is what lambda expressions do. However, this pattern uses lambda expressions to implement a coding style that enhances one of the existing popular Design Patterns. Let's see an example.

Let’s say we want to clean-up resources in an object. We would write code similar to the following:

public class Database
{
    public Database()
    {
        Debug.WriteLine("Database Created..");
    }

    public void Query1()
    {
        Debug.WriteLine("Query1..");
    }

    public void Query2()
    {
        Debug.WriteLine("Query2..");
    }

    ~Database()
    {
        Debug.WriteLine("Cleaned Up!");
    }
}

public class Program
{
    public static void Main(string[] args)
    {
        var db = new Database();
        db.Query1();
        db.Query2();
    }
}

The output of this program would be..

Database Created..

Query1..

Query2..

Cleaned Up!

Note that the finalizer/destructor is invoked implicitly and cleans up the resources. The problem with the above code is that we have no control over when the finalizer is invoked.

Let’s see the same code in a for loop and execute it a couple of times:

public class Program
{
    public static void Main(string[] args)
    {
        for (int i = 0; i < 4; i++)
        {
            var db = new Database();
            db.Query1();
            db.Query2();
        }
    }
}

The Program would produce an output like the following.

Database Created..

Query1..

Query2..

Database Created..

Query1..

Query2..

Database Created..

Query1..

Query2..

Database Created..

Query1..

Query2..

Cleaned Up!

Cleaned Up!

Cleaned Up!

Cleaned Up!

All the clean-up operations, for every Database created, happened at the end of the loop!

This may not be ideal if we want to release the resources explicitly, so they don't live on the managed heap too long before being garbage collected. In a real-world application, there can be many objects in a large object graph trying to create database connections and timing out. The obvious solution is to clean up the resources explicitly, and as quickly as possible.

Let’s introduce a Cleanup() method as seen here:

public class Database
{
    public Database()
    {
        Debug.WriteLine("Database Created..");
    }

    public void Query1()
    {
        Debug.WriteLine("Query1..");
    }

    public void Query2()
    {
        Debug.WriteLine("Query2..");
    }

    public void Cleanup()
    {
        Debug.WriteLine("Cleaned Up!");
    }

    ~Database()
    {
        Debug.WriteLine("Cleaned Up!");
    }
}

public class Program
{
    public static void Main(string[] args)
    {
        for (int i = 0; i < 4; i++)
        {
            var db = new Database();
            db.Query1();
            db.Query2();
            db.Cleanup();
        }
    }
}

Database Created..

Query1..

Query2..

Cleaned Up!

Database Created..

Query1..

Query2..

Cleaned Up!

Database Created..

Query1..

Query2..

Cleaned Up!

Database Created..

Query1..

Query2..

Cleaned Up!

Cleaned Up!

Cleaned Up!

Cleaned Up!

Cleaned Up!

Note that we have not removed the finalizer yet. For each Database created, Cleanup() is now performed explicitly; as we saw in the first for-loop example, the remaining resources are garbage collected at the end of the loop.

One of the problems with this approach is that if there is an exception in one of the query operations, the clean-up operation never gets called.

As many of us do, we wrap the query operations in a try{} block and perform the clean-up operation in a finally{} block. Normally we would additionally catch{} the exception and do something with it, but I have left that out for brevity.

public class Program
{
    public static void Main(string[] args)
    {
        for (int i = 0; i < 4; i++)
        {
            var db = new Database();

            try
            {
                db.Query1();
                db.Query2();
            }
            finally
            {
                db.Cleanup();
            }
        }
    }
}

Technically this solves the problem, as the clean-up operation always gets invoked, regardless of whether there is an exception or not.

However, this approach still has some issues. For instance, every time we instantiate the Database and invoke query operations, we have to remember to wrap them in the try{} and finally{} blocks. To make things worse, in more complex situations we can introduce bugs by not knowing which operation to call, etc.

So how do we tackle this situation?

This is where most of us would use the well-known Dispose pattern. With the Dispose pattern, the explicit try{} and finally{} blocks are no longer required: the IDisposable.Dispose() method cleans up the resources at the end of the operations, including in exception scenarios during the query operations.

public class Database : IDisposable
{
    //More code..
    public void Dispose()
    {
        Cleanup();
        GC.SuppressFinalize(this);
    }
}

public class Program
{
    public static void Main(string[] args)
    {
        for (int i = 0; i < 4; i++)
        {
            using (var db = new Database())
            {
                db.Query1();
                db.Query2();
            }
        }
    }
}
This is definitely a much better way to write the code. The using block abstracts away the disposal of the object and guarantees that the clean-up will occur via the Dispose() routine. Most of us would settle for this approach, and you will see this pattern used in many applications.

But if we look closely, there is still a problem with the using pattern itself. Technically it does the right thing by explicitly cleaning up resources, but there is no guarantee that the client of the API will use a using block at all. For example, anyone can still write the following code:

var db = new Database();
db.Query1();
db.Query2();

For resource-intensive applications, if code like this is committed without being noticed, it can have an adverse effect on the application. So we are back to square one: as we have seen, no immediate dispose or clean-up operation takes place.

The chance of missing the Dispose() call is a serious problem. Not to mention, we are also presented with the challenge of making sure we implement the Dispose method/logic correctly – not everyone knows how, and most would resort to other resources such as blogs/articles. This is all unnecessary trouble to go through.
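
For reference, here is a minimal sketch of the standard Dispose pattern the text refers to; the Dispose(bool) overload is the part that is commonly implemented incorrectly.

public class Database : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // the finalizer no longer needs to run
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;

        if (disposing)
        {
            // release managed resources here
        }

        // release unmanaged resources here
        _disposed = true;
    }

    ~Database()
    {
        Dispose(false); // only unmanaged resources are safe to touch here
    }
}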

So, in order to address these issues, it would be ideal if we could change the API in such a way that developers cannot make mistakes.

This is where we can use lambda expressions to resolve these issues. In order to implement the Execute Around pattern with lambda expressions, we modify the code as follows:

public class Database
{
    private Database()
    {
        Debug.WriteLine("Database Created..");
    }

    public void Query1()
    {
        Debug.WriteLine("Query1..");
    }

    public void Query2()
    {
        Debug.WriteLine("Query2..");
    }

    private void Cleanup()
    {
        Debug.WriteLine("Cleaned Up!");
    }

    public static void Create(Action<Database> execution)
    {
        var db = new Database();

        try
        {
            execution(db);
        }
        finally
        {
            db.Cleanup();
        }
    }
}

There are a few interesting things happening here. The IDisposable implementation has been removed. The constructor of the class is now private, so the design enforces that the user cannot directly instantiate a Database. Similarly, the Cleanup() method is also private. There is a new Create() method, which takes an Action<Database> delegate (accepting an instance of the database) as a parameter. The implementation of this method executes the action specified by the parameter. Importantly, the execution of the action is wrapped in a try{} finally{} block, guaranteeing the clean-up operation we saw earlier.

Here is how the client/user consumes this API:

public class Program
{
    public static void Main(string[] args)
    {
        Database.Create(database =>
        {
            database.Query1();
            database.Query2();
        });
    }
}

The main difference from the previous approach is that we now abstract the clean-up operation away from the client/user, and instead guide the user towards a specific API. This approach feels very natural, as all the boilerplate code has been abstracted away from the client. This way, it is hard to imagine a developer making a mistake.

More Real-World Applications of the Execute Around Pattern with Lambda Expressions
Obviously this pattern is not limited to managing the resources of a database. It has many other potential uses. Here are some of its applications:

· In transactional code, where we create a transaction, check whether the transaction has completed, then commit or roll back as required (see the sketch after this list).

· When we have heavy external resources that we want to dispose of as quickly as possible, without waiting for .NET garbage collection.

· To work around some framework limitations – more on this below.
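
As a sketch of the transactional case from the first bullet (the class and method names are assumptions, not from the article), the execute-around method owns the TransactionScope, so the caller can never forget to complete or dispose it:

public static class Transactional
{
    public static void Execute(Action work)
    {
        using (var scope = new TransactionScope())
        {
            work();
            scope.Complete(); // only reached if work() did not throw;
                              // otherwise Dispose() rolls the transaction back
        }
    }
}

// Usage:
// Transactional.Execute(() => { /* transactional work here */ });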

Tackling Framework Limitations

I'm sure most of you are familiar with Unit Testing; I'm a huge fan of it myself. On the .NET platform, if you have used the MSTest framework, you will have seen the ExpectedException attribute. There is a limitation in the usage of this attribute: we cannot specify the exact call that is expected to throw the exception during test execution.

For example, see the test here.

[TestClass]
public class UnitTest1
{
    [TestMethod][ExpectedException(typeof(Exception))]
    public void SomeTestMethodThrowsException()
    {
        var sut = new SystemUnderTest("some param");

        sut.SomeMethod();
    }
}

This code demos typical usage of the ExpectedException attribute. Note that we expect sut.SomeMethod() to throw an exception.

Here is what the SUT (System Under Test) looks like. Note that I have removed the detailed implementation for brevity.

public class SystemUnderTest
{
    public SystemUnderTest(string param)
    {
    }

    public void SomeMethod()
    {
        //more code
        //throws exception
    }
}

During test execution, if an exception is thrown, it is caught and the test succeeds. However, the test does not know exactly where the exception was thrown; for example, it could have been during the creation of the SystemUnderTest.

We can use the Execute Around pattern with lambda expressions to address this limitation, by creating a helper method which accepts an Action parameter as the delegate.

public static class ExceptionAssert
{
    public static T Throws<T>(Action action) where T : Exception
    {
        try
        {
            action();
        }
        catch (T ex)
        {
            return ex;
        }

        Assert.Fail("Expected exception of type {0}.", typeof(T));

        return null;
    }
}

Now the test method can be written as seen here:

[TestMethod]
public void SomeTestMethodThrowsException()
{
    var sut = new SystemUnderTest("some param");

    ExceptionAssert.Throws<Exception>(() => sut.SomeMethod());
}

The ExceptionAssert.Throws<T>() helper above can be used to explicitly target the call that is expected to throw the exception.

A separate note…

We would not have this limitation in some of the other unit testing frameworks such as NUnit and xUnit. These frameworks already have built-in helper methods (implemented using this pattern) to target the exact operation that causes the exception.

For example xUnit.NET has

public static T Throws<T>(Assert.ThrowsDelegate testCode) where T : Exception
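
which, in a test, is used along these lines:

Assert.Throws<InvalidOperationException>(() => sut.SomeMethod());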

Summary
In this article we looked at various .NET Design Patterns. Design Patterns are good, but they only become effective if we use them correctly. We should think of Design Patterns as a set of tools that allow us to make better decisions about the code base, and treat them as communication tools so we can improve the communication around the code base.

We looked at the Abstract Factory pattern and the Cascade pattern. We also looked at applying a slightly different approach to existing Design Patterns using lambda expressions: the Cascade-Lambda pattern, the Pluggable pattern, and finally the Execute Around pattern with Lambda Expressions. Throughout this article we saw that lambda expressions are a great way to enhance the power of some well-known Design Patterns.

Performance Optimization in ASP.NET Web Sites

Performance is an important aspect of modern-day web application development. Good performance not only makes sites seamless to use, but also increases their scalability and future-proofs them. In this article, we will look at various aspects of improving the performance of web applications. We will concentrate on browser/web server side performance, as opposed to app-server or database-server optimizations.

Before we get into the details, the first question is: do we really need to optimize our web sites?

Amazon.com ran a test on their web site: when they slowed the site down by 100ms, sales dropped by 1%. As you would imagine, for a company like Amazon, 1% is a huge loss.

When Google slowed their search engine down by 500ms, traffic dropped by 20%.

As you can see, performance is a very important aspect of modern web site development.

This becomes even more important nowadays, as most sites need to be optimized for mobiles, tablets, and other portable devices that often run on low-throughput wireless networks.

Below are a few tips you can follow to build a better performing website.

Using the right ASP.NET framework

Check your .NET framework version. If you can upgrade your site to use .NET 4.5, you get some great performance optimizations. .NET 4.5 has a new Garbage Collector which can handle large heap sizes (i.e. tens of gigabytes). Other improvements are multi-core JIT compilation and ASP.NET App Suspension. These optimizations do not require code changes; they only involve upgrading the framework or changing configuration (i.e. IIS).

Below is a great article giving an overview of the performance improvements in .NET 4.5:

http://msdn.microsoft.com/en-us/magazine/hh882452.aspx

File compression

Web servers often handle requests bloated with a lot of static content. This content can be compressed, reducing the bandwidth used by requests.

The below setting is only available in IIS7 and later.


<configuration>
  <system.webServer>
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>

The above configuration setting relates directly to IIS and has nothing to do with ASP.NET. The urlCompression name sounds strange, but it is not really about compressing URLs: it compresses, or gzips, the content sent to the browser. By enabling it, you gzip the content sent to the browser, saving a lot of bandwidth. Also notice that the above setting covers not only static content such as CSS/JS, but also dynamic content such as .aspx pages or Razor views.
If your web server is running on Windows Server 2008 R2 (IIS 7.5), these settings are enabled by default. For other server environments, you will want to tweak the configuration as above so you can take advantage of compression.

Reducing the number of requests to the Server

ASP.NET provides a great way to bundle and minify these static content files. This way, the number of requests to the server can be reduced.
There are many ways to bundle and minify files – MSBuild, third-party tools, bundle configs, etc. – but the end result is the same.
One of the easiest ways is to use the new Visual Studio Web Essentials pack. You can download the extension from the URL below.
http://vswebessentials.com/

Once installed, you can create minified and bundled CSS and script files using the Web Essentials context menu items.

The above menu items create the bundled/minified files and add them to your project. Now, instead of referencing multiple CSS and JS files, you simply reference the minified and bundled versions. This means fewer requests to the server, which results in less bandwidth and faster responses.
You may wonder whether, even with all the files bundled together, the total number of kilobytes remains the same as serving them individually. This is not necessarily true. For example:

  • The minification process reduces the size of the files by removing comments, shortening variable names, removing spaces, etc.
  • Bundling the files together also removes the additional HTTP headers that would be required for each individual request.
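
For completeness, here is a sketch using ASP.NET's own bundling support (System.Web.Optimization, available in MVC 4 and later); the bundle names and file paths are illustrative:

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // One request for all scripts, one request for all styles.
        bundles.Add(new ScriptBundle("~/bundles/site").Include(
            "~/Scripts/jquery-{version}.js",
            "~/Scripts/site.js"));

        bundles.Add(new StyleBundle("~/Content/css").Include(
            "~/Content/site.css"));
    }
}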

You can also find some great information on the bundling and minification process below.
http://www.asp.net/mvc/tutorials/mvc-4/bundling-and-minification

Get control of your image requests

It is also possible to reduce the number of requests for images.
There are a couple of ways to do this.

a. You can create an image sprite. With an image sprite, you combine multiple images into a single large image, then use CSS to reposition those images within the site.
The article below shows how to create image sprites using Web Essentials.
http://www.itorian.com/2014/02/creating-image-sprite-in-visual-studio.html

b. Base64 Data URIs
With this approach you never make any requests to the server for these images: you take your smaller images and embed them directly into your CSS/stylesheet. The original screenshot is unavailable, but the rule would start out something like the illustrative snippet below.
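
.main-content {
    background: url('images/header.png'); /* the image path is illustrative */
    padding-left: 10px;
    padding-top: 30px;
}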

This will become.

.main-content {
    background: url('data:image/png;base64,iVBORw0KGgoA………..');
    padding-left: 10px;
    padding-top: 30px;
}

Setting up expiry headers

By default, the static content served by the web server does not have an expiry date. We can instruct the browser how to cache the content, and for how long.

If you set the expiration to a future date, the browser will not make a request to the server; instead, the static content will be served from the browser's internal cache.
In web.config, there is a section where you can control these settings.

<configuration>
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="DisableCache" />
    </staticContent>
  </system.webServer>
</configuration>

We can change these settings to cache all the static content requested from the server, for instance for a maximum of a year. Note that the expiration is a sliding expiration, which means the content will be served from the cache from today up to a year.

<configuration>
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>

Now the web server will automatically add this header to the static files.

(Screenshot omitted: the response headers for the static files now include the max-age cache settings. Note that a dynamic file, such as the HTML document itself, does not get the cache settings applied.)
This is all good, but what happens if you make a change to your CSS/JS file? With the above settings in place, the browser will not pick up those changes for an entire year. One way to tackle this issue is to force the browser to refresh the cache. You can also change the URL (i.e. add a query string, fingerprint, or timestamp) so the browser treats it as a new URL and refreshes the cache.
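
A minimal illustration of the query-string technique (the version value is arbitrary):

<link rel="stylesheet" href="/Content/site.css?v=2" />
<!-- bumping v=2 to v=3 makes the browser treat this as a new URL and fetch a fresh copy -->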

Script rendering order

If possible, move your <script> tags to the very bottom of the page. This is important because, during rendering, when the browser comes across a <script> tag, it stops rendering the rest of the page. If you leave the script tags at the bottom, the page/HTML renders faster and the scripts execute later.

Sometimes this is not possible, because some DOM elements or CSS require those scripts to be present before they can render. If that's the case, you can move those scripts further up the page. However, as a rule of thumb, try to keep the scripts as low as possible.
Positioning the <script> tag lower is not the only option; there are other ways to defer the loading of script files.
For example, you can use the defer attribute.

<script src="some.js" defer></script>

By using the defer attribute, you specify that the script should not run until the page has fully loaded.

Another way is to configure your scripts to run asynchronously.

<script src="some.js" async></script>

Using the above async attribute, the script will run asynchronously as soon as it is available.

Optimizing images

Images are static content, and they consume bandwidth when requested from the web server. One way to address this is to reduce the size of the images – in other words, to optimize the images.

Optimizing an image does not mean reducing its quality; rather, the pixels and palettes are rearranged to make the overall size smaller.

Web Definition of “Image Optimization”
“This term is used to describe the process of image slicing and resolution reduction. This is done to make file sizes smaller so images will load faster.”

So how do you optimize images?
There are many third-party tools that will optimize images. The VS extension below will do the trick for you.
http://visualstudiogallery.msdn.microsoft.com/a56eddd3-d79b-48ac-8c8f-2db06ade77c3

Size of favicon:
This is often ignored, but you may not have noticed that the favicon can sometimes be considerably large.

Caching HTML

If you have pages that never get updated, you can cache those pages for a certain period. For example, if you use Web Forms or ASP.NET MVC, you can use ASP.NET Output Caching.
http://msdn.microsoft.com/en-us/library/xsbfdd8c(v=vs.90).aspx
With ASP.NET Web Pages you can modify the response headers for caching. Please see below for some caching techniques for Web Pages.
http://msdn.microsoft.com/en-us/library/vstudio/06bh14hk(v=vs.100).aspx
http://www.asp.net/web-pages/tutorials/performance-and-traffic/15-caching-to-improve-the-performance-of-your-website
It is important to note that this only works for pages that do not need frequent updates.

Tooling support for Optimizing Web Sites

I believe this is the most important aspect of this article, as the right tool goes a long way towards producing optimized web sites. There are a number of tools, but two always stand out.
a. Yahoo's YSlow (https://developer.yahoo.com/yslow/)


It is a Firefox add-on and one of the most popular tools among web developers. It has some great features, including a grading system and instructions for optimizing your site. Most of the content described in this article is also provided as tips in this tool, so once you optimize, your grade should go up.
b. Google's PageSpeed
This is another great Chrome browser extension. Once installed, it is available as part of the Chrome developer tools.


There is no guarantee that you can make every optimisation these tools suggest; it all depends on the site and the requirements you need to support. But combining both of these tools to measure your site's performance, and applying some of the tips mentioned above, is a great start.

Summary
In this article, we looked at some of the approaches you can take to build an optimized, better performing web site. This includes reducing the number of requests to the server, changes to the .NET framework, compression techniques, and finally some tools that help you make better decisions about web site performance.
It is also important to consider that while these techniques optimize web site/web server performance, the overall performance can depend on a number of other factors, including the performance of the application server and database server (in a multi-tiered setup). But there is a lot of improvement to be gained by optimizing sites as described in this article.

Entity Framework updating (Code First) existing entity using foreign key Id(s) mistakes

Recently we came across a scenario like the one below.

We had an (EF-bound) entity, and we wanted to update the entity's foreign key reference to point to another existing entity. For example, say:

A Person has a Brazilian passport, and that person needs to update the passport to an American passport.

The domain/entity model looks like this

 public class Person
 {
   public int Id { get; set; }
   public string Name { get; set; }

   public virtual Passport Passport {get; set; }
   public int PassportId { get; set; }
 }

public class Passport
{
  public int Id { get; set; }
  public string Number { get; set; }
  public string Nationality { get; set; }
}

Notice that Person has two properties relating to the Passport foreign key reference. One is a Navigation Property (see http://blog.staticvoid.co.nz/2012/7/17/entity_framework-navigation_property_basics_with_code_first) and the other is the foreign key id. Notice that it uses {EntityName}Id as the property name; it needs to follow this convention for EF to recognise it as the foreign key. With any other name, EF would create an additional key.

Once you run the EF migration (http://msdn.microsoft.com/en-au/data/jj591621.aspx) you will see the tables generated, with the foreign key in place.

Now, to update the first person's passport to an American passport, the code would look something like this:

 using (var dbContext = new PersonContext())
 {
   Console.WriteLine("Update person with passport");
   var person = dbContext.Persons.First();
   var americanPassport = dbContext.Passports.
            Single(x => x.Nationality == "American");

   person.PassportId = americanPassport.Id; //Notice this line

   dbContext.SaveChanges();
 }

I personally don't like this approach, for a couple of reasons.

a. It feels "hacky" having to set the id explicitly to update the FK reference.
b. Look at the Person entity: there are two properties that roughly achieve the same thing. If you use the Navigation Property, you can access the Id, right? So why do we need another Id property?
c. Suddenly your domain entity becomes database-'aware' of FKs etc.

My preferred approach is to leave the entity as it is and just use the Navigation Property – unless you really need the id property for some other reason, which I can't think of.

 public class Person
 {
    public int Id { get; set; }
    public string Name { get; set; }

    public virtual Passport Passport {get; set; }
}

That's it! EF will generate the foreign key for the Passport entity as usual.

Now you can update the entity using the Navigation Property (not the FK id property):

 using (var dbContext = new PersonContext())
 {
    Console.WriteLine("Update person with passport");

    var person = dbContext.Persons.First();
    var americanPassport = dbContext.Passports.
        Single(x => x.Nationality == "American");
    person.Passport = americanPassport;

    dbContext.SaveChanges();
 }

This is a lot cleaner than setting the id reference. To perform an update like this, you need to make sure both entities (Person and Passport) are in the Unchanged state:

Unchanged: the entity is being tracked by the context and exists in the database, and its property values have not changed from the values in the database

Entity Framework Code First Automatic Migration & Existing Tables

We have been smooth sailing with EF6 Code First migrations, but recently had a few issues dealing with migrating existing db tables.

Basically we have the below requirements.

  1. All new tables we create use Code First migrations. We want to avoid migrating any existing db tables, but we still want the EF mappings configured for the existing tables, and those entities in the DbContext, so we can work with those tables.
  2. We want to use automatic migrations – this seems to be working well within the team environment.
  3. We want the migration to run only in our dev environment, i.e. not in the QA, UAT, or Prod environments.

Below are a couple of related SO questions:

http://stackoverflow.com/questions/19964679/ef-5-code-migration-errors-there-is-already-an-object-named-in-the-datab

http://stackoverflow.com/questions/15303542/entity-framework-automatic-migrations-existing-database

The problem is that EF migrations do not appear to work seamlessly with an existing table structure.

When we run the automatic migration for the very first time, we want the existing table structure left unchanged, but we also want EF to be aware of the existing tables so we can perform operations on them. It seems we can only have one or the other, but not both.

Old and new Entities

(I have simplified the code so we can focus on the problem)

public class NewTable : BaseEntity
{
    public string Title { get; set; }
}

public class OldTable
{
     public int Id { get; set; }
     public string Name { get; set; }
}

BaseEntity contains some common properties such as Id, DateModified, etc.

Below is what the mappings look like.

    public class NewTableMap : EntityMap<NewTable>
    {
        public NewTableMap()
        {
            Property(x => x.Title).HasMaxLength(40);
        }
    }

    public class OldTableMap : EntityTypeConfiguration<OldTable>
    {
        public OldTableMap()
        {
            HasKey(t => t.Id);
            ToTable("OldTable");
            Property(x => x.Id).HasColumnName("fldId");
        }
    }

Note that I have explicitly mapped the OldTable entity to the "OldTable" table, and the Id to the db column name "fldId".

The DbContext has the standard operations, including the model binding.

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
     modelBuilder.Configurations.Add(new OldTableMap());
     modelBuilder.Configurations.Add(new NewTableMap());

     base.OnModelCreating(modelBuilder);
}

The SQL database starts out with just the old table.

Since we are starting from scratch, let's enable migrations.

PM> Enable-Migrations -EnableAutomaticMigrations

The above command creates a new migration Configuration class with automatic migrations enabled.
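
The generated class looks roughly like this (the context type is whatever your project's DbContext is called):

internal sealed class Configuration : DbMigrationsConfiguration<MyDbContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = true;
    }
}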

We have not created any tables yet! So let’s run the migration command.

PM> Update-Database

We get the below error:
There is already an object named ‘OldTable’ in the database.

As you know, our existing table is already in the db. Since the script that EF creates has a CreateTable("dbo.OldTable") call, the migration throws the above error.

How can we tell the EF migration to ignore our OldTable??? I honestly don't know the answer to that, which is why the workaround below may help you.

First create a specific migration.

PM> Add-Migration FirstMigration

This creates a migration script within the Migrations folder. The script has two CreateTable calls: one for the NewTable, one for the OldTable.


public override void Up()
{
     CreateTable(
                "dbo.NewTable",
                c => new
                    {
                        Id = c.Int(nullable: false, identity: true),
                        Title = c.String(maxLength: 40),
                        IsActive = c.Boolean(nullable: false),
                        DateModified = c.DateTime(nullable: false,
                                    precision: 7, storeType: "datetime2"),
                        DateCreated = c.DateTime(nullable: false,
                                    precision: 7, storeType: "datetime2"),
                        UserId = c.Guid(nullable: false),
                        MarkAsDeleted = c.Boolean(nullable: false),
                    })
                .PrimaryKey(t => t.Id);

  CreateTable(
                "dbo.OldTable",
                c => new
                    {
                        fldId = c.Int(nullable: false, identity: true),
                        Name = c.String(),
                    })
                .PrimaryKey(t => t.fldId);

}

If we were to run this migration as-is, we would get exactly the same error. So let's modify the migration script by removing the CreateTable("dbo.OldTable", …) call from Up() and the corresponding DropTable("dbo.OldTable") from Down(), leaving the script that creates the new table as it is.
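
After the edit, the migration only touches the new table – roughly:

public override void Up()
{
     CreateTable(
                "dbo.NewTable",
                c => new
                    {
                        Id = c.Int(nullable: false, identity: true),
                        Title = c.String(maxLength: 40),
                        // ...remaining columns as generated above
                    })
                .PrimaryKey(t => t.Id);

     // The CreateTable("dbo.OldTable", ...) call has been deleted.
}

public override void Down()
{
     DropTable("dbo.NewTable");
     // The DropTable("dbo.OldTable") call has been deleted.
}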

Now if you run the migration:

PM> Update-Database
Specify the '-Verbose' flag to view the SQL statements being applied to the target database.
Applying explicit migrations: [201407120749125_FirstMigration].
Applying explicit migration: 201407120749125_FirstMigration.
Running Seed method.

Now if you check the database, the NewTable has been created and the existing table has not been changed.

Handling practical problems

It is very likely that someone else on your team has model changes different from yours. In that case, you want to make sure you don't commit your initial migration script; treat it as your private migration script. This also assumes you have a non-shared development database.

Your team will also have to follow the same process. You can add this migration file to the source control exclusion list so the file won't get committed to the repository.

Now you can make changes to the model as you normally would, and running the automatic migration will apply those changes accordingly.
It will never attempt to create your existing table, as the automatic migration is now based on the modified FirstMigration; subsequent migrations build on the last successful migration.

Just be cautious in your next automatic migrations:
Changing an entity which is already mapped to an existing table will also change the existing table.
If you use the Ignore() method on a property of the existing entity, i.e. Ignore(x => x.Name), it will also exclude that column from the existing table.
If you call Ignore() on the entire existing entity during model binding, i.e. modelBuilder.Ignore<OldTable>(), it will just drop your existing table. If this happens, simply remove the Ignore() call and run the migration again.

CSS Enlightenment with LESS

I recently started modifying the CSS of one of my older sites. Looking at the file gave me a headache, as there was so much duplication; it was hard to figure out what was really going on.

I had also heard about LESS and other CSS pre-processors, but had not really had a chance to use one until now.

After switching to LESS, the CSS file is much leaner and easier to navigate. My CSS classes are no longer duplicated; instead they are wrapped in reusable types called mixins.

There are reusable variables defined for the various styles, and I reuse those variables frequently.

The Visual Studio Web Essentials 2012 package is also a great way to work with LESS. It compiles your .LESS file, and upon save you can see the CSS output next to your .LESS file.

Let's take a look at a simple example – and of course a real example (not foo, not bar).

In my site I had a CSS block that was repeated twice, as below.

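The original snippet was a screenshot; the following is an illustrative reconstruction of the kind of duplication described (selectors and values are mine):

a.menu-link {
    color: #4a90d9;
    font-family: Arial, sans-serif;
    text-decoration: none;
}

a.menu-link:hover {
    color: #2a6db0;
    font-family: Arial, sans-serif;
    text-decoration: none;
}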

As you see, these are styles for a link which should display different colours depending on whether the link is hovered or not. Now, if you were to change the font, you would have to change it in two places. With LESS, you can introduce variables that compile into the same set of CSS.

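Again as an illustrative sketch, LESS variables remove the repeated values:

@link-font: Arial, sans-serif;
@link-color: #4a90d9;
@link-hover-color: #2a6db0;

a.menu-link {
    color: @link-color;
    font-family: @link-font;
    text-decoration: none;
}

a.menu-link:hover {
    color: @link-hover-color;
    font-family: @link-font;
    text-decoration: none;
}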

These two classes can also be extracted into a mixin, with the colour controlled from the calling CSS class.

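A sketch of the mixin version (names are again illustrative):

.link-style(@color) {
    color: @color;
    font-family: @link-font;
    text-decoration: none;
}

a.menu-link {
    .link-style(@link-color);
}

a.menu-link:hover {
    .link-style(@link-hover-color);
}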

It is very much like functional programming! Now if we have to modify the styles, they are all in a single place.

Once you compile, you can see the generated CSS as below.

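For the mixin sketch above, the compiled CSS comes out along these lines – the same rules as the duplicated original, but now maintained in one place:

a.menu-link {
    color: #4a90d9;
    font-family: Arial, sans-serif;
    text-decoration: none;
}

a.menu-link:hover {
    color: #2a6db0;
    font-family: Arial, sans-serif;
    text-decoration: none;
}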

LESS makes it much easier to work with CSS.

VS2010 and ReSharper performances

Below are some tips to improve the performance of ReSharper and the VS2010 IDE.

Sometimes it is not just ReSharper degrading overall VS performance; a lot of other factors contribute to this as well. Also think about getting to know ReSharper well: even though ReSharper causes a performance decrease, you still code much faster with it. Try to use code snippets and shortcut keys – being more productive increases development speed.

Below are a few tips to help you maximise the performance of your IDE/ReSharper. Please note that by enabling some of the items below, you are trading productivity for IDE speed. If none of the tips below solve your problem, consider upgrading your hardware as a last resort.

VS2010

Uninstall Extensions
Disable or uninstall the extensions you don't use. Navigate to VS Tools -> Extension Manager -> disable or uninstall the extensions that aren't used.

VS Hot Fixes
Ensure you have the latest performance-related hot fixes from Microsoft Visual Studio: http://connect.microsoft.com/VisualStudio/Downloads There is evidence that people who applied some of these hot fixes got a performance boost in their development environment.

Project Unload
Unload the projects you are not currently working on. Fewer projects means fewer source files and fewer reads and writes, hence faster processing.

Remove Navigation Bar
If you are using ReSharper, you don't need VS2010 to show the list of methods and fields at the top of the file (Ctrl+Alt+F does this nicely). Go to Tools | Options | Text Editor | C# and uncheck Navigation bar.

Turn off Track Changes
Go to Tools | Options | Text Editor and uncheck Track changes. This reduces overhead and speeds up IDE response.

Turn off Track Active items
This stops Solution Explorer jumping around whenever you select files in different projects. Go to Tools | Options | Projects and Solutions and uncheck Track Active Item in Solution Explorer. Now, as you move across files in different projects, the left pane stays steady instead of jumping around.

Turn off AutoToolboxPopulate
There is an option (introduced in VS 2005) that causes VS to automatically populate the toolbox with any controls you compile as part of your solution. This is a useful feature when developing controls, since it updates them when you build, but it can cause VS to take a long time in some circumstances. To disable it, select Tools | Options | Windows Forms Designer and set AutoToolboxPopulate to False. If you are not using it, turn this feature off for better response times.

Turn Off splash screen
When you launch VS2010, it first loads a splash screen. You can disable this and improve startup time by changing the Target path of your VS2010 shortcut to:
"C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" /nosplash

Pay attention to Warnings and Errors
Not a big deal, but this also contributes to VS compilation time. Having a lot of messages serialized to the output window causes slowdown, so pay attention to the first sign of warnings and fix them.

IntelliTrace (Ultimate Only)
Most of you won't be using the VS2010 Ultimate Edition, but if you do and your IDE is performing slowly, try disabling IntelliTrace under:
Tools | Options | IntelliTrace | General.

Build and Run
When you press F5 to debug an application, Visual Studio first compiles the projects, then runs your project and attaches the debugger. You can cut down on the number of projects Visual Studio builds during this process by checking the "Only build startup projects and dependencies on Run" checkbox in Tools | Options. This increases performance for large solutions.

Turn off the Startup Page
Tools -> Options -> Environment -> Startup.
You can configure this to "load last loaded solution" and/or remove the startup news channel.

Keep your projects nice and clean
Permanently remove unwanted files, assembly references, and project files from the solution. If you need them back for some reason, they are in TFS. Add-ins like ReSharper constantly analyse the code in the background, so less is better.

Hardware acceleration
There are some articles on the web indicating that Visual Studio doesn't run well on machines without a graphics card that supports DirectX 9, or with bugs in the graphics card driver. This is due to the new UI, which is built on WPF and uses hardware acceleration. You can try disabling it for better performance:

Tools | Options | Environment | General – uncheck the "Automatically adjust visual experience based on client performance" checkbox.
For more information please refer to the below link.
http://support.microsoft.com/kb/2023207

ReSharper

As with VS, you can easily turn off the features you don't need on a daily basis.

– Navigate to ReSharper | Options and select Settings under "Code Inspection". Ensure "Analyse errors in whole solution" is turned off.
– Navigate to ReSharper | Options and select Inspection Severity under "Code Inspection". There are a lot of static analysis features you can turn off; simply select "Do not show" from the drop-down.
– Try disabling ReSharper IntelliSense and using VS IntelliSense instead.
– There were some performance issues with ReSharper 5. As per JetBrains threads, those issues have been rectified; please ensure you have the latest ReSharper installed.