Archive for the ‘.NET Architecture’ Category


Author: Cole Francis, Architect

BACKGROUND

While you may not be able to tell it by my verbose articles, I am a devout source code minimalist by nature.  Although I’m not entirely certain how I ended up like this, I do have a few loose theories.

  1. I’m probably lazy.  I state this because I’m constantly looking for ways to do more work in fewer lines of code.  This is probably why I’m so partial to software design patterns.  I feel like once I know them, being able to recapitulate them on command allows me to manufacture software at a much quicker pace.  If you’ve spent any time at all playing in the software integration space, then you can appreciate how imperative it is to be quick and nimble.
  2. I’m kind of old.  I cut my teeth in a period when machine resources weren’t exactly plentiful, so it was extremely important that your code didn’t consume too much memory, throttle down the CPU (singular), or take up an extraordinary amount of space on the hard drive or network share.  If it did, people had no problem crawling out of the woodwork to scold you.
  3. I have a guilty conscience.  As much as I would like to code with reckless abandon, I simply cannot bring myself to do it.  I’m sure I would lose sleep at night if I did.  In my opinion, concerns need to be separated, coding conventions need to be followed, yada, yada, yada…  However, there are situations that sometimes cause me to overlook certain coding standards in favor of a lazier approach, and that’s when simplicity trumps rigidity!

So, without further delay, here’s a perfect example of my laziness persevering.  Let’s say that an AngularJS code base exists that properly separates its concerns by implementing a number of client-side controllers that perform their own generic activities.  At this point, you’re ready to lay down the client-side service layer functions to communicate with a number of remote Web-based REST API endpoints.  So, you start to write a bunch of service functions that use the AngularJS $http service and its implied promise pattern, and then suddenly you have an epiphany!  Why not write one generic AngularJS service function that is capable of calling most RESTful Web API endpoints?  So, you think about it for a second, and then you lay down this little eclectic dynamo instead:



var contenttype = 'application/json';
var datatype = 'json';

/* A generic async service function that can call a RESTful Web API endpoint
   inside an implied $http promise. */
this.serviceAction = function (httpVerb, baseUrl, endpoint, qs) {
  return $http({
    method: httpVerb,
    url: baseUrl + endpoint + qs,
    headers: { 'Content-Type': contenttype },  // $http takes a headers map, not a contentType option
    responseType: datatype
  }).success(function (data) {
    return data;
  }).error(function () {
    return null;
  });
};

 
That’s literally all there is to it! To wrap things up on the AngularJS client-side controller, you would call the service by implementing a fleshed-out version of the code snippet below. Provided you aren’t passing in lists of data, and as long as the content types and data types follow the same pattern, you should be able to write an endless number of AngularJS controller functions that all call into the same service function, much like the one I’ve provided above. See, I told you I’m lazy. 🙂



/* Async call the AngularJS Service (shown above)
*/
$scope.doStuff = function (passedInId) {

  // Call the AngularJS service layer (shown above), which invokes the remote endpoint
  httpservice.serviceAction('GET', $scope.baseURL(), '/some/endpoint', '?id=' + passedInId).then(function (response) {
    if (response != null && response.data.length > 0) {
      // Apply the response data to two-way bound array here!
    }
  });
};

 
Thanks for reading and keep on coding! 🙂

Chain-of-Responsibility and Unity IoC

Author: Cole Francis, Architect


BACKGROUND

The Chain-of-Responsibility design pattern is a rather obscure behavioral pattern that is useful for creating workflows within a solution. Despite its obscurity, it happens to be one of my favorite all-time design patterns. The pattern chains the concrete handling classes together through a common chaining mechanism (some architects and developers also refer to this as the command class), passing a common request payload to each class through a succession pipeline until it reaches a class that is able to act upon the request.

I have personally and successfully architected a solution using this design pattern for a large and well-branded U.S. retail company. The solution is responsible for printing sale signs in well over a thousand stores across the United States, saving my client approximately $10 million a year, year over year. From a development perspective, leveraging this design pattern made it easy to isolate the various units of the workflow, simplifying the process of assigning my various team members to the discrete pieces of functionality that were exposed as a result of using this behavioral design pattern.

While I am unable to discuss anything further about my previous personal experiences with this client, I am able to demonstrate a working example of this design pattern and focus on a business problem that is completely unrelated to anything I’ve done in the past. So, without further ado, allow me to introduce a fictitious business problem to you.


THE BUSINESS PROBLEM

A local transport company manages a fleet of 3 trucks and delivers locally made engines to different distribution centers across the Midwest. Here are the cargo hold specifications for their three delivery trucks:

  • Truck 1: (max load 5,000 lbs. and is used only to haul Red Engines)
  • Truck 2: (max load 6,000 lbs. and is used only to haul Yellow Engines)
  • Truck 3: (max load 6,500 lbs. and is used only to haul Blue Engines)

The company that manufactures these engines also ships their products locally to the T.H.E. Truck King Trucking Company warehouse for storage and distribution. After it’s all said and done, the dimensions of the boxed engines are all the same (42″W x 50″L x 36″H), but the engine weights and the locations the engines get shipped to vary significantly:

  • Red Engines: 532 lbs. each (only get shipped to Chicago, IL; Model Number: R1773456935)
  • Blue Engines: 1,386 lbs. each (only get shipped to Overland Park, KS; Model Number: B8439845841)
  • Yellow Engines: 1,783 lbs. each (only get shipped to Des Moines, IA; Model Number: Y4833345760)

Here are some other things the client brings up during your conversation with them:

  • The crates the engines are transported on are very durable and allow for the engines to be stacked on top of each other during transport, which means each truck will reach its maximum weight load well before it runs out of space.
  • As pointed out above, specific engine types get shipped to specific locations.
  • Occasionally engines are put in the wrong trucks, as loaders are very busy and are working strictly from memory. When this occurs, it’s a miserable experience for everyone.
  • Sometimes the trucks aren’t filled to capacity, causing them to operate well below their maximum load. When this occurs, shipping and other operational costs skyrocket, causing us to lose money.
  • The loading crew has been notified of these issues, but mistakes continue to be made.
  • The larger distribution centers we ship to have stated that if the problem isn’t resolved soon, they will cancel our contract and look for a more reliable and efficient transport company.
  • We need a solution that tells our loading crew what truck to load the engine on, as well as something that tells them whether or not the truck is filled to maximum capacity from a maximum weight load standpoint.
  • The engine manufacturing company doesn’t plan to produce any other types of engines in the near future, but there is a possibility they may want to have more than one type of engine distributed to each of the three distribution points. There are no firm arrangements for this to happen yet, but the solution that is put into place must take this into account.


THE SOLUTION

Because we know the dimensions of each truck’s cargo hold, as well as the weight and dimensions of each product being shipped, the solution should be architected to allow a handler to determine the best course of action for the request it is handling. This is similar to the shipping company allowing the handlers of the crates to determine which trucks the different products should be loaded into.

However, instead of allowing a single entity to make this type of determination, we’ll decouple this association and allow the decision pattern to pass through a chain of inspecting classes, forcing the responsibility of determining whether or not it is capable of handling the request onto the class itself. If the class determines it cannot handle the request, then it will pass the request onto the next handler, who will then decide whether or not it is able to handle the request. This process will continue until we’ve exhausted every available concrete handling class.

By architecting the solution in this manner, we’re fairly sure that we’ll be able to meet all of the functional requirements articulated to us by our client.

The Chain of Responsibility (CoR) Pattern


Given the previous requirements, a great fit is the Chain of Responsibility (CoR) pattern, which allows a request to pass along a chain of handlers until one of them determines it is capable of handling it. Unlike a Mediator or Observer pattern, CoR doesn’t maintain a central link between the handlers in the chain, which is why CoR stands out from other patterns. The sender and the receiver remain completely decoupled, and each handler maintains its own set of standards and separation of concerns, making it a true object-oriented workflow pattern.

Another great aspect of this pattern is that there is very little logic in the sender beyond setting up the successive chain of concrete handlers. What’s more, the sender isn’t tasked with interrogating the request before handing it off. It merely chains the concrete handlers together and allows them to figure out the rest, with each concrete handler being responsible for its own separate criteria and concerns, as the snippet below shows.
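To make that concrete, here is the sender’s entire job in miniature, using the handler classes we’ll build out below:

// The sender wires the chain once and hands off the request; the handlers do the rest.
Handler h1 = new ShipmentHandlerBlue();
Handler h2 = new ShipmentHandlerYellow();
Handler h3 = new ShipmentHandlerRed();

h1.SetSuccessor(h2);    // blue forwards to yellow
h2.SetSuccessor(h3);    // yellow forwards to red

IEngine result = h1.HandleRequest(engine);  // each handler either acts or forwards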


CoR PATTERN CLASSES AND OBJECTS

The classes and objects participating in this pattern are:

  1. The Handler Class (Handler Interface):
    1. An inheritable class used to:
      i. Define an interface for handling requests.
      ii. Set the successor used to interact with the next concrete handler.
      iii. Implement the successor link in the chain.
  2. The ConcreteHandler Class (Handler)
    1. Interrogates the request and determines whether or not it can act upon the information.
    2. If it can’t act upon the request, then it forwards the request to the next handler (aka the successor).
  3. The Sender Class (Messenger)
    1. The Sender Class can either be a client application that establishes the successive order of the ConcreteHandler classes, or
    2. The Sender Class can be a concrete class that acts autonomously of the client to establish the successive order of the ConcreteHandler classes and then acts upon them.


THE HANDLER CLASS

We’ll start out with the creation of the handler class. As pointed out in the summary of the code and description in the previous section, the Handler class is responsible for assigning the next successor concrete class in a chain of classes that we’ll set up in just a bit.

using System;
using THE_TruckKing.Interfaces;

namespace THE_TruckKing
{
    /// <summary>
    /// The handler class defines the interface for handling requests and holds
    /// a reference to the successor (the next handler in the chain)
    /// This is the underlying key to the success of the 'Chain of Responsibility Pattern'
    /// </summary>
    /// 
    public abstract class Handler
    {
        protected Handler successor;

        public abstract IEngine HandleRequest(IEngine engine);

        public void SetSuccessor(Handler successor)
        {
            this.successor = successor;
        }
    }
}


THE CONCRETE CLASSES (THIS CAN BE BASED ON SUBJECT OR FUNCTION)

Next, we’ll implement the concrete handlers, which inherit from the handler class and act upon the request message. Here is the concrete handler for the blue engines.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using THE_TruckKing.Utilities;
using THE_TruckKing.Interfaces;

namespace THE_TruckKing
{
    /// <summary>
    /// Handles blue engine shipments to Overland Park, KS
    /// </summary>
    /// 
    public class ShipmentHandlerBlue : Handler
    {
        public override IEngine HandleRequest(IEngine engine)
        {
            int totalQuantityShippingHold = 0;
            int totalReturnToStock = 0;

            if (engine.Type == Constants.Engines.BLUE_ENGINE)
            {
                for (int i = 0; i < engine.QuantityShipping; i++)
                {
                    if (Constants.Engine_MAX_Weights.BLUE_ENGINE >=
                        ((engine.TotalQuantityShipping + 1) * Constants.Engine_Base_Weights.BLUE_ENGINE))
                    {
                        engine.TotalQuantityShipping += 1;
                        totalQuantityShippingHold += 1;
                    }
                    else
                    {
                        totalReturnToStock += 1;
                    }
                }

                Console.WriteLine(string.Format("Load {0} blue engine(s) on the truck destined for {1}",
                    totalQuantityShippingHold, Constants.Trucks_Destinations.BLUE_TRUCK));
                Console.WriteLine("");

                if (totalReturnToStock > 0)
                {
                    Console.WriteLine(string.Format("{0} of the {1} blue engine(s) exceed load capacity. Please return them to stock", 
                        totalReturnToStock.ToString(), engine.QuantityShipping.ToString()));
                    Console.WriteLine("");
                }
            }
            else if (successor != null)
            {
                // Not a blue engine; pass the request down the chain (guarding against the end of the chain)
                successor.HandleRequest(engine);
            }

            return engine;
        }
    }
}

Continuing with the concrete handlers, we’ll now implement the handler class for the red engines. If a request reaches this handler and it isn’t for red engines, the handler passes the request along to its successor, just like the blue handler does. As you can see, it follows the exact same structure as the blue engine concrete handler.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using THE_TruckKing.Utilities;
using THE_TruckKing.Interfaces;

namespace THE_TruckKing
{
    /// <summary>
    /// Handles red engine shipments to Chicago, IL
    /// </summary>
    /// 
    public class ShipmentHandlerRed : Handler
    {
        public override IEngine HandleRequest(IEngine engine)
        {
            int totalQuantityShippingHold = 0;
            int totalReturnToStock = 0;

            if (engine.Type == Constants.Engines.RED_ENGINE)
            {
                for (int i = 0; i < engine.QuantityShipping; i++)
                {
                    if (Constants.Engine_MAX_Weights.RED_ENGINE >=
                        ((engine.TotalQuantityShipping + 1) * Constants.Engine_Base_Weights.RED_ENGINE))
                    {
                        engine.TotalQuantityShipping += 1;
                        totalQuantityShippingHold += 1;
                    }
                    else
                    {
                        totalReturnToStock += 1;
                    }
                }

                Console.WriteLine(string.Format("Load {0} red engine(s) on the truck destined for {1}",
                    totalQuantityShippingHold, Constants.Trucks_Destinations.RED_TRUCK));
                Console.WriteLine("");

                if (totalReturnToStock > 0)
                {
                    Console.WriteLine(string.Format("{0} of the {1} red engine(s) exceed load capacity. Please return them to stock", 
                        totalReturnToStock.ToString(), engine.QuantityShipping.ToString()));
                    Console.WriteLine("");
                }
            }
            else if (successor != null)
            {
                // Not a red engine; pass the request down the chain (guarding against the end of the chain)
                successor.HandleRequest(engine);
            }

            return engine;
        }
    }
}

The final concrete handler we’ll implement handles the workload for the yellow engines. If the request is for yellow engines, this handler acts on it; otherwise it passes the request along to its successor, and if no successor remains, the request simply falls through the end of the chain and the same IEngine values that were passed in are returned, meaning no work was performed on the request message.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using THE_TruckKing.Utilities;
using THE_TruckKing.Interfaces;

namespace THE_TruckKing
{
    /// <summary>
    /// Handles yellow engine shipments to Des Moines, IA
    /// </summary>
    /// 
    public class ShipmentHandlerYellow : Handler
    {
        public override IEngine HandleRequest(IEngine engine)
        {
            int totalQuantityShippingHold = 0;
            int totalReturnToStock = 0;

            if (engine.Type == Constants.Engines.YELLOW_ENGINE)
            {
                for (int i = 0; i < engine.QuantityShipping; i++)
                {
                    if (Constants.Engine_MAX_Weights.YELLOW_ENGINE >=
                        ((engine.TotalQuantityShipping + 1) * Constants.Engine_Base_Weights.YELLOW_ENGINE))
                    {
                        engine.TotalQuantityShipping += 1;
                        totalQuantityShippingHold += 1;
                    }
                    else
                    {
                        totalReturnToStock += 1;
                    }
                }

                Console.WriteLine(string.Format("Load {0} yellow engine(s) on the truck destined for {1}", 
                    totalQuantityShippingHold, Constants.Trucks_Destinations.YELLOW_TRUCK));
                Console.WriteLine("");

                if (totalReturnToStock > 0)
                {
                    Console.WriteLine(string.Format("{0} of the {1} yellow engine(s) exceed load capacity. Please return them to stock",
                        totalReturnToStock.ToString(), engine.QuantityShipping.ToString()));
                    Console.WriteLine("");
                }
            }
            else if (successor != null)
            {
                // Not a yellow engine; pass the request down the chain (guarding against the end of the chain)
                successor.HandleRequest(engine);
            }

            return engine;
        }
    }
}
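As an aside, you’ve probably noticed that the three concrete handlers differ only in their constants. I’ve kept them separate for clarity, but here’s a hypothetical refactoring (a sketch of my own, not part of the original design) that hoists the shared loading logic into a parameterized base class:

using System;
using THE_TruckKing.Interfaces;

namespace THE_TruckKing
{
    /// <summary>
    /// A hypothetical shared base: each concrete handler only supplies its constants
    /// </summary>
    public abstract class EngineShipmentHandler : Handler
    {
        protected abstract string EngineType { get; }    // e.g. Constants.Engines.BLUE_ENGINE
        protected abstract decimal BaseWeight { get; }   // e.g. Constants.Engine_Base_Weights.BLUE_ENGINE
        protected abstract decimal MaxLoad { get; }      // e.g. Constants.Engine_MAX_Weights.BLUE_ENGINE
        protected abstract string Destination { get; }   // e.g. Constants.Trucks_Destinations.BLUE_TRUCK

        public override IEngine HandleRequest(IEngine engine)
        {
            if (engine.Type != EngineType)
            {
                // Not our engine type; forward the request if anyone is left in the chain
                return successor != null ? successor.HandleRequest(engine) : engine;
            }

            int loaded = 0;
            int returned = 0;

            for (int i = 0; i < engine.QuantityShipping; i++)
            {
                if (MaxLoad >= (engine.TotalQuantityShipping + 1) * BaseWeight)
                {
                    engine.TotalQuantityShipping += 1;
                    loaded += 1;
                }
                else
                {
                    returned += 1;
                }
            }

            Console.WriteLine("Load {0} engine(s) on the truck destined for {1}", loaded, Destination);

            if (returned > 0)
            {
                Console.WriteLine("{0} of the {1} engine(s) exceed load capacity. Please return them to stock",
                    returned, engine.QuantityShipping);
            }

            return engine;
        }
    }
}

With that approach, each concrete handler shrinks to four property overrides, and any new engine type becomes a handful of lines.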


THE IEngine INTERFACE

You’ll notice that each concrete handler accepts the same IEngine interface type and inherits from the same Handler base class. This is ultimately what allows the chain-of-responsibility pattern to work. IEngine also implies that we have a concrete class that abides by the tenets of the IEngine interface, so we’ll define the IEngine interface and the concrete Engine type, below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace THE_TruckKing.Interfaces
{
    public interface IEngine
    {
        int ID { get; set; }
        string Name { get; set; }
        string Description { get; set; }
        string Type { get; set; }
        string ModelNumber { get; set; }
        int QuantityShipping { get; set; }
        int TotalQuantityShipping { get; set; }
        decimal EngineWeightShipping { get; set; }
        decimal TotalWeightShipped { get; set; }
        decimal BaseWeight { get; set; }
    }
}


THE ENGINE CLASS

The Engine class is pretty straightforward. It implements the IEngine interface and provides us with a mechanism for storing our payload data. Actually, I’m not very fond of including this in my example, because I believe that most of it can be inferred and it unnecessarily bloats the example. However, I’ve included it anyway just to be thorough.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using THE_TruckKing.Interfaces;
using System.Runtime.Serialization;

namespace THE_TruckKing.Entities
{
    [Serializable]
    public class Engine : IEngine
    {
        /// <summary>
        /// Defaults the string-based properties so they're never null
        /// </summary>
        public Engine()
        {
            Name = string.Empty;
            Description = string.Empty;
            Type = string.Empty;
            ModelNumber = string.Empty;
        }

        public int ID { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public string Type { get; set; }
        public string ModelNumber { get; set; }
        public int QuantityShipping { get; set; }
        public int TotalQuantityShipping { get; set; }
        public decimal EngineWeightShipping { get; set; }
        public decimal TotalWeightShipped { get; set; }
        public decimal BaseWeight { get; set; }
    }
}


THE CONSTANT CLASSES

For purposes of simplicity, my solution doesn’t tie into a database, so I’m hard-coding all of my reference values by front-loading the base IEngine types into constants. In reality, you would be much better off storing these values in a data backing store of your choice (e.g. configuration, database, etc.).
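For example, here’s a minimal sketch of what a configuration-backed lookup might look like, assuming hypothetical appSettings keys in App.config:

using System.Configuration;   // requires a project reference to System.Configuration

namespace THE_TruckKing.Utilities
{
    public static class EngineSettings
    {
        // Reads a base weight from App.config, e.g.:
        //   <appSettings>
        //     <add key="RedEngineBaseWeight" value="532" />
        //   </appSettings>
        public static decimal GetBaseWeight(string keyName)
        {
            return decimal.Parse(ConfigurationManager.AppSettings[keyName]);
        }
    }
}

With that caveat out of the way, here are the constant classes as they stand: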

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace THE_TruckKing.Utilities
{
    public static class Constants
    {
        public static class Engines
        {
            public const string RED_ENGINE = "R1773456935";
            public const string BLUE_ENGINE = "B8439845841";
            public const string YELLOW_ENGINE = "Y4833345760";
        }

        public static class Engine_Base_Weights
        {
            public const decimal RED_ENGINE = 532;
            public const decimal BLUE_ENGINE = 1386;
            public const decimal YELLOW_ENGINE = 1783;
        }

        public static class Engine_MAX_Weights
        {
            public const decimal RED_ENGINE = 5000;
            public const decimal BLUE_ENGINE = 6500;
            public const decimal YELLOW_ENGINE = 6000;
        }

        public static class Trucks
        {
            public const string RED_TRUCK = "the red truck";
            public const string BLUE_TRUCK = "the blue truck";
            public const string YELLOW_TRUCK = "the yellow truck";
        }

        public static class Trucks_Destinations
        {
            public const string RED_TRUCK = "Chicago, IL";
            public const string BLUE_TRUCK = "Overland Park, KS";
            public const string YELLOW_TRUCK = "Des Moines, IA";
        }
    }
}


IoC THROUGH THE MICROSOFT UNITY FRAMEWORK

There is one more piece that I’ll implement in the project, and that’s Inversion of Control (IoC) using the Microsoft Unity Framework. I’m using Microsoft Unity 3 in this example, which can be downloaded here:

Microsoft Unity Framework 3

Basically, you just download it (or the latest version of it) and unpack it into a directory on your machine. Afterwards, you can reference the specific assemblies you need (e.g. MVC integration, Service Locator, Dependency Injection, etc.) from the Unity download. Of course, you’ll still have to hand-roll any patterns the Unity library doesn’t provide, but it will still offer you a decent jump start in the areas the library does specifically address.

In this example, what I’m trying to achieve is this:

  • I want to decouple object dependencies from the main assembly to any client applications, which will allow me to minimize the amount of work necessary to replace or update certain properties on the IEngine and Engine classes without necessarily forcing me to make changes to methods that leverage these classes throughout the solution.
  • I’m assuming that the client application that will eventually consume this assembly will need to know very little about its concrete implementation at compile time, so adding or subtracting certain properties on the interface and its supporting concrete type should pose little or no rework for the client applications.
  • Even though I didn’t follow a TDD approach, I still might want to create unit tests that perform assertions on various parts of the code base in the future, so each class and method must be testable without any hard dependencies.
  • I want to decouple my classes from being responsible for locating and managing the lifetime of dependencies.

After installing the Microsoft Unity Framework, create references to the following assemblies within the project:

  • Microsoft.Practices.Unity.dll
  • Microsoft.Practices.Unity.Configuration.dll

Next, drop in the following factory pattern to set up the ability to create an instance of the Engine object. Ideally, our client application will simply be able to call the CreateInstance method and pass in the desired engineType; the factory will then Register and Resolve the object through the IoC container and set a few base properties. We’ll achieve this using the Microsoft Unity Framework.


THE IoC CONTAINER FACTORY

Before I focus too deeply on the following code, I want to point out that I’ve hardcoded the type registrations in order to simplify the readability of the code. Normally they would be driven through some form of dynamic configuration. Regardless…
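For the curious, here’s a minimal sketch of what configuration-driven registration could look like using Unity’s configuration assembly (the mapping below is illustrative):

using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Configuration;

// Reads the type mappings from the <unity> section of App.config instead of
// hardcoding them, e.g.:
//   <register type="THE_TruckKing.Interfaces.IEngine, THE_TruckKing"
//             mapTo="THE_TruckKing.Entities.Engine, THE_TruckKing" />
IUnityContainer container = new UnityContainer();
container.LoadConfiguration();                 // applies the default container's registrations
IEngine engine = container.Resolve<IEngine>();

With that caveat noted, here’s the factory: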

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Entities = THE_TruckKing.Entities;
using THE_TruckKing.Interfaces;
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Configuration;
using THE_TruckKing.Utilities;

namespace THE_TruckKing.Factories
{
    public class Engine
    {
        static public IEngine CreateInstance(string engineType)
        {
            // Invert control of the object's creation to the Unity container
            IUnityContainer _container = new UnityContainer();
            _container.RegisterType(typeof(IEngine), typeof(Entities.Engine));
            IEngine obj = _container.Resolve<IEngine>();

            // Front-load the base properties for the requested engine type
            return SetValues(obj, engineType);
        }

        static private IEngine SetValues(IEngine engine, string engineType)
        {
            try
            {
                switch (engineType)
                {
                    case Constants.Engines.RED_ENGINE:
                        {
                            engine.Type = Constants.Engines.RED_ENGINE;
                            engine.BaseWeight = Constants.Engine_Base_Weights.RED_ENGINE;
                            break;
                        }
                    case Constants.Engines.BLUE_ENGINE:
                        {
                            engine.Type = Constants.Engines.BLUE_ENGINE;
                            engine.BaseWeight = Constants.Engine_Base_Weights.BLUE_ENGINE;
                            break;
                        }
                    case Constants.Engines.YELLOW_ENGINE:
                        {
                            engine.Type = Constants.Engines.YELLOW_ENGINE;
                            engine.BaseWeight = Constants.Engine_Base_Weights.YELLOW_ENGINE;
                            break;
                        }
                    default:
                        {
                            break;
                        }
                }
                
                return engine;
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}


CREATING THE CHAIN-OF-RESPONSIBILITY COMMANDS

At this point, we’ve coded a textbook Chain-of-Responsibility design pattern. But, in order to complete the pattern we still need to establish the successive order of the handlers in the chain. So, we’ll solve this piece of the puzzle by creating a quick console application that references both the assembly we just created, as well as a couple of the Microsoft Unity Framework assemblies:

  • Microsoft.Practices.Unity.dll
  • Microsoft.Practices.Unity.Configuration.dll

Afterwards, drop the following code in:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using THE_TruckKing;
using THE_TruckKing.Utilities;
using THE_TruckKing.Interfaces;
using THE_TruckKing.Entities;
using Factories = THE_TruckKing.Factories;
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Configuration;


namespace CoR_Pattern_Client
{
    /// <summary>
    /// Program that signifies an engine is ready to be loaded at the dock
    /// </summary>
    class Program
    {
        static void Main(string[] args)
        {
            int x = 0;

            // Calls the Unity IoC Factory Handler to create the instances of the objects
            IEngine redEngine = Factories.Engine.CreateInstance(Constants.Engines.RED_ENGINE);
            IEngine blueEngine = Factories.Engine.CreateInstance(Constants.Engines.BLUE_ENGINE);
            IEngine yellowEngine = Factories.Engine.CreateInstance(Constants.Engines.YELLOW_ENGINE);

            while (x != 999)
            {
                Console.WriteLine("Specify an engine that is ready to be loaded:");
                Console.WriteLine("");
                Console.WriteLine("Press (R) - To Load Red Engine");
                Console.WriteLine("Press (Y) - To Load Yellow Engine");
                Console.WriteLine("Press (B) - To Load Blue Engine");
                Console.WriteLine("Press (Q) - To Quit");

                var input = Console.ReadKey();
                Console.WriteLine("");
                Console.WriteLine("");

                // Completes the Chain-of-Responsibility Pattern
                Handler h1 = new ShipmentHandlerBlue();
                Handler h2 = new ShipmentHandlerYellow();
                Handler h3 = new ShipmentHandlerRed();

                h1.SetSuccessor(h2);
                h2.SetSuccessor(h3);

                switch (input.Key.ToString().ToUpperInvariant())
                {
                    case "R":
                        {
                            redEngine.Type = Constants.Engines.RED_ENGINE;
                            redEngine.QuantityShipping = GetShipmentQuantity();
                            redEngine = h1.HandleRequest(redEngine);
                            break;
                        }
                    case "Y":
                        {
                            yellowEngine.Type = Constants.Engines.YELLOW_ENGINE;
                            yellowEngine.QuantityShipping = GetShipmentQuantity();
                            yellowEngine = h1.HandleRequest(yellowEngine);
                            break;
                        }
                    case "B":
                        {
                            blueEngine.Type = Constants.Engines.BLUE_ENGINE;
                            blueEngine.QuantityShipping = GetShipmentQuantity();
                            blueEngine = h1.HandleRequest(blueEngine);                            
                            break;
                        }
                    case "Q":
                        {
                            Environment.Exit(0);
                            break;
                        }
                    default:
                        {
                            break;
                        }
                }

                Console.ReadLine();
                Console.Clear();
            }
        }

        private static int GetShipmentQuantity()
        {
            int quantity = 0;

            try
            {
                Console.WriteLine("");
                Console.WriteLine("How many engines are you loading?");
                quantity = int.Parse(Console.ReadLine());
                Console.WriteLine("");
                return quantity;
            }
            catch (Exception)
            {
                return 0;
            }
        }
    }
}


FINALLY

Take special note of the following lines of code that exist in the previous codebase. First, we’ll focus on the Microsoft Unity, Inversion of Control aspect of it, which is exhibited in the following lines of code:

  • IEngine redEngine = Factories.Engine.CreateInstance(Constants.Engines.RED_ENGINE);
  • IEngine blueEngine = Factories.Engine.CreateInstance(Constants.Engines.BLUE_ENGINE);
  • IEngine yellowEngine = Factories.Engine.CreateInstance(Constants.Engines.YELLOW_ENGINE);

This is what allows us to decouple our object dependencies from the main assembly to any client applications. The control of each object’s creation is inverted to the Factory inside the assembly, so the client needs to know very little about creating or consuming the object itself. The Microsoft Unity Framework takes care of all of this for you.
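One practical payoff of this inversion is testability. A hypothetical test could re-map the same interface to a test double without touching any consuming code (FakeEngine below is illustrative, not part of the solution):

// The client code that depends on IEngine never changes; only the container mapping does.
IUnityContainer container = new UnityContainer();
container.RegisterType<IEngine, FakeEngine>();   // FakeEngine: a hypothetical test double implementing IEngine
IEngine engine = container.Resolve<IEngine>();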

The next interesting piece involves closing the gap on the chain-of-responsibility pattern by establishing a definitive successor chain using the concrete handler types. The following lines of code designate that the ShipmentHandlerBlue() concrete handler will receive the initial payload request, and if it can’t handle it, then it will be its responsibility to pass the request message along to the ShipmentHandlerYellow() concrete handler in the chain.

Finally, if the yellow handler can’t handle the payload request either, then it will pass the responsibility down the chain to the ShipmentHandlerRed concrete handler for fulfillment. Each concrete handler acts autonomously in the chain, meaning that it doesn’t have to know anything else about any other concrete handler, enacting a true separation of concerns and a classic, textbook example of the chain-of-responsibility pattern itself.

Handler h1 = new ShipmentHandlerBlue();
Handler h2 = new ShipmentHandlerYellow();
Handler h3 = new ShipmentHandlerRed();

h1.SetSuccessor(h2);
h2.SetSuccessor(h3);

When you run the application, you’ll be prompted for an engine type and a quantity, and the console will tell you how many engines to load on which truck, and how many to return to stock once the truck’s maximum load is reached.

Thanks for reading and keep on coding! 🙂

The Observer Pattern

Author: Cole Francis, Architect

Click here to download my solution on GitHub

BACKGROUND

If you read my previous article, then you’ll know that it focused on the importance of software design patterns. I called out that there are some architects and developers in the field who are averse to incorporating them into their solutions for a variety of bad reasons. Regardless, even if you try your heart out to intentionally avoid incorporating them into your designs and solutions, the truth of the matter is you’ll eventually use them whether you intend to or not.

A great example of this is the Observer Pattern, which is arguably the most widely used software design pattern in the world. It comes in a number of different styles, with the most popular being the Model-View-Controller (MVC) architecture, whereby the View represents the Observer and the Model represents the observable Subject. People occasionally make the mistake of referring to MVC as a design pattern, but it is actually an architectural style built on the Observer design pattern.

The Observer Design Pattern’s taxonomy is categorized in the Behavioral Pattern Genus of the Software Design Pattern Family because of its object-event oriented communication structure, which causes changes in the Subject to be reflected in the Observer. In this respect, the Subject is intentionally kept oblivious to, or completely decoupled from, the Observer class.

Some people also make the mistake of calling the Observer Pattern the Publish-Subscribe Pattern, but they are actually two distinct patterns that happen to share some functional overlap. The significant difference between the two is that the Observer Pattern “notifies” its Observers whenever there’s a change in the observed Subject, whereas the Publish-Subscribe Pattern “broadcasts” notifications to its Subscribers.
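To illustrate the distinction, here’s a quick sketch of my own (the class names are illustrative and separate from the pattern implementation later in this article):

using System;
using System.Collections.Generic;

// Observer: the subject notifies the observers it directly knows about.
public interface IDemoObserver { void Update(string change); }

public class DemoSubject
{
    private readonly List<IDemoObserver> _observers = new List<IDemoObserver>();
    public void Attach(IDemoObserver observer) { _observers.Add(observer); }
    public void Change(string change)
    {
        foreach (var observer in _observers) { observer.Update(change); }
    }
}

// Publish-Subscribe: publishers and subscribers only know a broker, which
// broadcasts each message to every subscriber of a topic.
public static class DemoBroker
{
    private static readonly Dictionary<string, List<Action<object>>> _subscribers =
        new Dictionary<string, List<Action<object>>>();

    public static void Subscribe(string topic, Action<object> handler)
    {
        if (!_subscribers.ContainsKey(topic)) { _subscribers[topic] = new List<Action<object>>(); }
        _subscribers[topic].Add(handler);
    }

    public static void Publish(string topic, object message)
    {
        List<Action<object>> handlers;
        if (_subscribers.TryGetValue(topic, out handlers))
        {
            foreach (var handler in handlers) { handler(message); }
        }
    }
}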

A COUPLE OF NEGATIVES

As with any software design pattern, there are some cons associated with using the Observer Pattern. For instance, the base implementation of the pattern calls for a concrete Observer, which isn’t always practical, and it’s certainly not easily extensible: each time a new Subject is added to the solution, the assembly has to be rebuilt and redeployed, a practice that many larger, bureaucratically-structured companies often frown upon. Given this, I’ll show you how to get around this little nuance later in this article.

Another problem associated with the Observer Pattern involves the potential for memory leaks, which are also referred to as “lapsed listeners” or “latent listeners”. Whatever you call it, a memory leak by any other name is still a memory leak. Because an explicit registering and unregistering is generally required with this design pattern, if Subjects aren’t properly unregistered (particularly ones that consume large amounts of memory), then stale Subjects continue to be needlessly observed, resulting in unnecessary memory consumption and performance degradation. I’ll explain how you can work around this issue as well.

OBSERVER DESIGN PATTERN OVERVIEW

Typically, there are three (3) distinct classes that comprise the heart and soul of the Observer design pattern, and they are the Observer class, the Subject class, and the Client (or Program). Beyond this, I’ve seen the pattern implemented in a number of different ways, and asking a roomful of architects how they might go about implementing this design pattern is a lot like asking them how they like their morning eggs. You’ll probably get a variety of different responses back.

However, my implementation of this design pattern typically deviates from the norm because I like to include a fourth class in the mix, called the LifetimeManager class. The purpose of the LifetimeManager class is to allow each Subject class to autonomously maintain its own lifetime, alleviating the need for the client to explicitly call the Unregister() method on the Subject object. It’s not that I don’t want the client program to explicitly call the Subject’s Unregister() method, but this cleanup call does occasionally get omitted for whatever reason. So, the inclusion of the LifetimeManager class provides an additional safeguard to protect us against this. I’ll focus on the LifetimeManager class a little bit later in this article.

Moving on, the Observer design pattern is depicted in the class diagram below. As you can see, the Subject inherits from the LifetimeManager class and implements the ISubject interface, while the client program and the Observer are left decoupled from the Subject. You will also notice that the Subject provides the ability for a program to register and unregister a Subject class. By inheriting from the LifetimeManager class, the Subject class also allows the client to establish specific lifetime requirements, such as whether it uses a basic or sliding expiration and its lifetime in seconds, minutes, hours, days, months, or even years. And, if the developer fails to provide this information through the Subject’s overloaded constructor, then the default constructor supplies sensible values to make sure the Subject is cleaned up properly.

[Class diagram: the Subject inherits from LifetimeManager and implements ISubject, while the client program and the Observer remain decoupled from the Subject]

A MORE DETAILED EXPLANATION OF THE PATTERN

The Subject Class

The Subject class also contains a Change() method that works exactly like the Register() method. This is something else that’s not normally a part of this design pattern, but I intentionally added it because I don’t think it makes sense to call the Register() method every time changes are made to the Subject object. I think that makes for a bad developer experience. Instead, registering the Subject object once and then calling the Change() method anytime the Subject object changes makes much more sense in my opinion. We can impose the cleanup work upon the Observer class each time the Subject object is changed.

The Observer Class

The Observer class includes an Update() method, which accepts a Subject object and the operation the Observer class needs to perform on it. For instance, if there’s an add or an update to the Subject object, then the Observer searches through its observed Subject cache to find it using its unique SubscriptionId and CacheId values. If the Subject exists in the cache, then the Observer updates it by deleting the old Subject and adding the new one. If it doesn’t find it in the Subject cache, then it simply adds it. The Observer also accepts a remove action, which causes it to remove the Subject from its observed state.

The Client Program

The only other important element to remember is that anytime an action takes place, notifications are always propagated back to the client program so that it’s always aware of what’s going on behind the scenes. When the Subject is registered, the client program is notified; when the Subject is unregistered, the client program is notified; when the observed Subject’s data changes, the client program is notified. One of the important tenets of this design pattern is that the client program is always kept aware of any changes that occur to its observed Subject.

The LifetimeManager Class

The LifetimeManager class, which is my own creation, is responsible for maintaining the lifetime of each Subject object that gets created. So, for every Subject object that gets created, a LifetimeManager also gets created. The LifetimeManager class includes a variety of malleable properties, which I’ll go over shortly. Keep in mind that these properties get set by the default constructor of the Subject class, and they can also be overridden, either by passing the override values into the overloaded constructor that I provide in my design when the Subject object first gets created, or by simply changing any one of the LifetimeManager class’s property values and then calling the Subject’s Change() method. It’s really as simple as that. Nevertheless, here are the supported properties that make up the LifetimeManager class:

1. ExpirationType: Tells the system whether the expiration type is either basic or sliding. Basic expiration means that the Subject expires at a specific point in time. Sliding expiration simply means that anytime a change is made to the Subject, the expiration slides based upon the values you provide.

2. ExpirationValue: This is an integer whose meaning depends on the next property, TimePrecision.

3. TimePrecision: This is an enumeration that includes specific time intervals, like seconds, minutes, hours, days, months, and even years. So, if I provide 30 for ExpirationValue and enumTimePrecision.Minutes for TimePrecision, then this means that I want my data cache to automatically expire, and hence self-unregister, in 30 minutes. What’s more, if you fail to provide these values at the time you Register() your Subject, then the default constructor in the Subject class supplies them for you (see the sketch below).
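To make the three properties concrete, here’s a minimal sketch (an assumption of mine, not the actual LifetimeManager source) of how they could combine into an absolute expiration time. Only enumTimePrecision.Minutes is named explicitly in this article, so the other member names are assumed:

// Combines ExpirationStart, ExpirationValue, and TimePrecision into a concrete deadline.
public DateTime GetExpiration(DateTime expirationStart, int expirationValue,
                              Enums.enumTimePrecision timePrecision)
{
    switch (timePrecision)
    {
        case Enums.enumTimePrecision.Milliseconds: return expirationStart.AddMilliseconds(expirationValue);
        case Enums.enumTimePrecision.Seconds:      return expirationStart.AddSeconds(expirationValue);
        case Enums.enumTimePrecision.Minutes:      return expirationStart.AddMinutes(expirationValue);
        case Enums.enumTimePrecision.Hours:        return expirationStart.AddHours(expirationValue);
        case Enums.enumTimePrecision.Days:         return expirationStart.AddDays(expirationValue);
        case Enums.enumTimePrecision.Months:       return expirationStart.AddMonths(expirationValue);
        default:                                   return expirationStart.AddYears(expirationValue);
    }
}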

So, now that you have an overview and visual understanding of the Observer pattern class structure and relationships, I’ll now spend a little time going over my implementation of the pattern by sharing my working source code with you. My intention is that you can use my source code to get your very own working model up and running. This will allow you to experiment with the pattern on your own. It would also be nice to get some feedback regarding how well you think my custom LifetimeManager class helps to avoid unwanted memory leaks by providing each Subject class with the ability to maintain its own lifetime.

THE OBSERVER CLASS SOURCE CODE

For the most part, it’s the responsibility of the Observer class to perform update operations on a given Subject when requested. Furthermore, the Observer class should respect and observe any changes to the stored Subject’s lifecycle until the Subject requests the Observer to unregister it. Here’s my working example of the Observer class:


using System;
using System.Collections.Generic;


namespace ObserverClient
{
    /// <summary>
    /// The static Observer class, which maintains the global cache of observed Subjects
    /// </summary>
    public static class Observer
    {
        #region Member Variables

        /// <summary>
        /// The global data cache
        /// </summary>
        private static List<LifetimeManager> _data = new List<LifetimeManager>();

        /// <summary>
        /// Synchronizes access to the global cache across threads
        /// </summary>
        private static readonly object _sync = new object();

        #endregion

        #region Methods

        /// <summary>
        /// Provides CRUD operations on the global cache object
        /// </summary>
        internal static bool Update(LifetimeManager data, Enums.enumSubjectAction action)
        {
            try
            {
                // This locks the critical section, just in case a timer event fires at the same
                // time the main thread's operation is in action. Note that the lock object must
                // be shared (static); locking a freshly created local object would guard nothing.
                lock (_sync)
                {
                    switch (action)
                    {
                        case Enums.enumSubjectAction.AddChange:
                            {
                                // Finds the original object and removes it, and then re-adds it to the list
                                // (Equals provides value comparison for the object-typed identifiers)
                                _data.RemoveAll(a => Equals(a.SubscriptionId, data.SubscriptionId) && Equals(a.CacheId, data.CacheId));
                                _data.Add(data);
                                break;
                            }
                        case Enums.enumSubjectAction.RemoveChild:
                            {
                                // Finds the entry in the list and removes it
                                _data.RemoveAll(a => Equals(a.SubscriptionId, data.SubscriptionId) && Equals(a.CacheId, data.CacheId));
                                break;
                            }
                        case Enums.enumSubjectAction.RemoveParent:
                            {
                                // Finds every entry for the subscription and removes them all
                                _data.RemoveAll(a => Equals(a.SubscriptionId, data.SubscriptionId));
                                break;
                            }
                        default:
                            {
                                break;
                            }
                    }

                    return true;
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        #endregion
    }
}

THE SUBJECT CLASS SOURCE CODE

Once again, the intent of the Subject class is to expose methods to the client that allow for the registering and unregistering of the observable Subject. It’s the responsibility of the Subject to call the Observer class’s Update() method and request that specific actions be taken on it (e.g. add or remove).

In my code example below, the Observer class acts as a storage cache for observed Subjects, and it also provides some basic operations necessary to adequately maintain the observed Subjects.

As a side note, take a look at the default and overloaded constructors in the Subject class below. It’s in these two areas of the Subject object that I either automatically control the Subject’s lifetime or allow the developer to override it. Once the lifetime of the Subject object expires, it is unregistered from the Observer, and the client program is automatically notified that the Subject was removed from observation.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using ObserverClient.Interface;


namespace ObserverClient
{
    /// <summary>
    /// This is the Subject class, which provides the ability to register and unregister an object.
    /// </summary>
    public class Subject : LifetimeManager, ISubject
    {
        #region Events

        /// <summary>
        /// Handles the change notification event
        /// </summary>
        public event NotifyChangeEventHandler<Subject> NotifyChanged;

        #endregion

        #region Methods

        /// <summary>
        /// The delegate for the NotifyChanged event
        /// </summary>
        public delegate void NotifyChangeEventHandler<T>(T notifyinfo, Enums.enumSubjectAction action);

        /// <summary>
        /// The register method.  This adds the entry and data to the Observer's data cache
        /// and then provides notification of the event to the caller if it's successfully added.
        /// </summary>
        public void Register()
        {
            try
            {
                if (Observer.Update(this, Enums.enumSubjectAction.AddChange) && this.NotifyChanged != null)
                {
                    this.NotifyChanged(this, Enums.enumSubjectAction.AddChange);
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        /// <summary>
        /// The unregister method.  This removes the entry and data in the Observer's data cache
        /// and then provides notification of the event to the caller if it's successfully removed.
        /// </summary>
        public void Unregister()
        {
            try
            {
                if (this.SubscriptionId != null && this.CacheId == null)
                {
                    Observer.Update(this, Enums.enumSubjectAction.RemoveParent);

                    if (this.NotifyChanged != null)
                    {
                        this.NotifyChanged(this, Enums.enumSubjectAction.RemoveParent);
                    }
                }
                else if (this.SubscriptionId != null && this.CacheId != null)
                {
                    Observer.Update(this, Enums.enumSubjectAction.RemoveChild);

                    if (this.NotifyChanged != null)
                    {
                        this.NotifyChanged(this, Enums.enumSubjectAction.RemoveChild);
                    }
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        /// <summary>
        /// The change method.  This modifies the entry and data in the Observer's data cache
        /// and then provides notification of the event to the caller if successful.
        /// </summary>
        public void Change()
        {
            try
            {
                if (Observer.Update(this, Enums.enumSubjectAction.AddChange))
                {
                    if (this.ExpirationType == Enums.enumExpirationType.Sliding)
                    {
                        // Sliding expiration: every change restarts the expiration clock
                        this.ExpirationStart = DateTime.Now;
                        this.MonitorExpiration();
                    }

                    if (this.NotifyChanged != null)
                    {
                        this.NotifyChanged(this, Enums.enumSubjectAction.AddChange);
                    }
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        /// <summary>
        /// The event handler for object expiration notifications. It calls unregister for the current object.
        /// </summary>
        void s_ExpiredUnregisterNow()
        {
            // Unregisters itself
            this.Unregister();
        }

        #endregion

        #region Constructor(s)

        /// <summary>
        /// The Subject's default constructor (i.e. all the values relating to cache expiration are defaulted to 1 minute).
        /// </summary>
        public Subject()
        {
            this.ExpirationType = Enums.enumExpirationType.Basic;
            this.ExpirationValue = 1;
            this.TimePrecision = Enums.enumTimePrecision.Minutes;
            this.ExpirationStart = DateTime.Now;

            this.NotifyObjectExpired += s_ExpiredUnregisterNow;
            this.MonitorExpiration();
        }

        /// <summary>
        /// The overloaded Subject constructor
        /// </summary>
        public Subject(Enums.enumExpirationType expirationType, int expirationValue, Enums.enumTimePrecision timePrecision)
        {
            this.ExpirationType = expirationType;
            this.ExpirationValue = expirationValue;
            this.TimePrecision = timePrecision;
            this.ExpirationStart = DateTime.Now;

            this.NotifyObjectExpired += s_ExpiredUnregisterNow;
            this.MonitorExpiration();
        }

        #endregion
    }
}

THE ISUBJECT INTERFACE

The ISubject interface merely defines the contract for creating Subject objects. Because the Subject class implements the ISubject interface, it’s obligated to include the ISubject properties and methods. These tenets keep all Subject objects consistent.


using System;
using System.Collections.Generic;


namespace ObserverClient.Interface
{
    /// <summary>
    /// This is the Subject interface
    /// </summary>
    public interface ISubject
    {
        #region Interface Operations

        object SubscriptionId { get; set; }
        object CacheId { get; set; }
        object CacheData { get; set; }
        int ExpirationValue { get; set; }
        Enums.enumTimePrecision TimePrecision { get; set; }
        DateTime ExpirationStart { get; set; }
        Enums.enumExpirationType ExpirationType { get; set; }
        void Register();
        void Unregister();

        #endregion
    }
}

THE CLIENT PROGRAM SOURCE CODE

It’s the responsibility of the client to call the register, unregister, and change methods on the Subject objects whenever applicable. The client can also control the lifetime of the Subject object it invokes by overriding the default properties that are set in the Subject’s default constructor. A developer can do this either by injecting the overridden property values through the Subject’s overloaded constructor, or by simply assigning new lifetime property values on the Subject object and then calling the Subject object’s Change() method.

There’s one final note here: the callback methods are defined by the client program. You’ll see evidence of this where I’ve provided lines like this in the source code below: subject1.NotifyChanged += “Your defined method here!”. This makes the pattern completely flexible, because multiple Subject objects can share the same notification callback method in the client program, or each instance can define its own.

Also, because the Subject object’s notification mechanism is generic, I don’t need to implement concrete Subject objects, and they can be defined on-the-fly. This means that I don’t need to redeploy the Observer assembly each time I add a new Subject, which eliminates the other negative that’s typically associated with the Observer design pattern.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Net.NetworkInformation;
using System.Collections;


namespace ObserverClient
{
    class Program
    {
        /// 
        /// The main entry point into the application
        /// 
        static void Main(string[] args)
        {
            // Register subject 1
            Subject subject1 = new Subject { SubscriptionId = "1", CacheId = "1", CacheData = "1" };
            // Tie the following event handler to any notifications received on this particular subject
            subject1.NotifyChanged += s_testCacheObserver1NotifyChanged_One;
            subject1.ExpirationType = Enums.enumExpirationType.Sliding;
            subject1.Register();

            // Register subject 2
            Subject subject2 = new Subject { SubscriptionId = "1", CacheId = "2", CacheData = "2" };
            // Tie the following event handler to any notifications received on this particular subject
            subject2.NotifyChanged += s_testCacheObserver1NotifyChanged_One;
            subject2.Register();

            // Register subject 3
            Subject subject3 = new Subject { SubscriptionId = "1", CacheId = "1", CacheData = "Boom!" };
            // Tie the following event handler to any notifications received on this particular subject
            subject3.NotifyChanged += s_testCacheObserver1NotifyChanged_Two;
            subject3.Change();

            // Unregister subject 2. Only subject 2's notification event should fire and the
            // notification should be specific about the operations taken on it
            subject2.Unregister();

            // Change subject 1's data.  Only subject 1's notification event should fire and the
            // notification should be specific about the operations taken on it
            subject1.CacheData = "Change Me";
            subject1.Change();

            // Hang out and let the system clean up after itself.  Events should only fire for those
            // objects that are self-unregistered.  The system is capable of maintaining itself.
            Console.ReadKey();
        }

        /// 
        /// Notifications are received from the Subject whenever changes have occurred.
        /// 
        static void s_testCacheObserver1NotifyChanged_One<T>(T notifyInfo, Enums.enumSubjectAction action)
        {
            var data = notifyInfo;
        }

        /// 
        /// Notifications are received from the Subject whenever changes have occurred.
        /// 
        static void s_testCacheObserver1NotifyChanged_Two<T>(T notifyInfo, Enums.enumSubjectAction action)
        {
            var data = notifyInfo;
        }
    }
}

THE LIFETIME MANAGER CLASS SOURCE CODE

Again, the LifetimeManager Class is my own creation. The goal of this class, which I’ve already mentioned a couple of times in this article, is to supply default properties that will allow the Subject to maintain its own lifetime without the need for the Unregister() method having to be called explicitly by the client program.

So, while I still believe it’s imperative that the client program explicitly call the Subject object’s Unregister() method, it’s comforting knowing there’s a backup plan in place if for some reason that doesn’t happen.

I’ve also highlighted all of the granular lifetime options in the source code. As you can see for yourself, the code currently accepts anything from milliseconds to years, and everything in between (light-years would have been really cool). I could have made it even more granular, but I can’t imagine anyone registering and unregistering an observed Subject in under a millisecond. Likewise, I can’t imagine anyone storing an observed Subject for as long as a year, even though I’ve created this implementation to observe Subject objects for as long as ±1.7 × 10 to the 308th power years (i.e. the range of a .NET double). That seems sufficient, don’t you think? 🙂


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Timers;
using System.Globalization;

namespace ObserverClient
{
    /// 
    /// The LifetimeManager class provides additional operations that the Subject class
    /// should be aware of but that fall outside its immediate scope of attention.
    /// 
    public class LifetimeManager
    {
        #region Member Variables

        private Timer timer = new Timer();

        #endregion

        #region Properties

        public object SubscriptionId { get; set; }
        public object CacheId { get; set; }
        public object CacheData { get; set; }
        public int ExpirationValue { get; set; }
        public Enums.enumExpirationType ExpirationType { get; set; }
        public Enums.enumTimePrecision TimePrecision { get; set; }
        public DateTime ExpirationStart { get; set; }

        #endregion

        #region Methods

        /// 
        /// Fires when the object's time to live has expired
        /// 
        void s_TimeHasExpired(object sender, ElapsedEventArgs e)
        {
            // Delete the Observer Cache and notify the caller (guarding against the case
            // where no handlers remain attached to the event)
            if (NotifyObjectExpired != null)
            {
                NotifyObjectExpired();
            }
        }

        /// 
        /// The delegate for the NotificationChangeEventHandler event
        /// 
        public delegate void NotifyObjectExpiredHandler();

        /// 
        /// Provides expiration monitoring capabilities for itself (self-maintained expiration)
        /// 
        internal void MonitorExpiration()
        {
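            // Note: each Subtract() call below yields a negative duration, which is
            // why Math.Abs() is applied when the timer interval is set further down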
            double milliseconds = 0;

            switch (this.TimePrecision)
            {
                case Enums.enumTimePrecision.Milliseconds:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddMilliseconds(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Seconds:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddSeconds(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Minutes:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddMinutes(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Hours:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddHours(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Days:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddDays(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Months:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddMonths(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Years:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddYears(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                default:
                    {
                        break;
                    }

            }

            // The Timer's default Interval is 100ms, so rather than branching on it, simply
            // rebuild the timer with the computed interval and (re)attach the expiration handler
            timer.Stop();
            timer.Dispose();
            timer = new Timer(Math.Abs(milliseconds));
            timer.Elapsed += new ElapsedEventHandler(s_TimeHasExpired);
            timer.Enabled = true;
        }

        #endregion
        
        #region Events

        /// 
        /// Handles the change notification event
        /// 
        public event NotifyObjectExpiredHandler NotifyObjectExpired;

        #endregion        
    }
}

WRAPPING THINGS UP

Well, that’s the Observer design pattern in a nutshell. I’ve even addressed the negatives associated with the design pattern. First, I overcame the “memory leak” issue by creating and tying a configurable LifetimeManager class to the Subject object, which makes sure the Unregister() method always gets called, regardless. Secondly, because I keep the Subject object generic and static, my design only requires one concrete Observer for all Subjects. I’ve also provided you with a Subscription-based model that will allow each Subscriber to observe one or more Subjects in a highly configurable manner. So, I believe that I’ve covered all the bases here…and hopefully then some.

Feel free to stand the example up for yourself. I think I’ve provided you with all the code you need, except for the Enumeration class, and I’ve sketched out a plausible version of that just below. Anyway, test drive it if you’d like and let me know what you think. I’m particularly interested in what you think about the inclusion of the LifetimeManager class. All comments and questions are always welcome.
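
Here’s a minimal sketch of what that Enums class might look like, inferred strictly from the way it’s referenced in the code above. The enumTimePrecision members and enumExpirationType.Sliding appear in the listings; Absolute and the enumSubjectAction member names are my assumptions:


namespace ObserverClient
{
    /// 
    /// A sketch of the enumerations referenced throughout the Observer example
    /// 
    public class Enums
    {
        // Determines whether the expiration window restarts on activity (Sliding) or not (Absolute)
        public enum enumExpirationType
        {
            Absolute,
            Sliding
        }

        // The units of time that the LifetimeManager understands
        public enum enumTimePrecision
        {
            Milliseconds,
            Seconds,
            Minutes,
            Hours,
            Days,
            Months,
            Years
        }

        // The action reported back through the notification callbacks (member names assumed)
        public enum enumSubjectAction
        {
            Registered,
            Unregistered,
            Changed,
            Expired
        }
    }
}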

Thanks for reading and keep on coding! 🙂

CrossProcessMemoryMaps

Author: Cole Francis, Architect

BACKGROUND PROBLEM

Many moons ago, I was approached by a client about the possibility of injecting a COM-wrapped .NET assembly between two configurable COM objects that communicated with one another. The basic idea was that a hardware peripheral would make a request through one COM object, and then that request would be intercepted by my new COM object, which would then prioritize the hardware object’s data in a cross-process, global singleton. From there, any request initiated by a peripheral would be reordered using the values persisted in my object.

Unfortunately, the solution became infinitely more complex when I learned that peripheral requests could originate from software running in different processes on the same machine. My first attempt involved building an out-of-process storage cache used to update and retrieve data as needed. Although it all seemed perfectly logical, it lacked the split-second processing speed that the client was looking for. Next, I tried reading and writing data to shared files on the local file system. This also worked but lacked split-second processing capabilities. As a result, I went back and forth to the drawing board before finally implementing a global singleton COM object that met the client’s needs (yes, I know it’s an anti-pattern…but it worked!).

Needless to say, the outcome was a rather bulky solution, as the intermediate layer of software I wrote had to play nicely with COM objects it was never intended to live between, adhere to specific IDispatch interfaces that weren’t very well documented, and react to behavior that at times seemed random. Although the effort was considered highly successful, development was also very tedious and came at a price…namely my sanity. Looking back on everything well over a decade later, and applying the knowledge that I possess today, I definitely would have implemented a much more elegant solution using an API stack that I’ll go over in just a minute.

As for now, let’s switch gears and discuss something that probably seems completely unrelated to the topic at hand, and that is memory functions. Yes, that’s right…I said memory functions. It’s my belief that when most developers think of storing objects and data in memory, two memory functions immediately come to mind, namely the Heap and Virtual memory (explained below). While these are great mechanisms for managing objects and data internal to a single process, neither of the aforementioned memory-based storage facilities can be leveraged across multiple processes without employing some sort of out-of-process mechanism to persist and share the data.

    1) Heap Memory Functions: Represent instances of a class or an array. This type of memory isn’t immediately returned when a method completes its scope of execution. Instead, Heap memory is reclaimed whenever the .NET garbage collector decides that the object is no longer needed.

    2) Virtual Memory Functions: Represent value types, also known as primitives, which reside on the Stack. Any memory allocated to virtual memory is returned immediately after the method’s scope of execution completes. Using the Stack is obviously more efficient than using the Heap, but the limited lifetime of value types makes them implausible candidates for sharing data between different classes…let alone between different processes.
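
To make the distinction concrete, here’s a quick illustrative snippet:


using System.Collections.Generic;

class MemoryIllustration
{
    void Demonstrate()
    {
        int counter = 42;            // Value type: lives on the stack and is reclaimed
                                     // the instant this method returns

        List<string> names = new List<string>();  // Reference type: the List instance lives
        names.Add("example");                     // on the heap until the GC reclaims it
    }
}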

BRING ON MEMORY MAPPING

While most developers focus predominantly on managing Heap and Virtual memory within their applications, there are also a few other memory options out there that sometimes go unrecognized, including “Local, Global Memory”, “CRT Memory”, “NT Virtual Memory”, and finally “Memory-Mapped Files”. Given our subject matter, this article will concentrate solely on “Memory-Mapped Files” (highlighted in orange in the pictorial below).

MemoryMappedFiles

To break it down into layman’s terms, a memory-mapped file allows you to reserve an address space and then commit physical storage to that region. In turn, the physical storage stems from a file that is already on the disk instead of the memory manager, which offers two notable advantages:

    1) Advantage #1 – Accessing data files on the hard drive without taking the I/O performance hit due to the buffering of the file’s content, making it ideal to use with large data files.

    2) Advantage #2 – Memory-mapped files provide the ability to share the same data with multiple processes running on the same machine.

Make no mistake about it, memory-mapped files are the most efficient way for multiple processes running on a single machine to communicate with one another! In fact, they are often used as process loaders in many commonly used operating systems today, including Microsoft Windows. Basically, whenever a process is started the operating system accesses a memory-mapped file in order to execute the application. Anyway, now that you know this little tidbit of information, you should also know that there are two types of memory-mapped files, including:

    1) Persisted memory-mapped files: After a process is done working on a piece of data, that mapped file can then be named and persisted to a file on the hard drive where it can be shared between multiple processes. These files are extremely suitable for working with large amounts of data.

    2) Non-persisted memory-mapped files: These are files that can be shared between two or more disparate threads operating within a single process. However, they don’t get persisted to the hard drive, which means that their data cannot be accessed by other processes.
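
Both flavors live in the System.IO.MemoryMappedFiles namespace. Here’s a minimal sketch of each; the file path and map names are purely illustrative:


using System.IO;
using System.IO.MemoryMappedFiles;

class MemoryMapSketch
{
    static void Main()
    {
        // Persisted: backed by a real file on disk, addressable by name across processes
        using (MemoryMappedFile persisted = MemoryMappedFile.CreateFromFile(
            @"C:\Temp\shared.dat", FileMode.OpenOrCreate, "SharedMap", 1024))
        using (MemoryMappedViewAccessor accessor = persisted.CreateViewAccessor())
        {
            accessor.Write(0, (byte)42);
        }

        // Non-persisted: lives only in memory and disappears with the last process that holds it
        using (MemoryMappedFile nonPersisted = MemoryMappedFile.CreateNew("ScratchMap", 1024))
        using (MemoryMappedViewAccessor accessor = nonPersisted.CreateViewAccessor())
        {
            accessor.Write(0, (byte)42);
        }
    }
}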

I’ve put together a working example that showcases the capabilities of persisted memory-mapped files for demonstration purposes. As a precursor, the example depicts mutually exclusive thoughts conjured up by the left and right halves of the brain. Each thought lives and gets processed in its own thread, which in turn gets routed to the cerebral cortex for thought processing. Inside the cerebral cortex, short-term and long-term thoughts get stored and retrieved in a memory-mapped file whose availability is managed by a mutex.

    A mutex is an object that allows multiple program threads to synchronize access to a shared resource, such as a file. A mutex can be created with a name, which makes it visible across processes and suitable for guarding persisted memory-mapped files, or it can be left unnamed, in which case it only coordinates threads within a single process, as with non-persisted memory-mapped files.
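
In code, the difference comes down to the name argument. Here’s a minimal sketch of the named variety, reusing the same “CCFMutex” name that appears later in this article:


using System.Threading;

class MutexSketch
{
    static void Main()
    {
        // 'false' means don't request initial ownership; the name makes the mutex
        // reachable from any process on the machine
        using (Mutex mutex = new Mutex(false, "CCFMutex"))
        {
            mutex.WaitOne();              // Block until the shared resource is free
            try
            {
                // ... touch the shared resource (e.g. the memory-mapped file) ...
            }
            finally
            {
                mutex.ReleaseMutex();     // Always hand the lock back
            }
        }
    }
}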

In addition to this, I’ve also assembled another application that runs as a completely different process on the same physical machine but is still able to read and write to the persisted memory-mapped file created by the first application. So, let’s get started!

APPLYING VIRTUAL MEMORY TO HUMAN MEMORY

In the code below, I’ve created a console application that references two objects in Heap memory: LeftBrain.Stimuli() and RightBrain.Stimuli(). I’ve accounted for asynchronous thought processes stemming from the left and right halves of the brain by employing a Parallel.Invoke() operation that kicks off two blocking sub-threads (i.e. one for the left half of the brain and the other for the right half). Once the sub-threads are kicked off, the primary thread blocks any further operations until the sub-threads complete their operations and return (or optionally error out). I’ve highlighted the code in orange where I’m performing the parallel operation:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using LeftBrain;
using RightBrain;

namespace LeftBrainRightBrain
{
    class Program
    {
        /// 
        /// The main entry point into the application
        /// 
        /// 
        static void Main(string[] args)
        {
            // Performs an asynchronous operation on both the left and right halves of the brain
            try
            {
                LeftBrain.Stimuli leftBrainStimuli = new LeftBrain.Stimuli();
                RightBrain.Stimuli rightBrainStimuli = new RightBrain.Stimuli();

                // Invoke a blocking, parallel process
                Parallel.Invoke(() =>
                {
                    leftBrainStimuli.Think();
                }, () =>
                {
                    rightBrainStimuli.Think();
                });

                Console.ReadKey();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}

At this point, each asynchronous sub-thread calls its respective Stimuli() class. It should be obvious that both the LeftBrain() and RightBrain() objects are fundamentally similar in nature, and therefore they share interfaces and inherit from the same base class object. The only significant differences are the types of thoughts they invoke and the extra second I added to the Sleep() invocation in the RightBrain() class, simply to show some variance in the manner in which the threads are able to process.
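
As an aside, the IStimuli interface itself isn’t included in the listings, so here’s a minimal sketch of it, inferred from the two Stimuli classes that implement it (the namespace is my assumption):


namespace CerebralCortex
{
    /// 
    /// The contract shared by the left-brain and right-brain Stimuli classes
    /// 
    public interface IStimuli
    {
        void Think();
        void ProcessThought(string threadName, string memory);
    }
}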

Nevertheless, each thought lives in its own isolated thread (making them sub-sub threads) that passes information along to the Cerebral Cortex for data committal and retrieval. Here is an example of the LeftBrain() class and its thoughts:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using CerebralCortex;

namespace LeftBrain 
{
    /// 
    /// Stimulations from the left half of the brain
    /// 
    /// 
    public class Stimuli : Memory, IStimuli
    {
        /// 
        /// Stores memories in a list
        /// 
        /// 
        private List<string> memories = new List<string>();

        /// 
        /// Recounts this hemisphere's list of memories on its own threads
        /// 
        /// 
        public void Think()
        {
            try
            {
                string threadName = string.Empty;
                int threadCounter = 0;

                // Add a list of left brain memories
                memories.Add("The area of a circle is π r squared.");
                memories.Add("The Law of Inertia is Isaac Newton's first law.");
                memories.Add("Richard Feynman was a physicist known for his theories on quantum mechanics.");
                memories.Add("y = mx + b is the equation of a Line, standard form and point-slope.");
                memories.Add("A hypotenuse is the longest side of a right triangle.");
                memories.Add("A chord represents a line segment within a circle that touches 2 points on the circle.");
                memories.Add("Max Planck's quantum mechanical theory suggests that each energy element is proportional to its frequency.");
                memories.Add("A geometry proof is a written account of the complete thought process that is used to reach a conclusion");
                memories.Add("Pythagorean theorem is a relation in Euclidean geometry among the three sides of a right triangle.");
                memories.Add("A proof of Descartes' Rule for polynomials of arbitrary degree can be carried out by induction.");

                // Recount your memories
                memories.ForEach(memory =>
                {
                    this.ProcessThought(string.Format("Thread: {0} (Left Brain)", threadCounter += 1), memory);
                });
            }
            catch (Exception)
            {
                
                throw;
            }
        }

        /// 
        /// Controls the thought process for this half of the brain
        /// 
        /// 
        public void ProcessThought(string threadName, string memory)
        {
            try
            {
                Thread.Sleep(3000);
                Thread monitorThread = null;

                // Spin up a new thread delegate to invoke the thought process
                monitorThread = new Thread(delegate()
                {
                    base.InvokeThoughtProcess(threadName, memory);
                });

                // Name the thread and start it
                monitorThread.Name = threadName;
                monitorThread.Start();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}

Likewise, shown below is an example of the RightBrain() class and its thoughts. Once again, the RightBrain() differs from the LeftBrain() mainly in terms of the types of thoughts that get invoked, with the left half of the brain’s thoughts being more cognitive in nature and the right half of the brain’s thoughts being more artistic:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using CerebralCortex;

namespace RightBrain
{
    /// 
    /// Stimulations from the right half of the brain
    /// 
    /// 
    public class Stimuli : Memory, IStimuli
    {
        /// 
        /// Stores memories in a list
        /// 
        /// 
        private List<string> memories = new List<string>();

        /// 
        /// Recounts this hemisphere's list of memories on its own threads
        /// 
        /// 
        public void Think()
        {
            try
            {
                string threadName = string.Empty;
                int threadCounter = 0;

                // Add a list of right brain memories
                memories.Add("I wonder if there's a Van Gough Documentary on Television?");
                memories.Add("Isn't the color blue simply radical.");
                memories.Add("Why don't you just drop everything and hitch a ride to California, dude?");
                memories.Add("Wouldn't it be cool to be a shark?");
                memories.Add("This World really is my oyster.  Now, if only I had some cocktail sauce...");
                memories.Add("Why did I stop finger painting?");
                memories.Add("Does anyone want to go to a BBQ?");
                memories.Add("Earth tones are the best.");
                memories.Add("Heavy metal bands rock!");
                memories.Add("I like really shiny thingys.  Oh, Look!  A shiny thingy...");

                // Recount your memories
                memories.ForEach(memory =>
                {
                    this.ProcessThought(string.Format("Thread: {0} (Right Brain)", threadCounter += 1), memory);
                });
            }
            catch (Exception)
            {
                
                throw;
            }
        }

        /// 
        /// Controls the thought process for this half of the brain
        /// 
        /// 
        public void ProcessThought(string threadName, string memory)
        {
            try
            {
                Thread.Sleep(4000);
                Thread monitorThread = null;

                // Spin up a new thread delegate to invoke the thought process
                monitorThread = new Thread(delegate()
                {
                    base.InvokeThoughtProcess(threadName, memory);
                });

                // Name the thread and start it
                monitorThread.Name = threadName;
                monitorThread.Start();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}

Regardless, the thread delegates spawned in the LeftBrain and RightBrain Stimuli() classes are responsible for contributing to short-term memory, as each thread commits its discrete memory item to a growing list of memories via the StoreThought() method. Each thread is also responsible for negotiating with the local mutex (highlighted below in orange) in order to access the critical sections of the code where thread-safety becomes absolutely imperative (highlighted below in wheat), as the individual threads add their messages to the global memory-mapped file (highlighted below in silver).

After each thread writes its memory to the memory-mapped file in the critical section of the code, it then releases the mutex (highlighted below in green) and allows the next sequential thread to lock the mutex and safely enter into the critical section of the code. This behavior repeats itself until all of the threads have exhausted their discrete units-of-work and safely rejoin the hive in their respective hemispheres of the brain. Once all processing completes, the block is then lifted by the primary thread and normal processing continues.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Runtime.InteropServices;
using System.Xml.Serialization;

namespace CerebralCortex
{
    ///    
    /// The common area of the brain that controls thought
    /// 
    /// 
    public class Memory
    {
        /// 
        /// The named mutex and the memory-mapped file handle (the mutex name makes it reachable from other processes)
        /// 
        /// 
        Mutex localMutex = new Mutex(false, "CCFMutex");
        MemoryMappedFile memoryMap = null;

        /// 
        /// Shared memory between the left and right halves of the brain
        /// 
        /// 
        static List<string> Thoughts = new List<string>();

        /// 
        /// Stores a thought in memory
        /// 
        /// 
        private bool StoreThought(string threadName, string thought)
        {
            bool retVal = false;

            try
            {
                Thoughts.Add(string.Concat(threadName, " says: ", thought));
                retVal = true;
            }
            catch (Exception)
            {
                
                throw;
            }

            return retVal;
        }

        /// 
        /// Retrieves a thought from memory
        /// 
        /// 
        private string RetrieveFromShortTermMemory()
        {
            try
            {
                // Returns the last stored thought (simulates short-term memory)
                return Thoughts.Last();
            }
            catch (Exception)
            {
                
                throw;
            }
        }

        /// 
        /// Invokes the thought process (uses a local mutex to control thread access inside the same process)
        /// 
        /// 
        public bool InvokeThoughtProcess(string threadName, string thought)
        {
            try
            {
                // *** CRITICAL SECTION REQUIRING THREAD-SAFE OPERATIONS ***
                {

                    // Causes the thread to wait until the previous thread releases
                    localMutex.WaitOne();

                    // Store the thought
                    StoreThought(threadName, thought);

                    // Create or open the cross-process capable memory map and write data to it
                    memoryMap = MemoryMappedFile.CreateOrOpen("CCFMemoryMap", 2000);
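                    // Note: the length prefix and payload are written at an arbitrary
                    // offset (54), which the reading process must mirror exactly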

                    byte[] Buffer = ASCIIEncoding.ASCII.GetBytes(string.Join("|", Thoughts));
                    MemoryMappedViewAccessor accessor = memoryMap.CreateViewAccessor();
                    accessor.Write(54, (ushort)Buffer.Length);
                    accessor.WriteArray(54 + 2, Buffer, 0, Buffer.Length);

                    // Conjures the thought back up
                    Console.WriteLine(RetrieveFromShortTermMemory());
                }
                return true;
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                // Releases the lock on the critical section of the code
                localMutex.ReleaseMutex();
            }
        }
    }
}

With the major portions of the code complete, I am now able to run the application and watch the threads add their memories to the list of memories in the memory-mapped file via the critical section of the cerebral cortex code (click on the pictorial below to view the results)…

MemoryMapOutput

So, to quickly wrap this article up, my final step is to create a separate console application that will run as a completely separate process on the same physical machine in order to demonstrate the cross-process capabilities of a memory-mapped file. In this case, I’ve appropriately named my console application “OmniscientProcess”.

This application will make a call to the RetrieveLongTermMemory() method in its same class in order to negotiate with the global mutex. Provided the negotiation process goes well, the “OmniscientProcess” will attempt to retrieve the data being preserved within the memory-mapped file that was created by our previous application. In theory, this example is equivalent to having some external entity (i.e. someone or something) tapping into your own personal thoughts.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using System.IO.MemoryMappedFiles;
using CerebralCortex;

namespace OmniscientProcess
{
    class Program
    {
        static Mutex globalMutex = new Mutex(false, "CCFMutex");
        static MemoryMappedFile memoryMap = null;

        static void Main(string[] args)
        {
            // Retrieve our memory-mapped data via the helper method below
            List<string> longTermMemories = RetrieveLongTermMemory();

            longTermMemories.ForEach(memory =>
            {
                Console.WriteLine(memory);
            });

            Console.WriteLine(string.Empty);
            Console.WriteLine("Press any key to end...");
            Console.ReadKey();
        }

        /// 
        /// Retrieves all thoughts from memory (uses a global mutex to control thread access from different processes)
        /// 
        /// 
        private static List<string> RetrieveLongTermMemory()
        {
            try
            {
                // Causes the thread to wait until the previous thread releases
                globalMutex.WaitOne();

                string delimitedString = string.Empty;

                memoryMap = MemoryMappedFile.OpenExisting("CCFMemoryMap", MemoryMappedFileRights.FullControl);

                MemoryMappedViewAccessor accessor = memoryMap.CreateViewAccessor();
                ushort Size = accessor.ReadUInt16(54);
                byte[] Buffer = new byte[Size];
                accessor.ReadArray(54 + 2, Buffer, 0, Buffer.Length);
                string delimitedThoughts = ASCIIEncoding.ASCII.GetString(Buffer);
                return delimitedThoughts.Split('|').ToList();
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                // Releases the lock on the critical section of the code
                globalMutex.ReleaseMutex();
            }
        }
    }
}

The aforementioned application has the ability to retrieve the state of the memory-mapped file from an external process at any point in time, except of course when the mutex is locked. It’s the responsibility of the mutex to exercise thread safety, regardless of the originating process, whenever a thread attempts to access the shared address space that comprises the memory-mapped file (see below):

Output 1 – Here’s a partial listing that was retrieved early in the process:
MemoryMapRetrieval1

Output 2 – Here’s the full listing that was retrieved after all of the threads committed their data:
OmniscientProcess

Finally, while memory-mapped files certainly aren’t a new concept (they’ve actually been around for decades), they are sometimes difficult to wrap your head around when there’s a sizable number of processes and threads flying around in the code. And, while my examples aren’t necessarily basic ones, hopefully they employ some rudimentary concepts that everyone is able to quickly and easily understand.

To recount my steps, I demonstrated calls to disparate objects getting kicked off asynchronously, which in turn conjure up a respectable number of threads per object. Each individual thread, operating in each asynchronously executing object, goes to work by negotiating with a common mutex in an attempt to commit its respective data values to the cross-process, memory-mapped file that’s accessible to applications running as entirely different processes on the same physical machine.

Thanks for reading and keep on coding! 🙂

ColeFrancisBizRulesEngine

Author: Cole Francis, Architect

BACKGROUND

Over the past couple of days, I’ve pondered the possibility of creating a dynamic business rules engine, meaning one whose rules and types are conjured up and reconciled at runtime. After reading different articles on the subject matter, my focus was drawn to the Microsoft Dynamic Language Runtime (DLR) and Lambda-based Expression Trees, which are built from the factory methods available in the System.Linq.Expressions namespace and can be used to construct, query and validate relationally-structured dynamic LINQ lists at runtime using the IQueryable interface. In a nutshell, the C# (or optionally VB) compiler allows you to construct a list of binary expressions at runtime, and then it compiles and assigns them to a Lambda Tree data structure. Once assigned, you can navigate an object through the tree in order to determine whether or not that object’s data meets your business rule criteria.
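
Before digging into the engine itself, here’s the core idea boiled down to a trivial, self-contained example: build the expression x => x > 5 at runtime and compile it into a callable delegate.


using System;
using System.Linq.Expressions;

class ExpressionTreeTaste
{
    static void Main()
    {
        // Build the expression 'x => x > 5' by hand...
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        BinaryExpression body = Expression.MakeBinary(ExpressionType.GreaterThan, x, Expression.Constant(5));

        // ...then compile it into an ordinary, callable delegate
        Func<int, bool> isGreaterThanFive = Expression.Lambda<Func<int, bool>>(body, x).Compile();

        Console.WriteLine(isGreaterThanFive(7));  // True
        Console.WriteLine(isGreaterThanFive(3));  // False
    }
}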

AFTER SOME RESEARCHING

After reviewing a number of code samples offered by developers who have graciously shared their work on the Internet, I simply couldn’t find one that met my needs. Most of them were either too strongly-typed, too tightly coupled or applicable only to the immediate problem at hand. Instead, what I sought was something a little more reusable and generic. So, in absence of a viable solution, I took a little bit of time out of my schedule to create a truly generic prototype of one. This will be the focus of the solution, below.

THE SOLUTION

To kick things off, I’ve created an Expression Trees compiler that accepts a generic type as an input parameter, along with a list of dynamic rules. Its job is to pre-compile the generic type and dynamic rules into a tree of dynamic, IQueryable Lambda expressions that can validate values in a generic list at runtime. As with all of my examples, I’ve hardcoded the data for my own convenience, but the rules and data can easily originate from a data backing store (e.g. a database, a file, memory, etc…). Regardless, shown in the code block below is the PrecompiledRules class, the heart of my Expression Trees Rules Engine; for your convenience, I’ve highlighted the line of code that performs the actual Expression Tree compilation in blue:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using ExpressionTreesRulesEngine.Entities;

namespace ExpressionTreesRulesEngine
{
    /// Author: Cole Francis, Architect
    /// The pre-compiled rules type
    /// 
    public class PrecompiledRules
    {
        ///
        /// A method used to precompile rules for a provided type
        /// 
        public static List<Func<T, bool>> CompileRule<T>(List<T> targetEntity, List<Rule> rules)
        {
            var compiledRules = new List<Func<T, bool>>();

            // Loop through the rules and compile them against the properties of the supplied shallow object 
            rules.ForEach(rule =>
            {
                var genericType = Expression.Parameter(typeof(T));
                var key = MemberExpression.Property(genericType, rule.ComparisonPredicate);
                var propertyType = typeof(T).GetProperty(rule.ComparisonPredicate).PropertyType;
                var value = Expression.Constant(Convert.ChangeType(rule.ComparisonValue, propertyType));
                var binaryExpression = Expression.MakeBinary(rule.ComparisonOperator, key, value);

                compiledRules.Add(Expression.Lambda<Func<T, bool>>(binaryExpression, genericType).Compile());
            });

            // Return the compiled rules to the caller
            return compiledRules;
        }
    }
}


As you can see from the code above, the only dependency in my Expression Trees Rules Engine is the Rule class itself. Naturally, I could fold the Rule class into the PrecompiledRules class and eliminate the Rule class altogether, thereby eliminating all dependencies. However, I won’t bother with that for the purposes of this demonstration; just know that the possibility exists. Nonetheless, shown below is the concrete Rule class:


using System;
using System.Linq.Expressions;

namespace ExpressionTreesRulesEngine.Entities
{
    ///
    /// The Rule type
    /// 
    public class Rule
    {
        ///
        /// Denotes the rule's predicate (e.g. Name); comparison operator (e.g. ExpressionType.GreaterThan); value (e.g. "Cole")
        /// 
        public string ComparisonPredicate { get; set; }
        public ExpressionType ComparisonOperator { get; set; }
        public string ComparisonValue { get; set; }

        /// 
        /// The Rule constructor, which hydrates the rule's predicate, operator, and value
        /// 
        public Rule(string comparisonPredicate, ExpressionType comparisonOperator, string comparisonValue)
        {
            ComparisonPredicate = comparisonPredicate;
            ComparisonOperator = comparisonOperator;
            ComparisonValue = comparisonValue;
        }
    }
}


Additionally, I’ve constructed a Car class as a test class that I’ll eventually hydrate with data and then inject into the compiled Expression Tree object for various rules validations:


using System;
using ExpressionTreesRulesEngine.Interfaces;

namespace ExpressionTreesRulesEngine.Entities
{
    public class Car : ICar
    {
        public int Year { get; set; }
        public string Make { get; set; }
        public string Model { get; set; }
    }
}
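
The ICar interface isn’t shown in the original listing, so here’s a minimal sketch that’s consistent with the Car class above and the rules being compiled against it:


using System;

namespace ExpressionTreesRulesEngine.Interfaces
{
    public interface ICar
    {
        int Year { get; set; }
        string Make { get; set; }
        string Model { get; set; }
    }
}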


Next, I’ve created a simple console application and added a project reference to the ExpressionTreesRulesEngine project. Afterwards, I’ve included the following lines of code (see the code blocks below, paying specific attention to the lines of code highlighted in orange) in the Main() in order to construct a list of dynamic rules. Again, these are rules that can be conjured up from a data backing store at runtime. Also, I’m using the ICar interface that I created in the code block above to compile my rules against.

As you can also see, I’m leveraging the out-of-the-box System.Linq.Expressions.ExpressionType enumeration to drive my conditional operators, which is in part what allows me to make the PrecompiledRules class so generic. Never fear, the ExpressionType enumeration contains a plethora of node operations and conditional operators (and more)…far more members than I’ll probably ever use in my lifetime.


List<Rule> rules = new List<Rule> 
{
     // Create some rules using LINQ.ExpressionTypes for the comparison operators
     new Rule ( "Year", ExpressionType.GreaterThan, "2012"),
     new Rule ( "Make", ExpressionType.Equal, "El Diablo"),
     new Rule ( "Model", ExpressionType.Equal, "Torch" )
};

var compiledMakeModelYearRules = PrecompiledRules.CompileRule(new List<ICar>(), rules);


Once I’ve compiled my rules, then I can simply tuck them away somewhere until I need them. For example, if I store my compiled rules in an out-of-process memory cache, then I can theoretically store them for the lifetime of the cache and invoke them whenever I need them to perform their magic. What’s more, because they’re compiled Lambda Expression Trees, they should be lightning quick against large lists of data. Other than pretending that the Car data isn’t hardcoded in the code example below, here’s how I would otherwise invoke the functionality of the rules engine:


// Create a list to house your test cars
List<ICar> cars = new List<ICar>();

// Create a car whose year and model fail the rules validations      
Car car1_Bad = new Car { 
    Year = 2011,
    Make = "El Diablo",
    Model = "Torche"
};
            
// Create a car that meets all the conditions of the rules validations
Car car2_Good = new Car
{
    Year = 2015,
    Make = "El Diablo",
    Model = "Torch"
};

// Add your cars to the list
cars.Add(car1_Bad);
cars.Add(car2_Good);

// Iterate through your list of cars to see which ones meet the rules vs. the ones that don't
cars.ForEach(car => {
    // A car passes only when every compiled rule returns true for it
    if (compiledMakeModelYearRules.All(rule => rule(car)))
    {
        Console.WriteLine(string.Concat("Car model: ", car.Model, " Passed the compiled rules engine check!"));
    }
    else
    {
        Console.WriteLine(string.Concat("Car model: ", car.Model, " Failed the compiled rules engine check!"));
    }
});

Console.WriteLine(string.Empty);
Console.WriteLine("Press any key to end...");
Console.ReadKey();


As expected, the end result is that car1_Bad fails the rule validations, because its year and model fall outside the range of acceptable values (e.g. 2011 < 2012 and 'Torche' != 'Torch'). In turn, car2_Good passes all of the rule validations as evidenced in the pic below:

TreeExpressionsResults

Well, that’s it. Granted, I can obviously improve on the abovementioned application by building better optics around the precise conditions that cause business rule failures, but that exceeds the intended scope of my article…at least for now. Still, a rough sketch of the idea might pair each compiled rule with the Rule object that produced it, since CompileRule() preserves the order of the rules it’s given:
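
// A rough sketch only: report which rule a car trips on. This assumes the same
// 'rules' and 'cars' lists from the snippets above, and leans on the fact that
// the compiled rules come back in the same order as the source rules.
cars.ForEach(car =>
{
    for (int i = 0; i < compiledMakeModelYearRules.Count; i++)
    {
        if (!compiledMakeModelYearRules[i](car))
        {
            Console.WriteLine(string.Concat("Car model: ", car.Model,
                " failed on rule: ", rules[i].ComparisonPredicate, " ",
                rules[i].ComparisonOperator, " ", rules[i].ComparisonValue));
        }
    }
});

The real takeaway is that I can shift from validating the property values on a list of cars to validating some other object or invoking some other rule set based upon dynamic conditions at runtime, and because we’re using compiled Lambda Expression Trees, rule validations should be quick. I really hope you enjoyed this article. Thanks for reading and keep on coding! 🙂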

TheServiceLocatorPattern

Author: Cole Francis, Architect

BACKGROUND:

In object-oriented programming (OOP), the Dependency Inversion Principle, or DIP, stipulates that the conventional dependency relationships established from the high-level, policy-setting modules to the low-level dependency modules are inverted (i.e. reversed), creating an obvious layer of indirection used to resolve component dependencies. Therefore, the high-level components should exist independently from a low-level component’s implementation and all its minutiae.

DIP was suggested by Robert C. Martin in a paper he wrote in 1996 titled, Object Oriented Design Quality Metrics: an analysis of dependencies. Following that, there was an article that appeared in the C++ Report in May 1996 entitled “The Dependency Inversion Principle” and the books Agile Software Development, Principles, Patterns, and Practices, and Agile Principles, Patterns, and Practices in C#.

The principle inverts the way most people may think about Object Oriented Design (OOD), and the Service Locator pattern is an excellent pattern to help demonstrate DIP principles, mainly because it facilitates a runtime provisioning of chosen low-level component implementations from its higher-level componentry.

The key tenets of the Service Locator Pattern are (in layman’s terms):

  • An interface is created, which identifies a set of callable methods that the concrete service class implements.
  • A concrete service class is created, which implements the interface. The concrete class is the component where all the real work gets done (e.g. calling a database, calling a WCF method, making an Http POST Ajax call, etc…).
  • A Service Locator class is created to loosely enlist the interface and its corresponding concrete service class. Once a client application requests to resolve an enlisted type, then it’s the Service Locator’s job to resolve the appropriate interface and return it to the calling application so that the service class’s method(s) can be called.

A visual representation of the Dependency Inversion Pattern:

To help explain the pattern a little more clearly, I’ve put together a working example of the Service Locator Pattern implementing mock services. One call simulates the retrieval of credit card authorization codes for fulfilled orders coming from a database. Once I retrieve the authorization codes, I then simulate settling them using a remote credit card service provider. Afterwards, I mimic updating our database records with the credit card settlement codes that I received back from the credit card service provider. I’ve intentionally kept the following example simple so that it’s easy to follow and explain, and I’ve also broken the code down into color-coded sections to further dissect the responsibility of each region of code:


namespace ServiceLocatorExample
{
    /// 
    /// A textbook implementation of the Service Locator Pattern.  
    /// 
    /// 
    public class ServiceLocator : IServiceLocator
    {
        #region Member Variables

        /// 
        /// An early loaded dictionary object acting as a memory map for each interface's concrete type
        /// 
        /// 
        private IDictionary<object, object> services;

        #endregion

        #region IServiceLocator Methods

        /// 
        /// Resolves the concrete service type using a passed in interface
        /// 
        /// 
        public T Resolve<T>()
        {
            try
            {
                return (T)services[typeof(T)];
            }
            catch (KeyNotFoundException)
            {
                throw new ApplicationException("The requested service is not registered");
            }
        }

        /// 
        /// Extends the service locator capabilities by allowing an interface and concrete type to 
        /// be passed in for registration (e.g. if you wrap the assembly and wish to extend the 
        /// service locator to new types added to the extended project)
        /// 
        /// 
        /// IDictionary(object, object), where the first parameterized object is the service interface 
        /// and the second parameterized object is the concrete service type
        /// 
        /// 
        public void Register<T>(object resolver)
        {
            try
            {
                this.services[typeof(T)] = resolver;
            }
            catch (Exception)
            {
                
                throw;
            }
        }

        #endregion

        #region Constructor(s)

        /// 
        /// The service locator constructor, which registers each service interface with its corresponding concrete type
        /// 
        /// 
        public ServiceLocator()
        {
            services = new Dictionary<object, object>();

            // Registers the services in the locator
            this.services.Add(typeof(IGetFulfilledOrderCCAuthCodes), new GetFulfilledOrderCCAuthCodes());
            this.services.Add(typeof(IGetFulfilledOrderCCSettlementCodes), new GetFulfilledOrderCCSettlementCodes());
            this.services.Add(typeof(IUpdateFulfilledOrderCCSettlementCodes), new UpdateFulfilledOrderCCSettlementCodes());
        }

        #endregion
    }
}


PRE-LOADING DECOUPLED RELATIONSHIPS TO A DICTIONARY OBJECT AT RUNTIME:

If you look at the all the sections I’ve highlighted in yellow, all I’m doing is declaring a Dictionary Object to act as a “registry placeholder” in the Member Variables Region, and then I’m preloading the interface and service class as a key/value pair to the service registry in the Constructor(s) Region of the code.

The key/value pairs that get stored in the Dictionary object loosely describe the concrete class and its corresponding interface that get registered as service objects (e.g. “IMyClass”, “MyClass”). An interface describes the methods and properties that are implemented in the concrete class, and the concrete class is the type where all the real work gets accomplished. In its most primitive form, the primary job of the ServiceLocator class is to store key/value pairs in a simple Dictionary object and either register or resolve those key/value pairs whenever it’s called upon to do so.

GETTING AND SETTING VALUES IN THE DICTIONARY OBJECT AT RUNTIME:

The section that’s color-coded in green denotes simple getter and setter-like methods that are publicly exposed to a consuming application, allowing that consuming application to either register new service objects in the Dictionary Object registry or resolve an existing service object in the Dictionary Object’s registry for use in a client application.

In fact, listed below is a textbook example of how a client application would resolve an existing service object in the Dictionary Object’s registry for use. In this example I’m resolving the IGetFulfilledOrderCCAuthCodes interface to its concrete type and then calling its GetFulfilledOrderCCAuthCodes() method using a console application that I quickly threw together…


/// 
/// Gets the fulfilled orders credit card authorization codes to settle on
/// 
/// 
private static List<string> GetFulfilledOrderCCAuthCodes()
{
    ServiceLocatorExample.ServiceLocator locator2 = new ServiceLocatorExample.ServiceLocator();
    IGetFulfilledOrderCCAuthCodes o = locator2.Resolve<IGetFulfilledOrderCCAuthCodes>();
    return o.GetFulfilledOrderCCAuthCodes();
}


CONGRATULATIONS! YOU’RE DONE:

Assuming that someone has already written logic to retrieve the fulfilled order authorization codes from the database, then your part is done! I really wish there was more to it than this so that I would look like some sort of architectural superhero, but alas there isn’t. Thus, if all you were looking to get out of this post is how to implement a textbook example of the Service Locator design pattern, then you don’t need to go any further. However, for those of you who want to know the advantages and disadvantages of the Service Locator design pattern then please keep reading:

THE ADVANTAGES:

  • The Service Locator Pattern follows many well-recognized architectural principles, like POLA, Hollywood, KISS, Dependency Inversion, YAGNI, and others…
  • Although the Service Locator Pattern is not considered to be a lightweight pattern, it’s still very simple to learn and is easily explainable to others, which means that your junior developer’s eyes won’t pop out of their heads when you attempt to explain the concept to them.
    • This truly is a framework pattern that you can teach a less knowledgeable person over a lunch hour and expect them to fully understand when you’re done, because the Service Locator framework wires everything up using a minimal number of resources (e.g. a Dictionary object containing key/value pairs and the ability to read from and (optionally) write to the Dictionary object).
  • The Service Locator design pattern allows you to quickly and efficiently create a loosely coupled runtime linker that’s ideal for separating concerns across the entire solution, as each type is concerned only about itself and doesn’t care what any of the other components do.
  • For you architectural purists out there, just be aware that using the Service Locator design pattern doesn’t preclude you from coupling it with good Dependency Injection and Factory Pattern frameworks. In fact, by doing so you have the potential of creating a lasting framework that meets the conditions of both SOLID and POLA design principles, as well as many others. Perhaps I’ll make this the topic of my next architectural discussion…

THE DISADVANTAGES:

  • The services (i.e. Key/value pairs that represent concrete classes) that get registered in the Service Locator object are often considered “black box” items to consumers of the Service Locator class, meaning service objects and their dependencies are completely abstracted from the applications that call it. This loosely coupled structure makes it extremely difficult to track down issues without development access to the source code for not only the Service Locator itself, but also all of the dependent service objects that get registered by it at runtime.
    • If you find yourself in this situation and you don’t have access to the Service Locator source code, then tools like Microsoft’s ILDasm or Red Gate’s .NET Reflector are occasionally able to shed some light on what’s going on inside the Service Locator assembly; however, if the code happens to be obfuscated, or if hidden dependencies are completely unresolvable, then deciphering issues can become an exercise in futility. For this very reason, the Service Locator Pattern violates SOLID principles, which is why some architectural gurus consider the Service Locator design to be more of a design anti-pattern.
  • Because the Service Locator’s dictionary is constructed using a key/value concept, all key names must be unique, meaning that it’s not very well-suited for distributed systems without adding more validation checks around the Register method’s dictionary insertion code (see the sketch after the code snippet below).
  • Testing the registered objects becomes a difficult process because we aren’t able to test each object in isolation, as registered service objects are considered “black box” items to both the Service Locator and its callers.
  • As I previously mentioned, objects are “late-bound”, which means that a caller is going to burn up some CPU cycles waiting for the Service Locator to find the service’s registry entry in the Dictionary object and return it, and then the caller still has to invoke that object late-bound. Sure, the time is minimal, but it’s still time that isn’t being spent on enhancing something more valuable…like the user experience.
  • There are also some concerns from a security standpoint. Keep in mind that my ServiceLocator class allows people to dynamically register their own objects at runtime (see the code snippet below). Who knows what types of malicious service objects might get registered? What if our ServiceLocator class only performed a minimal set of validations before executing a method on a service object? Now that’s an awfully scary thought.



/// 
/// Extends the service locator capabilities by allowing an interface and concrete type to 
/// be passed in for registration (e.g. if you wrap the assembly and wish to extend the 
/// service locator to new types added to the extended project)
/// 
///
public void Register<T>(object resolver)
{
    try
    {
       this.services[typeof(T)] = resolver;
    }
    catch (Exception)
    {
       throw;
    }
}
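
As promised above, here’s a minimal sketch of the kind of validation that could harden the Register method against duplicate (or null) registrations; the exception choices are my own:


public void Register<T>(object resolver)
{
    if (resolver == null)
    {
        throw new ArgumentNullException("resolver");
    }

    // Refuse duplicate registrations instead of silently overwriting them
    if (this.services.ContainsKey(typeof(T)))
    {
        throw new ApplicationException(
            string.Concat("The service ", typeof(T).Name, " is already registered"));
    }

    this.services[typeof(T)] = resolver;
}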



SOME FINAL TALKING POINTS:

As you can see, the Service Locator Design Pattern is very simple to explain and understand, but it certainly isn’t a “one size fits all” design pattern, and a fair degree of caution should be exercised before choosing it as a framework to build your solution on. You might even consider coupling it or substituting it with a more robust pattern, like one that offers the same capabilities but resolves its dependency objects at build-time instead of runtime (e.g. Dependency Injection).

Personally, I think this pattern’s “sweet spot” is for applications that: (1) Have a very narrow design scope; (2) Are built for in-house use and not third-party distribution; (3) Leverage service objects that are well-known and well-tested by a development team; (4) Aren’t designed for blazing speed. Thanks for reading and keep on coding! 🙂