SoftwarePatterns

Author: Cole Francis, Architect

Many of our mentors who leave us with beautiful masterpieces to gaze upon also tend to leave behind prescriptive guidance for us to glean from. But how we decide to implement their imparted knowledge is always left up to us. What I’ve found is that a fair number of people choose to ignore their contributions altogether, or rush to judgment when applying their prescriptive guidance, often proposing the wrong tenets to solve a problem. Both approaches tend to yield poor results…occasionally even catastrophic ones.

The realm of software design and development is very similar in this respect. Designing and developing architectures using proven design patterns, prescribed by industry mentors, almost always leads to a more cohesive understanding of a project’s composite breakdown across the entire project team when the patterns are applied properly, which results in a solution that’s both easier to build and maintain. Design patterns attempt to achieve this by eliminating most of the technical guesswork involved, as well as overcoming many technical complexities by separating concerns.

So, it’s a little bit shocking to me each time I come across a technical article, or even speak with someone face-to-face, that treats software design patterns like the plague. The excuses for not using them vary widely, but some of the more common ones include feeling boxed in when incorporating patterns into a design, watching projects slip into a state of over-design and design paralysis when they’re used, or experiencing unwanted project bloat from added layers of unnecessary abstraction. The bottom line is that these people intentionally dismiss the benefits of design patterns for…[Insert An Excuse Here].

While some of their concerns are warranted, I maintain that software design patterns are ultimately meant to ease the complexity of the Design and Build Phases, not to increase it. What’s more, if a certain design pattern doesn’t actually provide you with more flexibility and predictability, or if you’re really struggling to make that pattern work for you but it doesn’t quite fit, then my advice to you is that you’re probably doing something wrong. Either you’re trying to apply the wrong pattern to the situation or you don’t fully understand what it is that you’re trying to accomplish.

In addition to this, there’s a huge difference between laying down the initial architecture versus having to maintain that architecture or having to operate inside the architectural structure during the project’s immediate lifecycle and beyond. So, if you aren’t using software design patterns to your advantage, then you’re probably inflicting some degree of punishment not only on your current project team, but also on the individuals who will eventually be responsible for supporting and maintaining your final product.


Sure, we always joke around about how the final product is someone else’s problem (usually the ASM Team’s), but it’s only a joke because there’s a certain degree of truth to it, and we’ve all been on the receiving end of a poorly designed product at some point in our careers. Trust me when I say that it’s about as fun and productive as trying to stick your elbow in your eye.


I have this theory that if Architects were forced to support their own finished products, particularly longer-term, then they would incorporate patterns into their designs a lot more often than they ordinarily do. It’s been proven many times over that employing design patterns makes a solution more predictable, flexible, and in most cases more scalable. These are all very important factors to consider when it comes to a project’s “buildability”, extensibility, maintainability, and even the occasional refactoring.

Additionally, software development is often considered to be a highly subjective exercise; therefore, taking a little extra time up-front to pair the correct software patterns with the right contextual areas of the design will often result in an architecture that’s uniformly framed for your developers, which means that it becomes much more difficult for them to get things wrong during development.


I need to point out that this shouldn’t be construed as “boxing people in” or “adding unnecessary bloat to your project”; instead, it should be thought of as incorporating a malleable structure that is ubiquitously understood across your entire development team. I would argue that this pervasive level of understanding should make your project much easier to build and maintain over time. It’s really cool and productive when everyone speaks the same language and understands where each unit of functionality lives without having to say it over and over again.


Also, keep in mind that design patterns are language agnostic for the most part, so including them early in your Technical Design process shouldn’t necessarily influence the programming language(s) you eventually decide to use during your Build Phase. The benefit is that conversations can be more oriented around the solution’s functionality and less oriented around the specific tools that will eventually be used to accomplish the more detailed technical tasks involved.

Finally…who knows? After many years of practice and hard work, perhaps you will be so inclined as to contribute something back to your own community that’s one day revered as a masterpiece by others…

Thanks for reading and keep on coding! 🙂

CrossProcessMemoryMaps

Author: Cole Francis, Architect

BACKGROUND PROBLEM

Many moons ago, I was approached by a client about the possibility of injecting a COM-wrapped .NET assembly between two configurable COM objects that communicated with one another. The basic idea was that a hardware peripheral would make a request through one COM object, and then that request would be intercepted by my new COM object, which would then prioritize the hardware object’s data in a cross-process, global singleton. From there, any request initiated by a peripheral would be reordered using the values persisted in my object.

Unfortunately, the solution became infinitely more complex when I learned that peripheral requests could originate from software running in different processes on the same machine. My first attempt involved building an out-of-process storage cache used to update and retrieve data as needed. Although it all seemed perfectly logical, it lacked the split-second processing speed that the client was looking for. So, next I tried reading and writing data to shared files on the local file system. This also worked but lacked split-second processing capabilities. As a result, I ended up going back and forth to the drawing board before finally implementing a global singleton COM object that met the client’s needs (Yes, I know it’s an anti-pattern…but it worked!).

Needless to say, the outcome was a rather bulky solution, as the intermediate layer of software I wrote had to play nicely with COM objects that it was never intended to live between, adhere to specific IDispatch interfaces that weren’t very well documented, and react to functionality that at times seemed random. Although the effort was considered highly successful, development was also very tedious and came at a price…namely my sanity. Looking back on everything well over a decade later, and applying the knowledge that I possess today, I definitely would have implemented a much more elegant solution using an API stack that I’ll go over in just a minute.

As for now, let’s switch gears and discuss something that probably seems completely unrelated to the topic at hand, and that is memory functions. Yes, that’s right…I said memory functions. It’s my belief that when most developers think of storing objects and data in memory, two memory functions immediately come to mind, namely the Heap and Virtual memory (explained below). While these are great mechanisms for managing objects and data internal to a single process, neither of the aforementioned memory-based storage facilities can be leveraged across multiple processes without employing some sort of out-of-process mechanism to persist and share the data.

    1) Heap Memory Functions: Represent instances of a class or an array. This type of memory isn’t immediately reclaimed when a method completes its scope of execution. Instead, Heap memory is reclaimed whenever the .NET garbage collector decides that the object is no longer needed.

    2) Virtual Memory Functions: Represent value types, also known as primitives, which reside on the Stack. Any memory allocated on the Stack is immediately returned whenever the method’s scope of execution completes. Using the Stack is obviously more efficient than using the Heap, but the limited lifetime of value types makes them implausible candidates for sharing data between different classes…let alone sharing data between different processes (see the short sketch after this list).
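
To make the distinction concrete, here’s a minimal sketch of the two lifetimes (my own illustration, not from the original write-up):

using System.Collections.Generic;

public class MemoryScopes
{
    public List<string> Demonstrate()
    {
        int counter = 42;                  // a value type; its Stack storage is reclaimed
                                           // the moment this method returns
        var list = new List<string>();     // the List<string> instance lives on the Heap
        list.Add(counter.ToString());      // and survives the method's scope...
        return list;                       // ...until the garbage collector reclaims it
    }
}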

BRING ON MEMORY MAPPING

While most developers focus predominantly on managing Heap and Virtual memory within their applications, there are also a few other memory options out there that sometimes go unrecognized, including “Local, Global Memory”, “CRT Memory”, “NT Virtual Memory”, and finally “Memory-Mapped Files”. Due to the nature of our subject matter, this article will concentrate solely on “Memory-Mapped Files” (highlighted in the pictorial below).

[Image: MemoryMappedFiles]

To break it down into layman’s terms, a memory-mapped file allows you to reserve an address space and then commit physical storage to that region. In turn, the physical storage stems from a file that is already on the disk instead of the memory manager, which offers two notable advantages:

    1) Advantage #1 – You can access data files on the hard drive without incurring the I/O performance hit of buffering the file’s entire content, which makes memory-mapped files ideal for use with large data files.

    2) Advantage #2 – Memory-mapped files provide the ability to share the same data with multiple processes running on the same machine.

Make no mistake about it, memory-mapped files are the most efficient way for multiple processes running on a single machine to communicate with one another! In fact, they are often used as process loaders in many commonly used operating systems today, including Microsoft Windows. Basically, whenever a process is started the operating system accesses a memory-mapped file in order to execute the application. Anyway, now that you know this little tidbit of information, you should also know that there are two types of memory-mapped files, including:

    1) Persisted memory-mapped files: These are associated with an actual source file on the hard drive. When the last process finishes working with the map, the data is persisted to the source file on disk, which makes persisted maps extremely suitable for working with large amounts of data.

    2) Non-persisted memory-mapped files: These aren’t backed by a source file on disk, so their contents are gone once the last process releases them. However, a non-persisted map can still be given a name and shared between two or more processes running on the same machine, which makes it ideal for inter-process communication (IPC). A short sketch of both flavors follows this list.
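
Here’s a minimal sketch of both flavors (my own illustration; the file path and map names are arbitrary):

using System.IO;
using System.IO.MemoryMappedFiles;

class MemoryMapFlavors
{
    static void Main()
    {
        // Persisted: backed by an actual file on disk
        using (var persisted = MemoryMappedFile.CreateFromFile(
            @"C:\Temp\BigData.bin", FileMode.OpenOrCreate, "PersistedMap", 1024))
        using (var accessor = persisted.CreateViewAccessor())
        {
            accessor.Write(0, (byte)42);    // ultimately lands in the backing file
        }

        // Non-persisted: no backing file, but the name makes it reachable
        // from any other process on the same machine while it's alive
        using (var shared = MemoryMappedFile.CreateOrOpen("SharedMap", 1024))
        using (var accessor = shared.CreateViewAccessor())
        {
            accessor.Write(0, (byte)42);    // gone once the last handle closes
        }
    }
}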

I’ve put together a working example that showcases the capabilities of named memory-mapped files for demonstration purposes. As a precursor, the example depicts mutually exclusive thoughts conjured up by the left and right halves of the brain. Each thought lives and gets processed using its own thread, which in turn gets routed to the cerebral cortex for thought processing. Inside the cerebral cortex, short-term and long-term thoughts get stored and retrieved in a memory-mapped file whose availability is managed by a mutex.

    A mutex is an object that allows multiple program threads to synchronize their access to a shared resource, such as file access. A mutex created with a name is visible system-wide, so it can coordinate threads running in different processes; an unnamed mutex can only coordinate threads within the process that created it.
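
Here’s a minimal sketch of that distinction (my own illustration; the mutex name is arbitrary):

using System.Threading;

public class MutexFlavors
{
    // Named: visible to every process on the machine that opens "DemoMutex"
    static readonly Mutex CrossProcessMutex = new Mutex(false, "DemoMutex");

    // Unnamed: visible only to threads inside the process that created it
    static readonly Mutex InProcessMutex = new Mutex();

    public static void DoGuardedWork()
    {
        CrossProcessMutex.WaitOne();    // block until no other thread (or process) holds it
        try
        {
            // ...critical section...
        }
        finally
        {
            CrossProcessMutex.ReleaseMutex();
        }
    }
}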

In addition to this, I’ve also assembled another application that runs as a completely different process on the same physical machine but is still able to read and write to the named memory-mapped file created by the first application. So, let’s get started!

APPLYING VIRTUAL MEMORY TO HUMAN MEMORY

In the code below, I’ve created a console application that references two objects in Heap memory: TheLeftBrain.Stimuli() and TheRightBrain.Stimuli(). I’ve accounted for asynchronous thought processes stemming from the left and right halves of the brain by employing Parallel.Invoke(), which kicks off two blocking sub-threads (i.e. one for the left half of the brain and the other for the right half). Once the sub-threads are kicked off, the primary thread blocks any further operations until the sub-threads complete their operations and return (or optionally error out). The parallel invocation is visible in the Main() method below:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using LeftBrain;
using RightBrain;

namespace LeftBrainRightBrain
{
    class Program
    {
        /// <summary>
        /// The main entry point into the application
        /// </summary>
        static void Main(string[] args)
        {
            // Performs an asynchronous operation on both the left and right halves of the brain
            try
            {
                LeftBrain.Stimuli leftBrainStimuli = new LeftBrain.Stimuli();
                RightBrain.Stimuli rightBrainStimuli = new RightBrain.Stimuli();

                // Invoke a blocking, parallel process
                Parallel.Invoke(() =>
                {
                    leftBrainStimuli.Think();
                }, () =>
                {
                    rightBrainStimuli.Think();
                });

                Console.ReadKey();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}

At this point, each asynchronous sub-thread calls its respective Stimuli() class. It should be obvious that both the LeftBrain() and RightBrain() objects are fundamentally similar in nature, and therefore share interfaces and inherit from the same base class, with the only significant differences being the types of thoughts they invoke and the additional second I added to the Sleep() invocation in the RightBrain() class, simply to show some variance in the way the threads are able to process.
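
As an aside, the IStimuli interface itself never appears in this post; a minimal reconstruction consistent with both Stimuli classes might look like this (the namespace is my assumption):

namespace CerebralCortex
{
    /// <summary>
    /// The contract implemented by both hemispheres' Stimuli classes
    /// </summary>
    public interface IStimuli
    {
        void Think();
        void ProcessThought(string threadName, string memory);
    }
}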

Nevertheless, each thought lives in its own isolated thread (making them sub-sub threads) that passes information along to the Cerebral Cortex for data committal and retrieval. Here is an example of the LeftBrain() class and its thoughts:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using CerebralCortex;

namespace LeftBrain 
{
    /// <summary>
    /// Stimulations from the left half of the brain
    /// </summary>
    public class Stimuli : Memory, IStimuli
    {
        /// <summary>
        /// Stores memories in a list
        /// </summary>
        private List<string> memories = new List<string>();

        /// <summary>
        /// Generates this hemisphere's thoughts
        /// </summary>
        public void Think()
        {
            try
            {
                string threadName = string.Empty;
                int threadCounter = 0;

                // Add a list of left brain memories
                memories.Add("The area of a circle is π r squared.");
                memories.Add("The Law of Inertia is Isaac Newton's first law.");
                memories.Add("Richard Feynman was a physicist known for his theories on quantum mechanics.");
                memories.Add("y = mx + b is the equation of a Line, standard form and point-slope.");
                memories.Add("A hypotenuse is the longest side of a right triangle.");
                memories.Add("A chord represents a line segment within a circle that touches 2 points on the circle.");
                memories.Add("Max Planck's quantum mechanical theory suggests that each energy element is proportional to its frequency.");
                memories.Add("A geometry proof is a written account of the complete thought process that is used to reach a conclusion");
                memories.Add("Pythagorean theorem is a relation in Euclidean geometry among the three sides of a right triangle.");
                memories.Add("A proof of Descartes' Rule for polynomials of arbitrary degree can be carried out by induction.");

                // Recount your memories
                memories.ForEach(memory =>
                {
                    this.ProcessThought(string.Format("Thread: {0} (Left Brain)", threadCounter += 1), memory);
                });
            }
            catch (Exception)
            {
                
                throw;
            }
        }

        /// <summary>
        /// Controls the thought process for this half of the brain
        /// </summary>
        public void ProcessThought(string threadName, string memory)
        {
            try
            {
                Thread.Sleep(3000);
                Thread monitorThread = null;

                // Spin up a new thread delegate to invoke the thought process
                monitorThread = new Thread(delegate()
                {
                    base.InvokeThoughtProcess(threadName, memory);
                });

                // Name the thread and start it
                monitorThread.Name = threadName;
                monitorThread.Start();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}

Likewise, shown below is an example of the RightBrain() class and its thoughts. Once again, the RightBrain() differs from the LeftBrain() mainly in terms of the types of thoughts that get invoked, with the left half of the brain’s thoughts being more cognitive in nature and the right half of the brain’s thoughts being more artistic:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using CerebralCortex;

namespace RightBrain
{
    /// <summary>
    /// Stimulations from the right half of the brain
    /// </summary>
    public class Stimuli : Memory, IStimuli
    {
        /// <summary>
        /// Stores memories in a list
        /// </summary>
        private List<string> memories = new List<string>();

        /// <summary>
        /// Generates this hemisphere's thoughts
        /// </summary>
        public void Think()
        {
            try
            {
                string threadName = string.Empty;
                int threadCounter = 0;

                // Add a list of right brain memories
                memories.Add("I wonder if there's a Van Gough Documentary on Television?");
                memories.Add("Isn't the color blue simply radical.");
                memories.Add("Why don't you just drop everything and hitch a ride to California, dude?");
                memories.Add("Wouldn't it be cool to be a shark?");
                memories.Add("This World really is my oyster.  Now, if only I had some cocktail sauce...");
                memories.Add("Why did I stop finger painting?");
                memories.Add("Does anyone want to go to a BBQ?");
                memories.Add("Earth tones are the best.");
                memories.Add("Heavy metal bands rock!");
                memories.Add("I like really shiny thingys.  Oh, Look!  A shiny thingy...");

                // Recount your memories
                memories.ForEach(memory =>
                {
                    this.ProcessThought(string.Format("Thread: {0} (Right Brain)", threadCounter += 1), memory);
                });
            }
            catch (Exception)
            {
                
                throw;
            }
        }

        /// <summary>
        /// Controls the thought process for this half of the brain
        /// </summary>
        public void ProcessThought(string threadName, string memory)
        {
            try
            {
                Thread.Sleep(4000);
                Thread monitorThread = null;

                // Spin up a new thread delegate to invoke the thought process
                monitorThread = new Thread(delegate()
                {
                    base.InvokeThoughtProcess(threadName, memory);
                });

                // Name the thread and start it
                monitorThread.Name = threadName;
                monitorThread.Start();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}

Regardless, the thread delegates spawned in the LeftBrain and RightBrain Stimuli() classes are responsible for contributing to short-term memory, as each thread commits its discrete memory item to a growing list of memories via the StoreThought() method. Each thread is also responsible for negotiating with the named mutex in order to access the critical section of the code where thread-safety becomes absolutely imperative, as the individual threads add their messages to the cross-process memory-mapped file.

After each thread writes its memory to the memory-mapped file in the critical section of the code, it releases the mutex in the finally block and allows the next thread to acquire the mutex and safely enter the critical section. This behavior repeats itself until all of the threads have exhausted their discrete units-of-work and safely rejoin the hive in their respective hemispheres of the brain. Once all processing completes, the block is lifted by the primary thread and normal processing continues.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Runtime.InteropServices;
using System.Xml.Serialization;

namespace CerebralCortex
{
    /// <summary>
    /// The common area of the brain that controls thought
    /// </summary>
    public class Memory
    {
        /// <summary>
        /// The named (cross-process capable) mutex and the memory-mapped file handle
        /// </summary>
        Mutex localMutex = new Mutex(false, "CCFMutex");
        MemoryMappedFile memoryMap = null;

        /// <summary>
        /// Shared memory between the left and right halves of the brain
        /// </summary>
        static List<string> Thoughts = new List<string>();

        /// <summary>
        /// Stores a thought in memory
        /// </summary>
        private bool StoreThought(string threadName, string thought)
        {
            bool retVal = false;

            try
            {
                Thoughts.Add(string.Concat(threadName, " says: ", thought));
                retVal = true;
            }
            catch (Exception)
            {
                
                throw;
            }

            return retVal;
        }

        /// <summary>
        /// Retrieves a thought from memory
        /// </summary>
        private string RetrieveFromShortTermMemory()
        {
            try
            {
                // Returns the last stored thought (simulates short-term memory)
                return Thoughts.Last();
            }
            catch (Exception)
            {
                
                throw;
            }
        }

        /// <summary>
        /// Invokes the thought process (uses the named mutex to serialize access to the critical section)
        /// </summary>
        public bool InvokeThoughtProcess(string threadName, string thought)
        {
            try
            {
                // *** CRITICAL SECTION REQUIRING THREAD-SAFE OPERATIONS ***
                {

                    // Causes the thread to wait until the previous thread releases
                    localMutex.WaitOne();

                    // Store the thought
                    StoreThought(threadName, thought);

                    // Create or open the cross-process capable memory map and write data to it
                    memoryMap = MemoryMappedFile.CreateOrOpen("CCFMemoryMap", 2000);

                    // Serialize the thoughts and write them at an arbitrary offset (54) as a
                    // length-prefixed byte array
                    byte[] Buffer = ASCIIEncoding.ASCII.GetBytes(string.Join("|", Thoughts));
                    MemoryMappedViewAccessor accessor = memoryMap.CreateViewAccessor();
                    accessor.Write(54, (ushort)Buffer.Length);
                    accessor.WriteArray(54 + 2, Buffer, 0, Buffer.Length);

                    // Conjures the thought back up
                    Console.WriteLine(RetrieveFromShortTermMemory());
                }
                return true;
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                // Releases the lock on the critical section of the code
                localMutex.ReleaseMutex();
            }

        }
    }
}

With the major portions of the code complete, I am now able to run the application and watch the threads add their memories to the list of memories in the memory-mapped file via the critical section of the cerebral cortex code (the pictorial below shows the results)…

[Image: MemoryMapOutput]

So, to quickly wrap this article up, my final step is to create a separate console application that will run as a completely separate process on the same physical machine in order to demonstrate the cross-process capabilities of a memory-mapped file. In this case, I’ve appropriately named my console application “OmniscientProcess”.

This application calls the RetrieveLongTermMemory() method defined in its own Program class in order to negotiate with the global mutex. Provided the negotiation process goes well, the “OmniscientProcess” will attempt to retrieve the data being preserved within the memory-mapped file that was created by our previous application. In theory, this example is equivalent to having some external entity (i.e. someone or something) tapping into your own personal thoughts.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using System.IO.MemoryMappedFiles;
using CerebralCortex;

namespace OmniscientProcess
{
    class Program
    {
        static Mutex globalMutex = new Mutex(false, "CCFMutex");
        static MemoryMappedFile memoryMap = null;

        static void Main(string[] args)
        {
            // Retrieve our memory-mapped data using the static method defined below
            List<string> longTermMemories = RetrieveLongTermMemory();

            longTermMemories.ForEach(memory =>
            {
                Console.WriteLine(memory);
            });

            Console.WriteLine(string.Empty);
            Console.WriteLine("Press any key to end...");
            Console.ReadKey();
        }

        /// <summary>
        /// Retrieves all thoughts from memory (uses a global mutex to control thread access from different processes)
        /// </summary>
        private static List<string> RetrieveLongTermMemory()
        {
            try
            {
                // Causes the thread to wait until the previous thread releases
                globalMutex.WaitOne();

                string delimitedString = string.Empty;

                memoryMap = MemoryMappedFile.OpenExisting("CCFMemoryMap", MemoryMappedFileRights.FullControl);

                MemoryMappedViewAccessor accessor = memoryMap.CreateViewAccessor();
                ushort Size = accessor.ReadUInt16(54);
                byte[] Buffer = new byte[Size];
                accessor.ReadArray(54 + 2, Buffer, 0, Buffer.Length);
                string delimitedThoughts = ASCIIEncoding.ASCII.GetString(Buffer);
                return delimitedThoughts.Split('|').ToList();
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                // Releases the lock on the critical section of the code
                globalMutex.ReleaseMutex();
            }
        }
    }
}

The aforementioned application has the ability to retrieve the state of the memory-mapped file from an external process at any point in time, except of course when the mutex is locked. It’s the responsibility of the mutex to exercise thread safety, regardless of the originating process, whenever a thread attempts to access the shared address space that comprises the memory-mapped file (see below):

Output 1 – Here’s a partial listing that was retrieved early in the process:
[Image: MemoryMapRetrieval1]

Output 2 – Here’s the full listing that was retrieved after all of the threads committed their data:
[Image: OmniscientProcess]

Finally, while memory-mapped files certainly aren’t a new concept (they’ve actually been around for decades), they are sometimes difficult to wrap your head around when there’s a sizable number of processes and threads flying around in the code. And, while my examples aren’t necessarily basic ones, hopefully they employ some rudimentary concepts that everyone is able to quickly and easily understand.

To recount my steps, I demonstrated calls to disparate objects getting kicked off asynchronously, which in turn conjure up a respectable number of threads per object. Each individual thread, operating in each asynchronously executing object, goes to work by negotiating with a common mutex in an attempt to commit its respective data values to the cross-process, memory-mapped file that’s accessible to applications running as entirely different processes on the same physical machine.

Thanks for reading and keep on coding! 🙂

ColeFrancisBizRulesEngine

Author: Cole Francis, Architect

BACKGROUND

Over the past couple of days, I’ve pondered the possibility of creating a dynamic business rules engine, meaning one whose rules and types are conjured up and reconciled at runtime. After reading different articles on the subject matter, my focus settled on the Microsoft Dynamic Language Runtime (DLR) and Lambda-based Expression Trees, which are built from the factory methods available in the System.Linq.Expressions namespace and can be used to construct, query, and validate relationally-structured dynamic LINQ lists at runtime using the IQueryable interface. In a nutshell, the C# (or optionally VB) compiler allows you to construct a list of binary expressions at runtime, and then it compiles and assigns them to a Lambda expression tree data structure. Once assigned, you can run an object through the tree in order to determine whether or not that object’s data meets your business rule criteria.
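
Before digging into the engine itself, here’s the core idea in miniature (my own illustration): build the rule x => x > 5 at runtime, compile it, and invoke it.

using System;
using System.Linq.Expressions;

class ExpressionTreeBasics
{
    static void Main()
    {
        // Build the expression tree for: x => x > 5
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        BinaryExpression body = Expression.GreaterThan(x, Expression.Constant(5));

        // Compile the tree into a callable delegate
        Func<int, bool> rule = Expression.Lambda<Func<int, bool>>(body, x).Compile();

        Console.WriteLine(rule(7));     // True
        Console.WriteLine(rule(3));     // False
    }
}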

AFTER SOME RESEARCHING

After reviewing a number of code samples offered by developers who have graciously shared their work on the Internet, I simply couldn’t find one that met my needs. Most of them were either too strongly-typed, too tightly coupled, or applicable only to the immediate problem at hand. Instead, what I sought was something a little more reusable and generic. So, in the absence of a viable solution, I took a little bit of time out of my schedule to create a truly generic prototype of one, which is the focus of the solution below.

THE SOLUTION

To kick things off, I’ve created an Expression Tree compiler that accepts a generic type as an input parameter, along with a list of dynamic rules. Its job is to pre-compile the generic type and dynamic rules into a list of dynamic, IQueryable Lambda expressions that can validate values in a generic list at runtime. As with all of my examples, I’ve hardcoded the data for my own convenience, but the rules and data can easily originate from a data backing store (e.g. a database, a file, memory, etc…). Regardless, shown in the code block below is the PrecompiledRules class, the heart of my Expression Trees Rules Engine; the Expression.Lambda(…).Compile() call is the line that performs the actual Expression Tree compilation:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using ExpressionTreesRulesEngine.Entities;

namespace ExpressionTreesRulesEngine
{
    /// <summary>
    /// Author: Cole Francis, Architect
    /// The pre-compiled rules type
    /// </summary>
    public class PrecompiledRules
    {
        /// <summary>
        /// A method used to precompile rules for a provided type
        /// </summary>
        public static List<Func<T, bool>> CompileRule<T>(List<T> targetEntity, List<Rule> rules)
        {
            var compiledRules = new List<Func<T, bool>>();

            // Loop through the rules and compile them against the properties of the supplied shallow object 
            rules.ForEach(rule =>
            {
                var genericType = Expression.Parameter(typeof(T));
                var key = MemberExpression.Property(genericType, rule.ComparisonPredicate);
                var propertyType = typeof(T).GetProperty(rule.ComparisonPredicate).PropertyType;
                var value = Expression.Constant(Convert.ChangeType(rule.ComparisonValue, propertyType));
                var binaryExpression = Expression.MakeBinary(rule.ComparisonOperator, key, value);

                compiledRules.Add(Expression.Lambda<Func<T, bool>>(binaryExpression, genericType).Compile());
            });

            // Return the compiled rules to the caller
            return compiledRules;
        }
    }
}


As you can see from the code above, the only dependency in my Expression Trees Rules Engine is the Rule class itself. Naturally, I could merge the Rule class into the PrecompiledRules class and eliminate the Rule class altogether, thereby eliminating all dependencies. However, I won’t bother with this for the purpose of this demonstration; just know that the possibility exists. Nonetheless, shown below is the concrete Rule class:


using System;
using System.Linq.Expressions;

namespace ExpressionTreesRulesEngine.Entities
{
    /// <summary>
    /// The Rule type
    /// </summary>
    public class Rule
    {
        /// <summary>
        /// Denotes the rule's predicate (e.g. Name), comparison operator (e.g. ExpressionType.GreaterThan), and value (e.g. "Cole")
        /// </summary>
        public string ComparisonPredicate { get; set; }
        public ExpressionType ComparisonOperator { get; set; }
        public string ComparisonValue { get; set; }

        /// <summary>
        /// The Rule constructor
        /// </summary>
        public Rule(string comparisonPredicate, ExpressionType comparisonOperator, string comparisonValue)
        {
            ComparisonPredicate = comparisonPredicate;
            ComparisonOperator = comparisonOperator;
            ComparisonValue = comparisonValue;
        }
    }
}


Additionally, I’ve constructed a Car class as a test class that I’ll eventually hydrate with data and then inject into the compiled Expression Tree object for various rules validations:


using System;
using ExpressionTreesRulesEngine.Interfaces;

namespace ExpressionTreesRulesEngine.Entities
{
    public class Car : ICar
    {
        public int Year { get; set; }
        public string Make { get; set; }
        public string Model { get; set; }
    }
}
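
The ICar interface referenced above isn’t shown in the original post; a minimal reconstruction consistent with the Car class and the rules that follow might look like this:

namespace ExpressionTreesRulesEngine.Interfaces
{
    /// <summary>
    /// The car contract (a reconstruction; the original post doesn't show it)
    /// </summary>
    public interface ICar
    {
        int Year { get; set; }
        string Make { get; set; }
        string Model { get; set; }
    }
}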


Next, I’ve created a simple console application and added a project reference to the ExpressionTreesRulesEngine project. Afterwards, I’ve included the following lines of code in the Main() in order to construct a list of dynamic rules. Again, these are rules that can be conjured up from a data backing store at runtime. Also, I’m using the ICar interface shown above to compile my rules against.

As you can also see, I’m leveraging the out-of-box LINQ ExpressionType enumeration to drive my conditional operators, which is in part what allows me to make the PrecompiledRules class so generic. Never fear, the ExpressionType enumeration contains a plethora of node operations and conditional operators (and more)…far more members than I’ll probably ever use in my lifetime.


List<Rule> rules = new List<Rule> 
{
     // Create some rules using LINQ.ExpressionTypes for the comparison operators
     new Rule ( "Year", ExpressionType.GreaterThan, "2012"),
     new Rule ( "Make", ExpressionType.Equal, "El Diablo"),
     new Rule ( "Model", ExpressionType.Equal, "Torch" )
};

var compiledMakeModelYearRules = PrecompiledRules.CompileRule(new List<ICar>(), rules);


Once I’ve compiled my rules, I can simply tuck them away somewhere until I need them. For example, if I store my compiled rules in a memory cache, then I can theoretically store them for the lifetime of the cache and invoke them whenever I need them to perform their magic. Here’s a rough sketch of that caching idea (my own illustration, using the in-process System.Runtime.Caching.MemoryCache as a stand-in for whatever caching layer you prefer):
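
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public static class RuleCache
{
    private static readonly MemoryCache _cache = MemoryCache.Default;

    /// <summary>
    /// Returns the cached compiled rules for a key, compiling and caching them on first use
    /// </summary>
    public static List<Func<T, bool>> GetOrCompile<T>(string cacheKey, List<Rule> rules)
    {
        var compiled = _cache.Get(cacheKey) as List<Func<T, bool>>;

        if (compiled == null)
        {
            // Pay the compilation cost once, then reuse the delegates
            compiled = PrecompiledRules.CompileRule(new List<T>(), rules);
            _cache.Set(cacheKey, compiled, DateTimeOffset.Now.AddHours(1));
        }

        return compiled;
    }
}

What’s more, because they’re compiled Lambda Expression Trees, they should be lightning quick against large lists of data. Other than pretending that the Car data isn’t hardcoded in the code example below, here’s how I would otherwise invoke the functionality of the rules engine: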


// Create a list to house your test cars
List<ICar> cars = new List<ICar>();

// Create a car whose year and model fail the rules validations
Car car1_Bad = new Car { 
    Year = 2011,
    Make = "El Diablo",
    Model = "Torche"
};
            
// Create a car that meets all the conditions of the rules validations
Car car2_Good = new Car
{
    Year = 2015,
    Make = "El Diablo",
    Model = "Torch"
};

// Add your cars to the list
cars.Add(car1_Bad);
cars.Add(car2_Good);

// Iterate through your list of cars to see which ones pass all of the rules vs. the ones that don't
cars.ForEach(car => {
    if (compiledMakeModelYearRules.All(rule => rule(car)))
    {
        Console.WriteLine(string.Concat("Car model: ", car.Model, " Passed the compiled rules engine check!"));
    }
    else
    {
        Console.WriteLine(string.Concat("Car model: ", car.Model, " Failed the compiled rules engine check!"));
    }
});

Console.WriteLine(string.Empty);
Console.WriteLine("Press any key to end...");
Console.ReadKey();


As expected, the end result is that car1_Bad fails the rule validations, because its year and model fall outside the range of acceptable values (e.g. 2011 < 2012 and 'Torche' != 'Torch'). In turn, car2_Good passes all of the rule validations as evidenced in the pic below:

[Image: TreeExpressionsResults]

Well, that’s it. Granted, I can obviously improve on the abovementioned application by building better optics around the precise conditions that cause business rule failures, but that exceeds the intended scope of my article…at least for now. The real takeaway is that I can shift from validating the property values on a list of cars to validating some other object or invoking some other rule set based upon dynamic conditions at runtime, and because we’re using compiled Lambda Expression Trees, rule validations should be quick. I really hope you enjoyed this article. Thanks for reading and keep on coding! 🙂

CouplingDesignPatterns

Author: Cole Francis, Architect

BACKGROUND PROBLEM

My last editorial focused on building out a small application using a simple Service Locator Pattern, which exposed a number of cons whenever the pattern is used in isolation. As you might recall, one of the biggest problems that developers and architects have with this pattern is the way that service object dependencies are created and then inconspicuously hidden from their callers inside the service object register of the Service Locator Class. This behavior can result in a solution that successfully compiles at build-time but then inexplicably crashes at runtime, often offering no insight into what went wrong.

THE REAL PROBLEM

I think it’s fair to say that when some developers think about design patterns they don’t always consider the possibility of combining one design pattern with another in order to create a more extensible and robust framework. The reason opportunities like these get overlooked is that the potential for a pattern’s extensibility isn’t always obvious to its implementer.

For this very reason, I think it’s important to demonstrate how certain design patterns can be coupled together to create some very malleable application frameworks, and to prove my point I took the Service Locator Pattern I covered in my previous editorial and combined it with a very basic Factory Pattern.

Combining these two design patterns provides us with the ability to clearly separate the “what to do” from the “when to do it” concerns. It also offers build-time type checking and the ability to test each layer of the application using an object’s interface. Enough chit-chat. Let’s get on with the demo!

THE SOLUTION

Suppose we are a selective automobile manufacturer and offer two well-branded models:

    (1) A luxury model named “The Drifter”.
    (2) A sport luxury model named “The Showdown”.

To keep things simple, I’ve included very few parts for each model. So, while each model is equipped with its own engine and emblem, both models share the same high-end stereo package and high-performance tires. Shown below is a snapshot of the ServiceLocator class, which looks nearly identical to the one I included in my last editorial, except where the Constructor now registers the two vehicle models. I’ve kept the code examples consistent throughout in order to depict how the different classes and design patterns tie together:


using System;
using System.Collections.Generic;

namespace FactoryPatternExample
{
    public class ServiceLocator
    {
        #region Member Variables

        /// <summary>
        /// An early loaded dictionary object acting as a memory map for each interface's concrete type
        /// </summary>
        private IDictionary<object, object> services;

        #endregion

        #region IServiceLocator Methods

        /// <summary>
        /// Resolves the concrete service type using a passed in interface
        /// </summary>
        public T Resolve<T>()
        {
            try
            {
                return (T)services[typeof(T)];
            }
            catch (KeyNotFoundException)
            {
                throw new ApplicationException("The requested service is not registered");
            }
        }

        /// <summary>
        /// Extends the service locator capabilities by allowing an interface and concrete type to 
        /// be passed in for registration (e.g. if you wrap the assembly and wish to extend the 
        /// service locator to new types added to the extended project)
        /// </summary>
        public void Register<T>(object resolver)
        {
            try
            {
                this.services[typeof(T)] = resolver;
            }
            catch (Exception)
            {

                throw;
            }
        }

        #endregion

        #region Constructor(s)

        /// <summary>
        /// The service locator constructor, which resolves a supplied interface with its corresponding concrete type
        /// </summary>
        public ServiceLocator()
        {
            services = new Dictionary<object, object>();

            // Registers the service in the locator
            this.services.Add(typeof(IDrifter_LuxuryVehicle), new Drifter_LuxuryVehicle());
            this.services.Add(typeof(IShowdown_SportVehicle), new Showdown_SportVehicle());
        }

        #endregion
    }
}


Where the abovementioned code differs from a basic Service Locator implementation is in how we add our vehicles to the service register’s Dictionary object in the ServiceLocator() constructor. When this occurs, the following parts are registered using a Factory Pattern that gets invoked in the Constructor of the shared Vehicle() base class (shown after the two model classes below):


 
namespace FactoryPatternExample.Vehicles.Models
{
    public class Drifter_LuxuryVehicle : Vehicle, IDrifter_LuxuryVehicle
    {
        /// <summary>
        /// Factory Pattern for the luxury vehicle line of automobiles
        /// </summary>
        public override void CreateVehicle()
        {
            Parts.Add(new Parts.Emblems.SilverEmblem());
            Parts.Add(new Parts.Engines._350_LS());
            Parts.Add(new Parts.Stereos.HighEnd_X009());
            Parts.Add(new Parts.Tires.HighPerformancePlus());
        }
    }
}



 
namespace FactoryPatternExample.Vehicles.Models
{
    public class Showdown_SportVehicle : Vehicle, IShowdown_SportVehicle
    {
        /// <summary>
        /// Factory Pattern for the sport vehicle line of automobiles
        /// </summary>
        public override void CreateVehicle()
        {
            Parts.Add(new Parts.Emblems.GoldEmblem());
            Parts.Add(new Parts.Engines._777_ProSeries());
            Parts.Add(new Parts.Stereos.HighEnd_X009());
            Parts.Add(new Parts.Tires.HighPerformancePlus());
        }
    }
}


As you can see from the code above, both subtype classes inherit from the Vehicle() Base Class, but each subtype implements its own distinctive interface (e.g. IDrifter_LuxuryVehicle and IShowdown_SportVehicle). Forcing each subclass to implement its own unique interface is what ultimately allows a calling application to distinguish one vehicle type from another.
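
Neither the model interfaces nor IVehicle appear in the post. Judging from the calling code further below, which casts to IVehicle in order to reach Parts, a minimal reconstruction might look like this (the empty marker interfaces exist purely to give each model a distinct, resolvable identity):

using System.Collections.Generic;

namespace FactoryPatternExample.Vehicles
{
    /// <summary>
    /// The shared vehicle contract (a reconstruction; not shown in the original post)
    /// </summary>
    public interface IVehicle
    {
        List<Part> Parts { get; }
        void CreateVehicle();
    }

    // Marker interfaces that let callers resolve one model versus the other
    public interface IDrifter_LuxuryVehicle { }
    public interface IShowdown_SportVehicle { }
}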

Additionally, it’s the Vehicle() Base Class that calls the CreateVehicle() Method inside its Constructor. But, because the CreateVehicle() Method in the Vehicle() Base Class is overridden by each subtype, each subtype is given the ability to add its own set of exclusive parts to the list of parts in the base class. As you can see, I’ve hardcoded all of the parts in my example out of convenience, but they can originate just as easily from a data backing store.



using System.Collections.Generic;

namespace FactoryPatternExample.Vehicles
{
    public abstract class Vehicle : IVehicle
    {
        List<Part> _parts = new List<Part>();

        public Vehicle()
        {
            this.CreateVehicle();
        }

        public List<Part> Parts 
        { 
            get
            {
                return _parts;
            }
        }

        // Factory Method
        public abstract void CreateVehicle();
    }
}
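
One loose end: the Part type, with its Label and Description properties, is consumed throughout but never shown in the post. A minimal base class consistent with that usage might be (the namespace is my assumption):

namespace FactoryPatternExample.Vehicles
{
    // A reconstruction; only the usage of Part appears in the original post
    public abstract class Part
    {
        public string Label { get; set; }
        public string Description { get; set; }
    }
}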


As for the caller (e.g. a client application), it only needs to resolve an object using that object’s interface via the Service Locator in order to obtain access to its publicly exposed methods and properties (see below):


FactoryPatternExample.ServiceLocator serviceLocator = new FactoryPatternExample.ServiceLocator();
IDrifter_LuxuryVehicle luxuryVehicle = serviceLocator.Resolve<IDrifter_LuxuryVehicle>();

if (luxuryVehicle != null)
{
     foreach (Part part in ((IVehicle)(luxuryVehicle)).Parts)
     {
          Console.WriteLine(string.Concat("   - ", part.Label, ": ", part.Description));
     }
}

Here are the results after making a few minor tweaks to the UI code:

[Image: The Results]

What’s even more impressive is that the Service Locator now offers compile-time type checking and the ability to test each layer of the code in isolation thanks to the inclusion of the Factory Pattern:

[Image: BuildTimeError]

In summary, many of the faux pas experienced when implementing the Service Locator Design Pattern can be overcome by coupling it with a slick little Factory Design Pattern. What’s more, if we apply this same logic both equitably and ubiquitously across all design patterns, then it seems unfair to take a single design pattern and criticize its integrity and usefulness in complete sequestration, because it’s often the combination of multiple design patterns that make frameworks and applications more integral and robust. Thanks for reading and keep on coding! 🙂

TheServiceLocatorPattern

Author: Cole Francis, Architect

BACKGROUND:

In object-oriented programming (OOP), the Dependency Inversion Principle, or DIP, stipulates that the conventional dependency relationships established from the high-level policy-setting modules to the low-level dependency modules are inverted (i.e. reversed), creating an obvious layer of indirection used to resolve component dependencies. Therefore, the high-level components should exist independently of a low-level component’s implementation and all its minutiae.

DIP was suggested by Robert C. Martin in a paper he wrote in 1996 titled Object Oriented Design Quality Metrics: an analysis of dependencies. Following that, an article entitled “The Dependency Inversion Principle” appeared in the C++ Report in May 1996, and the principle resurfaced in the books Agile Software Development, Principles, Patterns, and Practices and Agile Principles, Patterns, and Practices in C#.

The principle inverts the way many people think about Object Oriented Design (OOD), and the Service Locator pattern is an excellent pattern for demonstrating DIP principles, mainly because it facilitates runtime provisioning of chosen low-level component implementations from its higher-level componentry.

The key tenets of the Service Locator Pattern are (in layman’s terms):

  • An interface is created, which identifies a set of callable methods that the concrete service class implements.
  • A concrete service class is created, which implements the interface. The concrete class is the component where all the real work gets done (e.g. calling a database, calling a WCF method, making an Http POST Ajax call, etc…).
  • A Service Locator class is created to loosely enlist the interface and its corresponding concrete service class. Once a client application requests to resolve an enlisted type, then it’s the Service Locator’s job to resolve the appropriate interface and return it to the calling application so that the service class’s method(s) can be called.
[Image: A visual representation of the Dependency Inversion Pattern]

To help explain the pattern a little more clearly, I’ve put together a working example of the Service Locator Pattern implementing mock services. One call simulates the retrieval of credit card authorization codes for fulfilled orders coming from a database. Once I retrieve the authorization codes, I then simulate settling them using a remote credit card service provider. Afterwards, I mimic updating our database records with the credit card settlement codes that I received back from the credit card service provider. I’ve intentionally kept the following example simple so that it’s easy to follow and explain, and I’ve broken the code down into regions to further dissect the responsibility of each section of code:


using System;
using System.Collections.Generic;

namespace ServiceLocatorExample
{
    /// <summary>
    /// A textbook implementation of the Service Locator Pattern.
    /// </summary>
    public class ServiceLocator : IServiceLocator
    {
        #region Member Variables

        /// <summary>
        /// An early loaded dictionary object acting as a memory map for each interface's concrete type
        /// </summary>
        private IDictionary<object, object> services;

        #endregion

        #region IServiceLocator Methods

        /// <summary>
        /// Resolves the concrete service type using a passed in interface
        /// </summary>
        public T Resolve<T>()
        {
            try
            {
                return (T)services[typeof(T)];
            }
            catch (KeyNotFoundException)
            {
                throw new ApplicationException("The requested service is not registered");
            }
        }

        /// <summary>
        /// Extends the service locator capabilities by allowing an interface and concrete type to 
        /// be passed in for registration (e.g. if you wrap the assembly and wish to extend the 
        /// service locator to new types added to the extended project)
        /// </summary>
        /// <remarks>
        /// The backing store is an IDictionary(object, object), where the first parameterized object is the service interface 
        /// and the second parameterized object is the concrete service type
        /// </remarks>
        public void Register<T>(object resolver)
        {
            try
            {
                this.services[typeof(T)] = resolver;
            }
            catch (Exception)
            {
                
                throw;
            }
        }

        #endregion

        #region Constructor(s)

        /// <summary>
        /// The service locator constructor, which resolves a supplied interface with its corresponding concrete type
        /// </summary>
        public ServiceLocator()
        {
            services = new Dictionary<object, object>();

            // Registers the service in the locator
            this.services.Add(typeof(IGetFulfilledOrderCCAuthCodes), new GetFulfilledOrderCCAuthCodes());
            this.services.Add(typeof(IGetFulfilledOrderCCSettlementCodes), new GetFulfilledOrderCCSettlementCodes());
            this.services.Add(typeof(IUpdateFulfilledOrderCCSettlementCodes), new UpdateFulfilledOrderCCSettlementCodes());
        }

        #endregion
    }
}


PRE-LOADING DECOUPLED RELATIONSHIPS TO A DICTIONARY OBJECT AT RUNTIME:

If you look at the Member Variables and Constructor(s) regions above, all I’m doing is declaring a Dictionary object to act as a “registry placeholder”, and then preloading each interface and its service class as a key/value pair into the service registry in the Constructor.

The key/value pairs that get stored in the Dictionary object loosely describe the concrete class and its corresponding interface that get registered as service objects (e.g. “IMyClass”, “MyClass”). An interface describes the methods and properties that are implemented in the concrete class, and the concrete class is the type where all the real work gets accomplished. In its most primitive form, the primary job of the ServiceLocator class is to store key/value pairs in a simple Dictionary object and either register or resolve those key/value pairs whenever it’s called upon to do so.
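
For instance, here’s how a consuming application might register and resolve its own service object at runtime (IEmailNotifier and EmailNotifier are hypothetical names used purely for illustration):

// Hypothetical interface/concrete pair; any service types would do
ServiceLocatorExample.ServiceLocator locator = new ServiceLocatorExample.ServiceLocator();
locator.Register<IEmailNotifier>(new EmailNotifier());

// Later, any caller can resolve the concrete type through its interface
IEmailNotifier notifier = locator.Resolve<IEmailNotifier>();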

GETTING AND SETTING VALUES IN THE DICTIONARY OBJECT AT RUNTIME:

The Resolve() and Register() methods in the IServiceLocator Methods region denote simple getter and setter-like methods that are publicly exposed to a consuming application, allowing that consuming application to either register new service objects in the Dictionary object registry or resolve an existing service object in the Dictionary object’s registry for use in a client application.

In fact, listed below is a textbook example of how a client application would resolve an existing service object in the Dictionary Object’s registry for use. In this example I’m resolving the IGetFulfilledOrderCCAuthCodes interface to its concrete type and then calling its GetFulfilledOrderCCAuthCodes() method using a console application that I quickly threw together…


/// <summary>
/// Gets the fulfilled orders credit card authorization codes to settle on
/// </summary>
private static List<string> GetFulfilledOrderCCAuthCodes()
{
    ServiceLocatorExample.ServiceLocator locator2 = new ServiceLocatorExample.ServiceLocator();
    IGetFulfilledOrderCCAuthCodes o = locator2.Resolve<IGetFulfilledOrderCCAuthCodes>();
    return o.GetFulfilledOrderCCAuthCodes();
}


CONGRATULATIONS! YOU’RE DONE:

Assuming that someone has already written logic to retrieve the fulfilled order authorization codes from the database, then your part is done! I really wish there was more to it than this so that I would look like some sort of architectural superhero, but alas there isn’t. Thus, if all you were looking to get out of this post is how to implement a textbook example of the Service Locator design pattern, then you don’t need to go any further. However, for those of you who want to know the advantages and disadvantages of the Service Locator design pattern then please keep reading:

THE ADVANTAGES:

  • The Service Locator Pattern follows many well-recognized architectural principles, like POLA, Hollywood, KISS, Dependency Inversion, YAGNI, and others…
  • Although the Service Locator Pattern is not considered to be a lightweight pattern, it’s still very simple to learn and is easily explainable to others, which means that your junior developer’s eyes won’t pop out of their heads when you attempt to explain the concept to them.
    • This truly is a framework pattern that you can teach a less knowledgeable person over a lunch hour and expect them to fully understand when you’re done, because the Service Locator framework wires everything up using a minimal number of resources (e.g. a Dictionary object containing key/value pairs and the ability to read from and (optionally) write to the Dictionary object).
  • The Service Locator design pattern allows you to quickly and efficiently create a loosely coupled runtime linker that’s ideal for separating concerns across the entire solution, as each type is concerned only about itself and doesn’t care what any of the other components do.
  • For you architectural purists out there, just be aware that using the Service Locator design pattern doesn’t preclude you from coupling it with good Dependency Injection and Factory Pattern frameworks. In fact, by doing so you have the potential of creating a lasting framework that meets the conditions of both SOLID and POLA design principles, as well as many others. Perhaps I’ll make this the topic of my next architectural discussion…

THE DISADVANTAGES:

  • The services (i.e. Key/value pairs that represent concrete classes) that get registered in the Service Locator object are often considered “black box” items to consumers of the Service Locator class, meaning service objects and their dependencies are completely abstracted from the applications that call them. This loosely coupled structure makes it extremely difficult to track down issues without development access to the source code for not only the Service Locator itself, but also all of the dependent service objects that get registered by it at runtime.
    • If you find yourself in this situation and you don’t have access to the Service Locator source code, then tools like Microsoft ILDasm or Red Gate’s .NET Reflector are occasionally able to shed some light on what’s going on inside the Service Locator assembly; however, if the code happens to be obfuscated, or if hidden dependencies are completely unresolvable, then deciphering issues can become an exercise in futility. Because the pattern hides a class’s dependencies in this way, some architectural gurus consider the Service Locator design to be more of a design anti-pattern.
  • Because the Service Locator’s dictionary is constructed using a key/value concept, all key names must be unique, meaning that it’s not very well-suited for distributed systems without adding more validation checks around the Register method’s dictionary insertion code (a sketch of such a guard appears after the snippet below).
  • Testing the registered objects becomes a difficult process because we aren’t able to test each object in isolation, as registered service objects are considered “black box” items to both the Service Locator and its callers.
  • As I previously mentioned, objects are “late-bound”, which means that a caller is going to burn up some CPU cycles waiting for the Service Locator to find the service’s registry entry in the Dictionary object and return it, and then the caller still has to invoke that object late-bound. Sure, the time is minimal, but it’s still time that isn’t being spent on enhancing something more valuable…like the user experience.
  • There are also some concerns from a security standpoint. Keep in mind that my ServiceLocator class allows people to dynamically register their own objects at runtime (see the code snippet below). Who knows what types of malicious service objects might get registered? What if our ServiceLocator class only performed a minimal set of validations before executing a method on a service object? Now that’s an awfully scary thought.



/// <summary>
/// Extends the service locator capabilities by allowing an interface and concrete type to 
/// be passed in for registration (e.g. if you wrap the assembly and wish to extend the 
/// service locator to new types added to the extended project)
/// </summary>
public void Register<T>(object resolver)
{
    try
    {
       this.services[typeof(T)] = resolver;
    }
    catch (Exception)
    {
       throw;
    }
}
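
And here’s a minimal sketch of the guarded registration idea raised in the disadvantages list (my own illustration, not part of the original ServiceLocator class):

/// <summary>
/// Registers a service only if nothing is already registered for the interface,
/// refusing to silently overwrite an existing entry
/// </summary>
public void RegisterGuarded<T>(object resolver)
{
    if (resolver == null)
    {
        throw new ArgumentNullException("resolver");
    }

    if (this.services.ContainsKey(typeof(T)))
    {
        throw new ApplicationException(string.Concat("A service is already registered for ", typeof(T).Name));
    }

    this.services[typeof(T)] = resolver;
}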



SOME FINAL TALKING POINTS:

As you can see, the Service Locator Design Pattern is very simple to explain and understand, but it certainly isn’t a “one size fits all” design pattern, and a fair degree of caution should be exercised before choosing it as a framework to build your solution on. You might even consider coupling it or substituting it with a more robust pattern, like one that offers the same capabilities but resolves its dependency objects at build-time instead of runtime (e.g. Dependency Injection).

Personally, I think this pattern’s “sweet spot” is for applications that: (1) Have a very narrow design scope; (2) Are built for in-house use and not third-party distribution; (3) Leverage service objects that are well-known and well-tested by a development team; (4) Aren’t designed for blazing speed. Thanks for reading and keep on coding! 🙂