Archive for the ‘C#’ Category

Implementing Retry Logic Using Polly

By:  Cole Francis, Senior Solution Architect, The PSC Group, LLC

Let’s just say that I had a situation where I was calling a third-party API to return valuable information, but that third-party service occasionally failed.  What I discovered through repeated attempts is that roughly 1 out of every 3-5 calls succeeded, but success was never guaranteed.  Therefore, I couldn’t simply wrap my core logic in a hard-coded iterative loop and expect it to succeed.  So, I was relegated to either writing some custom retry logic to handle errors and reattempt the call, or locating an existing third-party package that offers this sort of functionality, so as not to reinvent the proverbial wheel.

Fortunately, I stumbled across a .NET NuGet package called Polly.   After reading the abstract about the offering (see the Polly Project to read more), I discovered that Polly is a .NET-compatible library that provides transient-fault-handling logic by implementing policies that offer thread-safe resiliency through Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback logic, all in a way that is very easy to implement inside a .NET project codebase.  I should also point out that Polly targets .NET 4.0, .NET 4.5, and .NET Standard 1.1.

While Polly offers a plethora of capabilities, many of which I’ll keep in my back pocket for a rainy day, I was interested in just one: the Retry logic.  Here’s how I implemented it.  First, I included the Polly NuGet Package in my solution, like so:
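If you prefer the Package Manager Console over the NuGet dialog, the equivalent one-liner is:

  Install-Package Polly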


Next, I included the following lines of code when calling the suspect third-party Web API:

// Here is my wait-and-retry policy, with 250-millisecond wait intervals.
// It will retry the call up to 10 times.
Policy
  .Handle<Exception>()
  .WaitAndRetry(10, attempt => TimeSpan.FromMilliseconds(250))
  .Execute(() =>
  {
    // Your core logic should go here!
    // If an exception is thrown by the called object,
    // then Polly will wait 250ms and try again, up to 10 times in total.
    var response = CallToSuspectThirdPartyAPI(input);
  });

That’s all there really is to it, and I’m only scratching the surface when it comes to Polly’s full gamut of functionality.  Here’s a full list of Polly’s capabilities if you’re interested:

  1. Retry – I just described this one to you.
  2. Circuit Breaker – Fail fast under struggling conditions (you define the conditions and thresholds).
  3. Timeout – Wait until you hit a certain point, and then move on.
  4. Bulkhead Isolation – Provides fault isolation, so that certain failing threads don’t fault the entire process.
  5. Cache – Provides caching (temporary storage and retrieval) capabilities.
  6. Fallback – Anticipates a potential failure and lets you supply an alternative course of action should that failure ever be realized.
  7. PolicyWrap – Allows any (and all) of the previously mentioned policies to be combined, so that different programmatic strategies can be exercised when different faults occur (see the sketch below).
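To give you a taste of that last one, here’s a minimal sketch that wraps my Retry policy inside a Fallback policy using PolicyWrap.  Note that UseCachedResult() is a hypothetical stand-in for whatever alternative action makes sense in your application:

// Retry up to 3 times, pausing 250ms between attempts
var retryPolicy = Policy
  .Handle<Exception>()
  .WaitAndRetry(3, attempt => TimeSpan.FromMilliseconds(250));

// If the retries are exhausted, fall back to an alternative action
var fallbackPolicy = Policy
  .Handle<Exception>()
  .Fallback(() => UseCachedResult());  // UseCachedResult() is hypothetical

// The fallback wraps the retry, so the retry logic runs first
fallbackPolicy.Wrap(retryPolicy).Execute(() =>
{
  var response = CallToSuspectThirdPartyAPI(input);
});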

Thanks for reading, and keep on coding!  🙂


Creating a Custom Windows Service Using TopShelf

Author:  Cole Francis, Senior Solution Architect @ The PSC Group, LLC

Download the Source Code for This Example Here!

BACKGROUND

Traditionally speaking, creating custom Microsoft Windows Services can be a real pain.  The endless, mind-numbing repetitions of using the InstallUtil command-line utility and Ctrl+Alt+P attachments to debug the code from the Microsoft Visual Studio IDE are more than enough to discourage the average software developer.

While many companies are now shying away from writing Windows Services in an attempt to get better optics around job failures, custom Windows Services continue to exist in limited enterprise development situations where certain thresholds of caution are exercised.

But, if you’re ever blessed with the dubious honor of having to write a custom Windows Service, take note of the fact that there are much easier ways of approaching this task than there used to be, and in my opinion one of the easiest ways is to use a NuGet package called TopShelf.

Here are the top three benefits of using TopShelf to create a Windows Service:

  1. The first benefit of using TopShelf is that you get out from underneath the nuances of using the InstallUtil command to install and uninstall your Windows Service.
  2. Secondly, you create your Windows Service using a simple and familiar Console Application template type inside Microsoft Visual Studio.  So, not only is it extraordinarily easy to create, it’s also just as easy to debug and eventually transition into a fully-fledged Windows Service leveraging TopShelf. This involves a small series of steps that I’ll demonstrate for you shortly.
  3. Because you’ve taken the complexity and mystery out of creating, installing, and debugging your Windows Service, you can focus on writing better code.

So, now that I’ve explained some of the benefits of using TopShelf to create a Windows Service, let’s run through a quick step-by-step example of how to get one up and running.  Don’t be alarmed by the number of steps in my example below.  You’ll find that you’ll be able to work through them very quickly.


Step 1

The first step is to create a simple Console Application in Microsoft Visual Studio.  As you can see in the example below, I named mine TopShelfCWS, but you can name yours whatever you want.



Step 2

The second step is to open the NuGet Package Manager from the Microsoft Visual Studio IDE menu and then click on the Manage NuGet Packages for Solution option in the submenu as shown in the example below.



Step 3

After the NuGet Package Manager screen appears, click on the Browse option at the top of the dialog box, and then search on the term “TopShelf”.  A number of packages should appear in the list, and you’ll want to select the one shown in the example below.



Step 4

Next, select the version of the TopShelf product that aligns with your project or you can simply opt to use the default version that was automatically selected for you, which is what I have done in my working example.

Afterwards, click the Install button.  After the package successfully installs itself, you’ll see a green checkbox by the TopShelf icon, just like you see in the example below.



Step 5

Next, add a new Class to the TopShelfCWS project, and name it something that’s relevant to your solution.  As you can see in the example below, I named my class NameMeAnything.



Step 6

In your new class (e.g. NameMeAnything), add a reference to the TopShelf product, and then inherit from ServiceControl.
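If you’ve followed along, your class should start out looking something like the sketch below, using the TopShelfCWS project name from Step 1 (it won’t compile until you implement the interface in the next step):

using Topshelf;

namespace TopShelfCWS
{
    public class NameMeAnything : ServiceControl
    {
        // Start() and Stop() will be implemented in the next two steps
    }
}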



Step 7

Afterwards, right click on the words ServiceControl and implement its interface as shown in the example below.



Step 8

After implementing the interface, you’ll see two new methods show up in your class.  They’re called Start() and Stop(), and they’re the only two members that the TopShelf product relies upon to hook into the Windows Service Start and Stop events, as sketched below.
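Here’s roughly what the two implemented methods look like; TopShelf passes in a HostControl instance, and returning true from each method tells TopShelf that the state transition succeeded (a sketch, not the generated stubs verbatim):

public bool Start(HostControl hostControl)
{
    Console.WriteLine("The service has started...");
    return true;
}

public bool Stop(HostControl hostControl)
{
    Console.WriteLine("The service has stopped.");
    return true;
}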



Step 9

Next, we’ll head back to the Main event inside the Program class of the Console Application.  Inside the Main event, you’ll set the service properties of your impending Windows Service.  It will include properties like:

  • The ServiceName: Indicates the name used by the system to identify this service.
  • The DisplayName: Indicates the friendly name that identifies the service to the user.
  • The Description: Gets or sets the description for the service.

For more information, see the example below.
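Here’s a minimal sketch of what that Main event might look like, assuming the TopShelfCWS project and NameMeAnything class from the earlier steps (the service name, display name, and description values are just placeholders):

using Topshelf;

namespace TopShelfCWS
{
    class Program
    {
        static void Main(string[] args)
        {
            HostFactory.Run(configure =>
            {
                // Tell TopShelf which ServiceControl implementation to host
                configure.Service<NameMeAnything>();

                // The service properties described above
                configure.SetServiceName("TopShelfCWS");
                configure.SetDisplayName("TopShelf Custom Windows Service");
                configure.SetDescription("A sample Windows Service hosted by TopShelf.");
            });
        }
    }
}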



Step 10

Let’s go back to your custom class one more time (e.g. NameMeAnything.cs), and add the code in the following example to your class.  You’ll replace this with your own custom code at some point, but following my example will give you a good idea of how things behave.
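My class simply spins up a timer and writes a message to the console once a second.  Here’s a minimal sketch of that behavior; the _timer_Elapsed handler name matches what Step 20 references, but treat the body as illustrative:

using System;
using System.Timers;
using Topshelf;

namespace TopShelfCWS
{
    public class NameMeAnything : ServiceControl
    {
        private readonly Timer _timer = new Timer(1000) { AutoReset = true };

        public bool Start(HostControl hostControl)
        {
            Console.WriteLine("The service has started...");
            _timer.Elapsed += _timer_Elapsed;
            _timer.Start();
            return true;
        }

        public bool Stop(HostControl hostControl)
        {
            _timer.Stop();
            Console.WriteLine("The service has stopped.");
            return true;
        }

        // Fires once a second and writes a heartbeat message to the console
        private void _timer_Elapsed(object sender, ElapsedEventArgs e)
        {
            Console.WriteLine("It is {0} and all is well...", DateTime.Now.ToLongTimeString());
        }
    }
}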



Step 11

Make sure you include some Console writes to account for all the event behaviors that will occur when you run it.



Step 12

As I mentioned earlier, you can run the Console Application as just that: a Console Application.  Do this by pressing the F5 key.  If you’ve followed my example up to this point, then you should see output like the following.
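Given the sketch from Step 10, the console output would look something like this (your timestamps will differ, and TopShelf prints its own startup information ahead of these lines):

The service has started...
It is 3:42:01 PM and all is well...
It is 3:42:02 PM and all is well...
It is 3:42:03 PM and all is well...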



Step 13

Now that you’ve run your solution as a simple Console Application, let’s take the next step and install it as a Windows Service.

To do this, open a command prompt and navigate to the bin\Debug folder of your project.   *IMPORTANT:  Make sure you’re running the command prompt in Administrator mode* as shown in the example below.



Step 14

One of the more beautiful aspects of the TopShelf product is how it abstracts you away from all the .NET InstallUtil nonsense.  Installing your Console Application as a Windows Service is as easy as typing the name of your executable, followed by the word “install”.  See the example below.
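Assuming the TopShelfCWS project name from Step 1, the command looks like this:

C:\TopShelfCWS\bin\Debug> TopShelfCWS.exe install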



Step 15

Once it installs, you’ll see the output shown in the example below.



Step 16

What’s more, if you navigate to the Windows Services dialog box, you should now see your Console Application show up as a fully-operable Windows Service, as depicted below.



Step 17

You can now modify the properties of your Windows Service and start it.  Since all I’m doing in my example is executing a simple timer operation and logging out console messages, I just kept all the Windows Service properties defaults and started my service.  See the example below.



Step 18

If all goes well, you’ll see your Windows Service running in the Windows Services dialog box.



Step 19

So, now that your console application is running as a Windows Service, you lose the advantage of seeing your console messages written to the console. So, how do you debug it?

The answer is that you can use the more traditional means of attaching a Visual Studio process to your running Windows Service by pressing Ctrl+Alt+P in the Visual Studio IDE, and then selecting the name of your running Windows Service, as shown in the example below.



Step 20

Next, set a breakpoint on the _timer_Elapsed event handler.  If everything is running and hooked up properly, then your breakpoint should be hit every second, and you can press F10 to step through the event handler that’s responsible for writing the output to the console, as shown in the example below.



Step 21

Once you’re convinced that your Windows Service is behaving properly, you can stop it and test the TopShelf uninstallation process.

Again, TopShelf completely abstracts you away from the nuances of the InstallUtil utility, by allowing you to uninstall your Windows Service just as easily as you initially installed it.
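Again assuming the TopShelfCWS executable name, the uninstall command mirrors the install command:

C:\TopShelfCWS\bin\Debug> TopShelfCWS.exe uninstall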



Step 22

Finally, if you go back into the Windows Services dialog box and refresh your running Windows Services, then you should quickly see that your Windows Service has been successfully removed.



SUMMARY

In summary, I walked you through the easy steps of creating a custom Windows Service using the TopShelf NuGet package and a simple C# .NET Console application.

In the end, starting out with a TopShelf NuGet package and a simple Console application allows for a much easier and intuitive Windows Service development process, because it abstracts away all the complexities normally associated with traditional Windows Service development and debugging, resulting in more time to focus on writing better code. These are all good things!

Hi, I’m Cole Francis, a Senior Solution Architect for The PSC Group, LLC located in Schaumburg, IL.  We’re a Microsoft Partner that specializes in technology solutions that help our clients achieve their strategic business objectives.  PSC serves clients nationwide from Chicago and Kansas City.

Thanks for reading, and keep on coding!  🙂

Creating an Image from a PDF

By: Cole Francis, Senior Architect at The PSC Group, LLC.

Let’s say you’re working on a hypothetical project, and you run across a requirement for creating an image from the first page of a client-provided PDF document.  Let’s say the PDF document is named MyPDF.pdf, and your client wants you to produce a .PNG image output file named MyPDF.png.

Furthermore, the client states that you absolutely cannot read the contents of the PDF file, and you’ll only know if you’re successful if you can read the output that your code generates inside the image file.  So, that’s it, those are the only requirements.   What do you do?

SOLUTION

Thankfully, there are a number of solutions to address this problem, and I’m going to use a lesser-known .NET NuGet package to handle it.  Why?  Well, for one I want to demonstrate what an easy problem this is to solve.  So, I’ll start off by searching in the .NET NuGet Package Manager Library for something describing what I want to do.  Voila, I run across a lesser-known package named “Pdf2Png”.  I install it in less than 5 seconds.

Pdf2Png.png

So, is the Pdf2Png package thread-safe and server-side compliant?  I don’t know, but I’m not concerned about it because it wasn’t listed as a functional requirement.  So, this is something that will show up as an assumption in the Statement-of-Work document and will be quickly addressed if my assumption is incorrect.

Next, I create a very simple console application, although this could be just about any .NET file type, as long as it has rights to the file system.  The process to create the console application takes me another 10 seconds.

Next, I drop in the following three lines of code and execute the application, taking another 5 seconds.  This would actually be one line of code if I were passing in the source and target file locations and names.

 string pdf_filename = @"c:\cole\PdfToPng\MyPDF.pdf";
 string png_filename = @"c:\cole\PdfToPng\MyPDF.png";
 List<string> errors = cs_pdf_to_image.Pdf2Image.Convert(pdf_filename, png_filename);
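Since the Convert() call hands back a list of error messages, it’s worth dumping that list before trusting the output.  A quick, optional addition:

 // Surface any conversion problems reported by the library
 errors.ForEach(Console.WriteLine);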

Although my work isn’t overwhelmingly complex, the output is extraordinary for a mere 20 seconds’ worth of work!  Lo and behold, I have not one, but two files in my source folder.  One’s my source PDF document, and the other one’s the image that was produced from my console application using the Pdf2Png package.

TwoFiles.png

Finally, when I open the .PNG image file, it reveals the mysterious content that was originally inside the source PDF document:

SomeThingsArentHard.png

Before I end, I have to mention that the Pdf2Png component is not only simple, but it’s also somewhat sophisticated.  The library is a subset of Mark Redman’s work on PDFConvert using Ghostscript gsdll32.dll, and it automatically makes the Ghostscript gsdll32 accessible on a client machine that may not have it physically installed.

Thanks for reading, and keep on coding!  🙂

Cross-Process Memory-Mapped Files

Author: Cole Francis, Architect

BACKGROUND PROBLEM

Many moons ago, I was approached by a client about the possibility of injecting a COM-wrapped .NET assembly between two configurable COM objects that communicated with one another. The basic idea was that a hardware peripheral would make a request through one COM object, and that request would then be intercepted by my new COM object, which would prioritize the hardware object’s data in a cross-process, global singleton. From there, any request initiated by a peripheral would be reordered using the values persisted in my object.

Unfortunately, the solution became infinitely more complex when I learned that peripheral requests could originate from software running in different processes on the same machine. My first attempt involved building an out-of-process storage cache used to update and retrieve data as needed. Although it all seemed perfectly logical, it lacked the split-second processing speed that the client was looking for. Next, I tried reading and writing data to shared files on the local file system. This also worked but lacked split-second processing capabilities. As a result, I went back and forth to the drawing board before finally implementing a global singleton COM object that met the client’s needs (yes, I know it’s an anti-pattern…but it worked!).

Needless to say, the outcome was a rather bulky solution, as the intermediate layer of software I wrote had to play nicely with COM objects that it was never intended to live between, adhere to specific IDispatch interfaces that weren’t very well documented, and react to functionality that at times seemed random. Although the effort was considered highly successful, development was also very tedious and came at a price…namely, my sanity. Looking back on everything well over a decade later, and applying the knowledge that I possess today, I definitely would have implemented a much more elegant solution using an API stack that I’ll go over in just a minute.

As for now, let’s switch gears and discuss something that probably seems completely unrelated to the topic at hand, and that is memory functions. Yes, that’s right…I said memory functions. It’s my belief that when most developers think of storing objects and data in memory, two memory functions immediately come to mind, namely the Heap and Virtual memory (explained below). While these are great mechanisms for managing objects and data internal to a single process, neither of the aforementioned memory-based storage facilities can be leveraged across multiple processes without employing some sort of out-of-process mechanism to persist and share the data.

    1) Heap Memory Functions: Represent instances of a class or an array. This type of memory isn’t immediately returned when a method completes its scope of execution. Instead, Heap memory is reclaimed whenever the .NET garbage collector decides that the object is no longer needed.

    2) Virtual Memory Functions: Represent value types, also known as primitives, which reside in the Stack. Any memory allocated to virtual memory will be immediately returned whenever the method’s scope of execution completes. Using the Stack is obviously more efficient than using the Heap, but the limited lifetime of value types makes them implausible candidates for sharing data between different classes…let alone between different processes.

BRING ON MEMORY MAPPING

While most developers focus predominantly on managing Heap and Virtual memory within their applications, there are also a few other memory options out there that sometimes go unrecognized, including “Local, Global Memory”, “CRT Memory”, “NT Virtual Memory”, and finally “Memory-Mapped Files”. Given the nature of our subject matter, this article will concentrate solely on “Memory-Mapped Files” (see the pictorial below).

MemoryMappedFiles

To break it down into layman’s terms, a memory-mapped file allows you to reserve an address space and then commit physical storage to that region. In turn, the physical storage stems from a file that is already on the disk instead of the memory manager, which offers two notable advantages:

    1) Advantage #1 – Accessing data files on the hard drive without taking the I/O performance hit due to the buffering of the file’s content, making it ideal to use with large data files.

    2) Advantage #2 – Memory-mapped files provide the ability to share the same data with multiple processes running on the same machine.

Make no mistake about it, memory-mapped files are the most efficient way for multiple processes running on a single machine to communicate with one another! In fact, they are often used as process loaders in many commonly used operating systems today, including Microsoft Windows. Basically, whenever a process is started the operating system accesses a memory-mapped file in order to execute the application. Anyway, now that you know this little tidbit of information, you should also know that there are two types of memory-mapped files, including:

    1) Persisted memory-mapped files: After a process is done working on a piece of data, that mapped file can then be named and persisted to a file on the hard drive where it can be shared between multiple processes. These files are extremely suitable for working with large amounts of data.

    2) Non-persisted memory-mapped files: These are files that can be shared between two or more disparate threads operating within a single process. However, they don’t get persisted to the hard drive, which means that their data cannot be accessed by other processes.

I’ve put together a working example that showcases the capabilities of persisted memory-mapped files for demonstration purposes. As a precursor, the example depicts mutually exclusive thoughts conjured up by the left and right halves of the brain. Each thought lives and gets processed in its own thread, which in turn gets routed to the cerebral cortex for thought processing. Inside the cerebral cortex, short-term and long-term thoughts get stored and retrieved in a memory-mapped file whose availability is managed by a mutex.

    A mutex is an object that allows multiple program threads to synchronously share the same resource, such as file access. A mutex can be created with a name to leverage persisted memory-mapped files, or the mutex can be left unnamed to utilize non-persisted memory-mapped files.

In addition to this, I’ve also assembled another application that runs as a completely different process on the same physical machine but is still able to read and write to the persisted memory-mapped file created by the first application. So, let’s get started!

APPLYING VIRTUAL MEMORY TO HUMAN MEMORY

In the code below, I’ve created a console application that references two objects in Heap memory: TheLeftBrain.Stimuli() and TheRightBrain.Stimuli(). I’ve accounted for asynchronous thought processes stemming from the left and right halves of the brain by employing a blocking parallel operation that kicks off two sub-threads (i.e., one for the left half of the brain and the other for the right half). Once the sub-threads are kicked off, the primary thread blocks any further operations until the sub-threads complete their work and return (or optionally error out):


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using LeftBrain;
using RightBrain;

namespace LeftBrainRightBrain
{
    class Program
    {
        /// <summary>
        /// The main entry point into the application
        /// </summary>
        /// <param name="args"></param>
        static void Main(string[] args)
        {
            // Performs an asynchronous operation on both the left and right halves of the brain
            try
            {
                LeftBrain.Stimuli leftBrainStimuli = new LeftBrain.Stimuli();
                RightBrain.Stimuli rightBrainStimuli = new RightBrain.Stimuli();

                // Invoke a blocking, parallel process
                Parallel.Invoke(() =>
                {
                    leftBrainStimuli.Think();
                }, () =>
                {
                    rightBrainStimuli.Think();
                });

                Console.ReadKey();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}

At this point, each asynchronous sub-thread calls its respective Stimuli() class. It should be obvious that both the LeftBrain() and RightBrain() objects are fundamentally similar in nature, and they therefore share interfaces and inherit from the same base class, with the only significant differences being the types of thoughts they invoke and the additional second I added to the Sleep() invocation in the RightBrain() class, simply to show some variance in the way the threads are able to process.

Nevertheless, each thought lives in its own isolated thread (making them sub-sub threads) that passes information along to the Cerebral Cortex for data committal and retrieval. Here is an example of the LeftBrain() class and its thoughts:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using CerebralCortex;

namespace LeftBrain 
{
    /// <summary>
    /// Stimulations from the left half of the brain
    /// </summary>
    public class Stimuli : Memory, IStimuli
    {
        /// <summary>
        /// Stores memories in a list
        /// </summary>
        private List<string> memories = new List<string>();

        /// <summary>
        /// Generates this hemisphere's thoughts and processes each one
        /// </summary>
        public void Think()
        {
            try
            {
                string threadName = string.Empty;
                int threadCounter = 0;

                // Add a list of left brain memories
                memories.Add("The area of a circle is π r squared.");
                memories.Add("The Law of Inertia is Isaac Newton's first law.");
                memories.Add("Richard Feynman was a physicist known for his theories on quantum mechanics.");
                memories.Add("y = mx + b is the equation of a Line, standard form and point-slope.");
                memories.Add("A hypotenuse is the longest side of a right triangle.");
                memories.Add("A chord represents a line segment within a circle that touches 2 points on the circle.");
                memories.Add("Max Planck's quantum mechanical theory suggests that each energy element is proportional to its frequency.");
                memories.Add("A geometry proof is a written account of the complete thought process that is used to reach a conclusion");
                memories.Add("Pythagorean theorem is a relation in Euclidean geometry among the three sides of a right triangle.");
                memories.Add("A proof of Descartes' Rule for polynomials of arbitrary degree can be carried out by induction.");

                // Recount your memories
                memories.ForEach(memory =>
                {
                    this.ProcessThought(string.Format("Thread: {0} (Left Brain)", threadCounter += 1), memory);
                });
            }
            catch (Exception)
            {
                
                throw;
            }
        }

        /// <summary>
        /// Controls the thought process for this half of the brain
        /// </summary>
        public void ProcessThought(string threadName, string memory)
        {
            try
            {
                Thread.Sleep(3000);
                Thread monitorThread = null;

                // Spin up a new thread delegate to invoke the thought process
                monitorThread = new Thread(delegate()
                {
                    base.InvokeThoughtProcess(threadName, memory);
                });

                // Name the thread and start it
                monitorThread.Name = threadName;
                monitorThread.Start();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}

Likewise, shown below is an example of the RightBrain() class and its thoughts. Once again, the RightBrain() differs from the LeftBrain() mainly in terms of the types thoughts that get invoked, with the left half of the brain’s thoughts being more cognitive in nature and the right half of the brain’s thoughts being more artistic:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using CerebralCortex;

namespace RightBrain
{
    /// <summary>
    /// Stimulations from the right half of the brain
    /// </summary>
    public class Stimuli : Memory, IStimuli
    {
        /// <summary>
        /// Stores memories in a list
        /// </summary>
        private List<string> memories = new List<string>();

        /// <summary>
        /// Generates this hemisphere's thoughts and processes each one
        /// </summary>
        public void Think()
        {
            try
            {
                string threadName = string.Empty;
                int threadCounter = 0;

                // Add a list of right brain memories
                memories.Add("I wonder if there's a Van Gough Documentary on Television?");
                memories.Add("Isn't the color blue simply radical.");
                memories.Add("Why don't you just drop everything and hitch a ride to California, dude?");
                memories.Add("Wouldn't it be cool to be a shark?");
                memories.Add("This World really is my oyster.  Now, if only I had some cocktail sauce...");
                memories.Add("Why did I stop finger painting?");
                memories.Add("Does anyone want to go to a BBQ?");
                memories.Add("Earth tones are the best.");
                memories.Add("Heavy metal bands rock!");
                memories.Add("I like really shiny thingys.  Oh, Look!  A shiny thingy...");

                // Recount your memories
                memories.ForEach(memory =>
                {
                    this.ProcessThought(string.Format("Thread: {0} (Right Brain)", threadCounter += 1), memory);
                });
            }
            catch (Exception)
            {
                
                throw;
            }
        }

        /// <summary>
        /// Controls the thought process for this half of the brain
        /// </summary>
        public void ProcessThought(string threadName, string memory)
        {
            try
            {
                Thread.Sleep(4000);
                Thread monitorThread = null;

                // Spin up a new thread delegate to invoke the thought process
                monitorThread = new Thread(delegate()
                {
                    base.InvokeThoughtProcess(threadName, memory);
                });

                // Name the thread and start it
                monitorThread.Name = threadName;
                monitorThread.Start();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}

Regardless, the thread delegates spawned in the LeftBrain and RightBrain Stimuli() classes are responsible for contributing to short-term memory, as each thread commits its discrete memory item to the growing list of memories in the shared Thoughts collection. Each thread is also responsible for negotiating with the local mutex in order to access the critical section of the code where thread-safety becomes absolutely imperative, as the individual threads add their messages to the global memory-mapped file.

After each thread writes its memory to the memory-mapped file in the critical section of the code, it releases the mutex and allows the next thread to lock the mutex and safely enter the critical section. This behavior repeats itself until all of the threads have exhausted their discrete units of work and safely rejoin the hive in their respective hemispheres of the brain. Once all processing completes, the block is lifted by the primary thread and normal processing continues.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Runtime.InteropServices;
using System.Xml.Serialization;

namespace CerebralCortex
{
    /// <summary>
    /// The common area of the brain that controls thought
    /// </summary>
    public class Memory
    {
        /// <summary>
        /// The named mutex that guards the critical section, and the memory-mapped file handle
        /// </summary>
        Mutex localMutex = new Mutex(false, "CCFMutex");
        MemoryMappedFile memoryMap = null;

        /// <summary>
        /// Shared memory between the left and right halves of the brain
        /// </summary>
        static List<string> Thoughts = new List<string>();

        /// <summary>
        /// Stores a thought in memory
        /// </summary>
        private bool StoreThought(string threadName, string thought)
        {
            bool retVal = false;

            try
            {
                Thoughts.Add(string.Concat(threadName, " says: ", thought));
                retVal = true;
            }
            catch (Exception)
            {
                
                throw;
            }

            return retVal;
        }

        /// <summary>
        /// Retrieves a thought from memory
        /// </summary>
        private string RetrieveFromShortTermMemory()
        {
            try
            {
                // Returns the last stored thought (simulates short-term memory)
                return Thoughts.Last();
            }
            catch (Exception)
            {
                
                throw;
            }
        }

        /// <summary>
        /// Invokes the thought process (uses a local mutex to control thread access inside the same process)
        /// </summary>
        public bool InvokeThoughtProcess(string threadName, string thought)
        {
            try
            {
                // *** CRITICAL SECTION REQUIRING THREAD-SAFE OPERATIONS ***
                {

                    // Causes the thread to wait until the previous thread releases
                    localMutex.WaitOne();

                    // Store the thought
                    StoreThought(threadName, thought);

                    // Create or open the cross-process capable memory map and write data to it
                    memoryMap = MemoryMappedFile.CreateOrOpen("CCFMemoryMap", 2000);

                    byte[] Buffer = ASCIIEncoding.ASCII.GetBytes(string.Join("|", Thoughts));
                    MemoryMappedViewAccessor accessor = memoryMap.CreateViewAccessor();
                    accessor.Write(54, (ushort)Buffer.Length);
                    accessor.WriteArray(54 + 2, Buffer, 0, Buffer.Length);

                    // Conjures the thought back up
                    Console.WriteLine(RetrieveFromShortTermMemory());
                }
                return true;
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                // Releases the lock on the critical section of the code
                localMutex.ReleaseMutex();
            }

        }
    }
}

With the major portions of the code complete, I am now able to run the application and watch the threads add their memories to the list of memories in the memory-mapped file via the critical section of the cerebral cortex code (see the pictorial below for the results)…

MemoryMapOutput

So, to quickly wrap this article up, my final step is to create a separate console application that will run as a completely separate process on the same physical machine in order to demonstrate the cross-process capabilities of a memory-mapped file. In this case, I’ve appropriately named my console application “OmniscientProcess”.

This application will make a call to the RetrieveLongTermMemory() method in its same class in order to negotiate with the global mutex. Provided the negotiation process goes well, the “OmniscientProcess” will attempt to retrieve the data being preserved within the memory-mapped file that was created by our previous application. In theory, this example is equivalent to having some external entity (i.e. someone or something) tapping into your own personal thoughts.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using System.IO.MemoryMappedFiles;

namespace OmniscientProcess
{
    class Program
    {
        static Mutex globalMutex = new Mutex(false, "CCFMutex");
        static MemoryMappedFile memoryMap = null;

        static void Main(string[] args)
        {
            // Retrieve our memory-mapped data via the static method below
            List<string> longTermMemories = RetrieveLongTermMemory();

            longTermMemories.ForEach(memory =>
            {
                Console.WriteLine(memory);
            });

            Console.WriteLine(string.Empty);
            Console.WriteLine("Press any key to end...");
            Console.ReadKey();
        }

        /// <summary>
        /// Retrieves all thoughts from memory (uses a global mutex to control thread access from different processes)
        /// </summary>
        private static List<string> RetrieveLongTermMemory()
        {
            try
            {
                // Causes the thread to wait until the previous thread releases
                globalMutex.WaitOne();

                string delimitedString = string.Empty;

                memoryMap = MemoryMappedFile.OpenExisting("CCFMemoryMap", MemoryMappedFileRights.FullControl);

                MemoryMappedViewAccessor accessor = memoryMap.CreateViewAccessor();
                ushort Size = accessor.ReadUInt16(54);
                byte[] Buffer = new byte[Size];
                accessor.ReadArray(54 + 2, Buffer, 0, Buffer.Length);
                string delimitedThoughts = ASCIIEncoding.ASCII.GetString(Buffer);
                return delimitedThoughts.Split('|').ToList();
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                // Releases the lock on the critical section of the code
                globalMutex.ReleaseMutex();
            }
        }
    }
}

The aforementioned application has the ability to retrieve the state of the memory-mapped file from an external process at any point in time, except of course when the mutex is locked. It’s the responsibility of the mutex to exercise thread safety, regardless of the originating process, whenever a thread attempts to access the shared address space that comprises the memory-mapped file (see below):

Output 1 – Here’s a partial listing that was retrieved early in the process:
MemoryMapRetrieval1

Output 2 – Here’s the full listing that was retrieved after all of the threads committed their data:
OmniscientProcess

Finally, while memory-mapped files certainly aren’t a new concept (they’ve actually been around for decades), they are sometimes difficult to wrap your head around when there’s a sizable number of processes and threads flying around in the code. And, while my examples aren’t necessarily basic ones, hopefully they employ some rudimentary concepts that everyone is able to quickly and easily understand.

To recount my steps, I demonstrated calls to disparate objects getting kicked off asynchronously, which in turn conjure up a respectable number of threads per object. Each individual thread, operating in each asynchronously executing object, goes to work by negotiating with a common mutex in an attempt to commit its respective data values to the cross-process, memory-mapped file that’s accessible to applications running as entirely different processes on the same physical machine.

Thanks for reading and keep on coding! 🙂

A Dynamic Business Rules Engine

Author: Cole Francis, Architect

BACKGROUND

Over the past couple of days, I’ve pondered the possibility of creating a dynamic business rules engine, meaning one whose rules and types are conjured up and reconciled at runtime. After reading different articles on the subject matter, my focus was drawn to the Microsoft Dynamic Language Runtime (DLR) and Lambda-based Expression Trees, which represent the factory methods available in the System.Linq.Expressions namespace and can be used to construct, query and validate relationally-structured dynamic LINQ lists at runtime using the IQueryable interface. In a nutshell, the C# (or optionally VB) compiler allows you to construct a list of binary expressions at runtime, and then it compiles and assigns them to a Lambda tree data structure. Once assigned, you can run an object through the tree in order to determine whether or not that object’s data meets your business rule criteria.

AFTER SOME RESEARCHING

After reviewing a number of code samples offered by developers who have graciously shared their work on the Internet, I simply couldn’t find one that met my needs. Most of them were either too strongly-typed, too tightly coupled, or applicable only to the immediate problem at hand. Instead, what I sought was something a little more reusable and generic. So, in the absence of a viable solution, I took a little bit of time out of my schedule to create a truly generic prototype of one. This will be the focus of the solution below.

THE SOLUTION

To kick things off, I’ve created an Expression Tree compiler that accepts a generic type as an input parameter, along with a list of dynamic rules. Its job is to pre-compile the generic type and dynamic rules into a tree of dynamic, IQueryable Lambda expressions that can validate values in a generic list at runtime. As with all of my examples, I’ve hardcoded the data for my own convenience, but the rules and data can easily originate from a data backing store (e.g. a database, a file, memory, etc…). Regardless, shown in the code block below is the PrecompiledRules class, the heart of my Expression Trees Rules Engine; the call to Compile() near the bottom is the line of code that performs the actual Expression Tree compilation:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using ExpressionTreesRulesEngine.Entities;

namespace ExpressionTreesRulesEngine
{
    /// <summary>
    /// Author: Cole Francis, Architect
    /// The pre-compiled rules type
    /// </summary>
    public class PrecompiledRules
    {
        /// <summary>
        /// A method used to precompile rules for a provided type
        /// </summary>
        public static List<Func<T, bool>> CompileRule<T>(List<T> targetEntity, List<Rule> rules)
        {
            var compiledRules = new List<Func<T, bool>>();

            // Loop through the rules and compile them against the properties of the supplied shallow object 
            rules.ForEach(rule =>
            {
                var genericType = Expression.Parameter(typeof(T));
                var key = MemberExpression.Property(genericType, rule.ComparisonPredicate);
                var propertyType = typeof(T).GetProperty(rule.ComparisonPredicate).PropertyType;
                var value = Expression.Constant(Convert.ChangeType(rule.ComparisonValue, propertyType));
                var binaryExpression = Expression.MakeBinary(rule.ComparisonOperator, key, value);

                compiledRules.Add(Expression.Lambda<Func<T, bool>>(binaryExpression, genericType).Compile());
            });

            // Return the compiled rules to the caller
            return compiledRules;
        }
    }
}


As you can see from the code above, the only dependency in my Expression Trees Rules Engine is on the Rule class itself. Naturally, I could fold the Rule class into the PrecompiledRules class and eliminate the Rule class altogether, thereby eliminating all dependencies. However, I won’t bother with that for the purpose of this demonstration. But, just know that the possibility does exist. Nonetheless, shown below is the concrete Rule class:


using System;
using System.Linq.Expressions;

namespace ExpressionTreesRulesEngine.Entities
{
    /// <summary>
    /// The Rule type
    /// </summary>
    public class Rule
    {
        /// <summary>
        /// Denotes the rule's predicate (e.g. Name), comparison operator (e.g. ExpressionType.GreaterThan), and value (e.g. "Cole")
        /// </summary>
        public string ComparisonPredicate { get; set; }
        public ExpressionType ComparisonOperator { get; set; }
        public string ComparisonValue { get; set; }

        /// <summary>
        /// The rule constructor
        /// </summary>
        public Rule(string comparisonPredicate, ExpressionType comparisonOperator, string comparisonValue)
        {
            ComparisonPredicate = comparisonPredicate;
            ComparisonOperator = comparisonOperator;
            ComparisonValue = comparisonValue;
        }
    }
}


Additionally, I’ve constructed a Car class as a test class that I’ll eventually hydrate with data and then inject into the compiled Expression Tree object for various rules validations:


using System;
using ExpressionTreesRulesEngine.Interfaces;

namespace ExpressionTreesRulesEngine.Entities
{
    public class Car : ICar
    {
        public int Year { get; set; }
        public string Make { get; set; }
        public string Model { get; set; }
    }
}


Next, I’ve created a simple console application and added a project reference to the ExpressionTreesRulesEngine project. Afterwards, I’ve included the following lines of code in the Main() in order to construct a list of dynamic rules. Again, these are rules that can be conjured up from a data backing store at runtime. Also, I’m using the ICar interface that I created in the code block above to compile my rules against.

As you can also see, I’m leveraging the out-of-the-box LINQ ExpressionType enumeration to drive my conditional operators, which is in part what allows me to make the PrecompiledRules class so generic. Never fear, the ExpressionType enumeration contains a plethora of node operations and conditional operators (and more)…far more values than I’ll probably ever use in my lifetime.


List<Rule> rules = new List<Rule> 
{
     // Create some rules using LINQ.ExpressionTypes for the comparison operators
     new Rule ( "Year", ExpressionType.GreaterThan, "2012"),
     new Rule ( "Make", ExpressionType.Equal, "El Diablo"),
     new Rule ( "Model", ExpressionType.Equal, "Torch" )
};

var compiledMakeModelYearRules = PrecompiledRules.CompileRule(new List<ICar>(), rules);


Once I’ve compiled my rules, then I can simply tuck them away somewhere until I need them. For example, if I store my compiled rules in an out-of-process memory cache, then I can theoretically store them for the lifetime of the cache and invoke them whenever I need them to perform their magic. What’s more, because they’re compiled Lambda Expression Trees, they should be lightning quick against large lists of data. Other than pretending that the Car data isn’t hardcoded in the code example below, here’s how I would otherwise invoke the functionality of the rules engine:


// Create a list to house your test cars
List<Car> cars = new List<Car>();

// Create a car whose year and model fail the rules validations
Car car1_Bad = new Car { 
    Year = 2011,
    Make = "El Diablo",
    Model = "Torche"
};
            
// Create a car that meets all the conditions of the rules validations
Car car2_Good = new Car
{
    Year = 2015,
    Make = "El Diablo",
    Model = "Torch"
};

// Add your cars to the list
cars.Add(car1_Bad);
cars.Add(car2_Good);

// Iterate through your list of cars to see which ones meet the rules vs. the ones that don't
cars.ForEach(car => {
    if (compiledMakeModelYearRules.All(rule => rule(car)))   // a car passes only if every rule holds
    {
        Console.WriteLine(string.Concat("Car model: ", car.Model, " Passed the compiled rules engine check!"));
    }
    else
    {
        Console.WriteLine(string.Concat("Car model: ", car.Model, " Failed the compiled rules engine check!"));
    }
});

Console.WriteLine(string.Empty);
Console.WriteLine("Press any key to end...");
Console.ReadKey();


As expected, the end result is that car1_Bad fails the rule validations, because its year and model fall outside the range of acceptable values (e.g. 2011 < 2012 and 'Torche' != 'Torch'). In turn, car2_Good passes all of the rule validations as evidenced in the pic below:

TreeExpressionsResults

Well, that’s it. Granted, I can obviously improve on the abovementioned application by building better optics around the precise conditions that cause business rule failures, but that exceeds the intended scope of my article…at least for now. The real takeaway is that I can shift from validating the property values on a list of cars to validating some other object or invoking some other rule set based upon dynamic conditions at runtime, and because we’re using compiled Lambda Expression Trees, rule validations should be quick. I really hope you enjoyed this article. Thanks for reading and keep on coding! 🙂

Coupling Design Patterns

Author: Cole Francis, Architect

BACKGROUND PROBLEM

My last editorial focused on building out a small application using a simple Service Locator Pattern, which exposed a number of cons whenever the pattern is used in isolation. As you might recall, one of the biggest problems that developers and architects have with this pattern is the way that service object dependencies are created and then inconspicuously hidden from their callers inside the service object register of the Service Locator Class. This behavior can result in a solution that successfully compiles at build-time but then inexplicably crashes at runtime, often offering no insight into what went wrong.

THE REAL PROBLEM

I think it’s fair to say that when some developers think about design patterns, they don’t always consider the possibility of combining one design pattern with another in order to create a more extensible and robust framework. The reason opportunities like these are overlooked is that the potential for a pattern’s extensibility isn’t always obvious to its implementer.

For this very reason, I think it’s important to demonstrate how certain design patterns can be coupled together to create some very malleable application frameworks, and to prove my point I took the Service Locator Pattern I covered in my previous editorial and combined it with a very basic Factory Pattern.

Combining these two design patterns provides us with the ability to clearly separate the “what to do” from the “when to do it” concerns. It also offers build-time type checking and the ability to test each layer of the application using an object’s interface. Enough chit-chat. Let’s get on with the demo!

THE SOLUTION

Suppose we are a selective automobile manufacturer and offer two well-branded models:

    (1) A luxury model named “The Drifter”.
    (2) A sport luxury model named “The Showdown”.

To keep things simple, I’ve included very few parts for each make’s model. So, while each model is equipped with its own engine and emblem, both models share the same high-end stereo package and high-performance tires. Shown below is a snapshot of the ServiceLocator class, which looks nearly identical to the one I included in my last editorial, apart from the changes in its constructor. I’ve kept the structure consistent throughout the rest of the code examples in order to depict how the different classes and design patterns get tied together:


using System;
using System.Collections.Generic;

namespace FactoryPatternExample
{
    public class ServiceLocator
    {
        #region Member Variables

        /// <summary>
        /// An early-loaded dictionary object acting as a memory map for each interface's concrete type
        /// </summary>
        private IDictionary<object, object> services;

        #endregion

        #region IServiceLocator Methods

        /// <summary>
        /// Resolves the concrete service type using a passed-in interface
        /// </summary>
        public T Resolve<T>()
        {
            try
            {
                return (T)services[typeof(T)];
            }
            catch (KeyNotFoundException)
            {
                throw new ApplicationException("The requested service is not registered");
            }
        }

        /// <summary>
        /// Extends the service locator capabilities by allowing an interface and concrete type to
        /// be passed in for registration (e.g. if you wrap the assembly and wish to extend the
        /// service locator to new types added to the extended project)
        /// </summary>
        public void Register<T>(object resolver)
        {
            try
            {
                this.services[typeof(T)] = resolver;
            }
            catch (Exception)
            {

                throw;
            }
        }

        #endregion

        #region Constructor(s)

        /// <summary>
        /// The service locator constructor, which resolves a supplied interface with its corresponding concrete type
        /// </summary>
        public ServiceLocator()
        {
            services = new Dictionary<object, object>();

            // Registers the service in the locator
            this.services.Add(typeof(IDrifter_LuxuryVehicle), new Drifter_LuxuryVehicle());
            this.services.Add(typeof(IShowdown_SportVehicle), new Showdown_SportVehicle());
        }

        #endregion
    }
}


Where the abovementioned code differs from a basic Service Locator implementation is when we add our vehicles to the service register’s Dictionary object in the ServiceLocator() class constructor. When this occurs, the following parts are registered using a Factory Pattern that gets invoked in the constructor of the shared Vehicle() base class (shown below):


 
namespace FactoryPatternExample.Vehicles.Models
{
    public class Drifter_LuxuryVehicle : Vehicle, IDrifter_LuxuryVehicle
    {
        /// <summary>
        /// Factory Pattern for the luxury vehicle line of automobiles
        /// </summary>
        public override void CreateVehicle()
        {
            Parts.Add(new Parts.Emblems.SilverEmblem());
            Parts.Add(new Parts.Engines._350_LS());
            Parts.Add(new Parts.Stereos.HighEnd_X009());
            Parts.Add(new Parts.Tires.HighPerformancePlus());
        }
    }
}



 
namespace FactoryPatternExample.Vehicles.Models
{
    public class Showdown_SportVehicle : Vehicle, IShowdown_SportVehicle
    {
        /// <summary>
        /// Factory Pattern for the sport vehicle line of automobiles
        /// </summary>
        public override void CreateVehicle()
        {
            Parts.Add(new Parts.Emblems.GoldEmblem());
            Parts.Add(new Parts.Engines._777_ProSeries());
            Parts.Add(new Parts.Stereos.HighEnd_X009());
            Parts.Add(new Parts.Tires.HighPerformancePlus());
        }
    }
}


As you can see from the code above, both subtype classes inherit from the Vehicle() Base Class, but each subtype implements its own distinctive interface (e.g. IDrifter_LuxuryVehicle and IShowdown_SportVehicle). Forcing each subclass to implement its own unique interface is what ultimately allows a calling application to distinguish one vehicle type from another.

Additionally, it’s the Vehicle() Base Class that calls the CreateVehicle() Method inside its Constructor. But, because the CreateVehicle() Method in the Vehicle() Base Class is overridden by each subtype, each subtype is given the ability to add its own set of exclusive parts to the list of parts in the base class. As you can see, I’ve hardcoded all of the parts in my example out of convenience, but they can originate just as easily from a data backing store.



using System.Collections.Generic;

namespace FactoryPatternExample.Vehicles
{
    public abstract class Vehicle : IVehicle
    {
        List<Part> _parts = new List<Part>();

        public Vehicle()
        {
            this.CreateVehicle();
        }

        public List<Part> Parts 
        { 
            get
            {
                return _parts;
            }
        }

        // Factory Method
        public abstract void CreateVehicle();
    }
}


As for the caller (e.g. a client application), it only needs to resolve an object using that object’s interface via the Service Locator in order to obtain access to its publicly exposed methods and properties. (see below):


FactoryPatternExample.ServiceLocator serviceLocator = new FactoryPatternExample.ServiceLocator();
IDrifter_LuxuryVehicle luxuryVehicle = serviceLocator.Resolve<IDrifter_LuxuryVehicle>();

if (luxuryVehicle != null)
{
     foreach (Part part in ((IVehicle)(luxuryVehicle)).Parts)
     {
          Console.WriteLine(string.Concat("   - ", part.Label, ": ", part.Description));
     }
}

Here are the results after making a few minor tweaks to the UI code:


What’s even more impressive is that the Service Locator now offers compile-time type checking and the ability to test each layer of the code in isolation thanks to the inclusion of the Factory Pattern:

BuildTimeError

In summary, many of the faux pas experienced when implementing the Service Locator Design Pattern can be overcome by coupling it with a slick little Factory Design Pattern. What’s more, if we apply this same logic both equitably and ubiquitously across all design patterns, then it seems unfair to take a single design pattern and criticize its integrity and usefulness in complete sequestration, because it’s often the combination of multiple design patterns that make frameworks and applications more integral and robust. Thanks for reading and keep on coding! 🙂

The Service Locator Pattern

Author: Cole Francis, Architect

BACKGROUND:

In object-oriented programming (OOP), the Dependency Inversion Principle, or DIP, stipulates that the conventional dependency relationships, which run from high-level policy-setting modules down to low-level dependency modules, are inverted (i.e. reversed), creating a layer of indirection that resolves component dependencies. As a result, high-level components can exist independently of a low-level component’s implementation and all of its minutiae.
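To make that concrete, here’s a small, hypothetical example: the high-level OrderNotifier depends only on the IMessageSender abstraction, and the low-level detail (console, SMTP, a queue, etc.) is plugged in from the outside. All of the type names here are invented for illustration:

using System;

namespace DipExample
{
    // The abstraction belongs to the high-level policy layer; both layers now
    // depend on this interface rather than on one another.
    public interface IMessageSender
    {
        void Send(string message);
    }

    // High-level policy module: it knows nothing about how messages travel.
    public class OrderNotifier
    {
        private readonly IMessageSender _sender;

        public OrderNotifier(IMessageSender sender)
        {
            _sender = sender;
        }

        public void NotifyShipped(int orderId)
        {
            _sender.Send(string.Concat("Order ", orderId, " has shipped."));
        }
    }

    // Low-level detail module: one interchangeable implementation among many.
    public class ConsoleMessageSender : IMessageSender
    {
        public void Send(string message)
        {
            Console.WriteLine(message);
        }
    }
}

Swapping ConsoleMessageSender for, say, an SMTP-backed implementation requires no change to OrderNotifier whatsoever, and that reversal of who depends on whom is exactly the inversion DIP is after.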

DIP was suggested by Robert C. Martin in a paper he wrote in 1996 titled “Object Oriented Design Quality Metrics: an analysis of dependencies.” Following that, an article entitled “The Dependency Inversion Principle” appeared in the C++ Report in May 1996, and the principle was later covered in the books Agile Software Development, Principles, Patterns, and Practices and Agile Principles, Patterns, and Practices in C#.

The principle inverts the way most people think about Object Oriented Design (OOD), and the Service Locator pattern is an excellent pattern for demonstrating DIP, mainly because it provisions chosen low-level component implementations to the higher-level components at runtime.

The key tenets of the Service Locator Pattern are (in layman’s terms):

  • An interface is created, which identifies a set of callable methods that the concrete service class implements.
  • A concrete service class is created, which implements the interface. The concrete class is the component where all the real work gets done (e.g. calling a database, calling a WCF method, making an Http POST Ajax call, etc…).
  • A Service Locator class is created to loosely enlist the interface and its corresponding concrete service class. Once a client application requests an enlisted type, it’s the Service Locator’s job to resolve the interface to its concrete service class and hand that instance back to the caller so the service’s method(s) can be invoked.

A visual representation of the Dependency Inversion Pattern:

To help explain the pattern a little more clearly, I’ve put together a working example of the Service Locator Pattern implementing mock services. One call simulates the retrieval of credit card authorization codes for fulfilled orders coming from a database. Once I retrieve the authorization codes, I then simulate settling them using a remote credit card service provider. Afterwards, I mimic updating our database records with the credit card settlement codes that I received back from the credit card service provider. I’ve intentionally kept the following example simple so that it’s easy to follow and explain, and I’ve also broken the code down into color-coded sections to further dissect the responsibility of each region of code:


using System;
using System.Collections.Generic;

namespace ServiceLocatorExample
{
    /// <summary>
    /// A textbook implementation of the Service Locator Pattern.
    /// </summary>
    public class ServiceLocator : IServiceLocator
    {
        #region Member Variables

        /// <summary>
        /// An early-loaded dictionary object acting as a memory map for each interface's concrete type
        /// </summary>
        private IDictionary<object, object> services;

        #endregion

        #region IServiceLocator Methods

        /// <summary>
        /// Resolves the concrete service type using a passed-in interface
        /// </summary>
        public T Resolve<T>()
        {
            try
            {
                return (T)services[typeof(T)];
            }
            catch (KeyNotFoundException)
            {
                throw new ApplicationException("The requested service is not registered");
            }
        }

        /// <summary>
        /// Extends the service locator capabilities by allowing an interface and concrete type to 
        /// be passed in for registration (e.g. if you wrap the assembly and wish to extend the 
        /// service locator to new types added to the extended project)
        /// </summary>
        /// <remarks>
        /// The backing store is an IDictionary(object, object), where the first parameterized object 
        /// is the service interface and the second parameterized object is the concrete service type
        /// </remarks>
        public void Register<T>(object resolver)
        {
            this.services[typeof(T)] = resolver;
        }

        #endregion

        #region Constructor(s)

        /// <summary>
        /// The service locator constructor, which registers each supplied interface with its corresponding concrete type
        /// </summary>
        public ServiceLocator()
        {
            services = new Dictionary<object, object>();

            // Registers the services in the locator
            this.services.Add(typeof(IGetFulfilledOrderCCAuthCodes), new GetFulfilledOrderCCAuthCodes());
            this.services.Add(typeof(IGetFulfilledOrderCCSettlementCodes), new GetFulfilledOrderCCSettlementCodes());
            this.services.Add(typeof(IUpdateFulfilledOrderCCSettlementCodes), new UpdateFulfilledOrderCCSettlementCodes());
        }

        #endregion
    }
}


PRE-LOADING DECOUPLED RELATIONSHIPS TO A DICTIONARY OBJECT AT RUNTIME:

If you look at all the sections I’ve highlighted in yellow, all I’m doing is declaring a Dictionary Object to act as a “registry placeholder” in the Member Variables Region, and then I’m preloading the interface and service class as a key/value pair into the service registry in the Constructor(s) Region of the code.

The key/value pairs that get stored in the Dictionary Object loosely describe the concrete class and its corresponding interface that get registered as service objects (e.g. “IMyClass”, “MyClass”). An interface describes the methods and properties that are implemented in the concrete class, and the concrete class is the type where all the real work gets accomplished. In its most primitive form, the primary job of the ServiceLocator class is to store key/value pairs in a simple Dictionary object and either register or resolve those key/value pairs whenever it’s called upon to do so.
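To put the whole lifecycle in one place, here’s a minimal sketch using the hypothetical IMyClass/MyClass pair from the paragraph above, run against the Register<T>() and Resolve<T>() methods shown earlier:

// Hypothetical interface/concrete pair, purely for illustration.
public interface IMyClass
{
    string SayHello();
}

public class MyClass : IMyClass
{
    public string SayHello()
    {
        return "Hello from the concrete type!";
    }
}

// Register the pair at runtime, then resolve the interface and call through it.
ServiceLocatorExample.ServiceLocator locator = new ServiceLocatorExample.ServiceLocator();
locator.Register<IMyClass>(new MyClass());

IMyClass myClass = locator.Resolve<IMyClass>();
Console.WriteLine(myClass.SayHello());   // "Hello from the concrete type!"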

GETTING AND SETTING VALUES IN THE DICTIONARY OBJECT AT RUNTIME:

The section that’s color-coded in green denotes simple getter and setter-like methods that are publicly exposed to a consuming application, allowing that consuming application to either register new service objects in the Dictionary Object registry or resolve an existing service object in the Dictionary Object’s registry for use in a client application.

In fact, listed below is a textbook example of how a client application would resolve an existing service object in the Dictionary Object’s registry for use. In this example I’m resolving the IGetFulfilledOrderCCAuthCodes interface to its concrete type and then calling its GetFulfilledOrderCCAuthCodes() method using a console application that I quickly threw together…


/// <summary>
/// Gets the fulfilled orders' credit card authorization codes to settle on
/// </summary>
private static List<string> GetFulfilledOrderCCAuthCodes()
{
    // Note: the list's element type was lost in formatting; string is assumed here.
    ServiceLocatorExample.ServiceLocator locator2 = new ServiceLocatorExample.ServiceLocator();
    IGetFulfilledOrderCCAuthCodes o = locator2.Resolve<IGetFulfilledOrderCCAuthCodes>();
    return o.GetFulfilledOrderCCAuthCodes();
}
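And for a bird’s-eye view, here’s a hedged sketch of how the three registered services might be chained together to carry out the settle-and-update scenario described earlier. Only GetFulfilledOrderCCAuthCodes() actually appears in this post; the settlement and update method names, their signatures, and the string element type are all assumptions made for illustration:

ServiceLocatorExample.ServiceLocator locator = new ServiceLocatorExample.ServiceLocator();

// 1. Simulate pulling the fulfilled orders' authorization codes from the database.
IGetFulfilledOrderCCAuthCodes authService = locator.Resolve<IGetFulfilledOrderCCAuthCodes>();
List<string> authCodes = authService.GetFulfilledOrderCCAuthCodes();

// 2. Simulate settling those codes with the credit card service provider.
//    (Assumed method name and signature.)
IGetFulfilledOrderCCSettlementCodes settlementService = locator.Resolve<IGetFulfilledOrderCCSettlementCodes>();
List<string> settlementCodes = settlementService.GetFulfilledOrderCCSettlementCodes(authCodes);

// 3. Simulate writing the settlement codes back to the database records.
//    (Assumed method name and signature.)
IUpdateFulfilledOrderCCSettlementCodes updateService = locator.Resolve<IUpdateFulfilledOrderCCSettlementCodes>();
updateService.UpdateFulfilledOrderCCSettlementCodes(settlementCodes);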


CONGRATULATIONS! YOU’RE DONE:

Assuming that someone has already written the logic to retrieve the fulfilled order authorization codes from the database, your part is done! I really wish there were more to it than this so that I would look like some sort of architectural superhero, but alas, there isn’t. Thus, if all you were looking to get out of this post is how to implement a textbook example of the Service Locator design pattern, then you don’t need to go any further. However, if you want to know the advantages and disadvantages of the Service Locator design pattern, then please keep reading:

THE ADVANTAGES:

  • The Service Locator Pattern follows many well-recognized architectural principles, like POLA (the Principle of Least Astonishment), the Hollywood Principle, KISS, Dependency Inversion, YAGNI, and others…
  • Although the Service Locator Pattern is not considered to be a lightweight pattern, it’s still very simple to learn and easy to explain to others, which means that your junior developers’ eyes won’t pop out of their heads when you walk them through the concept.
    • This truly is a framework pattern that you can teach a less knowledgeable person over a lunch hour and expect them to fully understand when you’re done, because the Service Locator framework wires everything up using a minimal number of resources (e.g. a Dictionary object containing key/value pairs and the ability to read from and, optionally, write to that Dictionary object).
  • The Service Locator design pattern allows you to quickly and efficiently create a loosely coupled runtime linker that’s ideal for separating concerns across the entire solution, as each type is concerned only with itself and doesn’t care what any of the other components do.
  • For you architectural purists out there, just be aware that using the Service Locator design pattern doesn’t preclude you from coupling it with good Dependency Injection and Factory Pattern frameworks. In fact, by doing so you have the potential of creating a lasting framework that meets the conditions of both SOLID and POLA design principles, as well as many others. Perhaps I’ll make this the topic of my next architectural discussion…

THE DISADVANTAGES:

  • The services (i.e. Key/value pairs that represent concrete classes) that get registered in the Service Locator object are often considered “black box” items to consumers of the Service Locator class, meaning service objects and their dependencies are completely abstracted from the applications that call it. This loosely coupled structure makes it extremely difficult to track down issues without development access to the source code for not only the Service Locator itself, but also all of the dependent service objects that get registered by it at runtime.
    • If you find yourself in this situation and you don’t have access to the Service Locator source code, then tools like Microsoft’s ILDasm or Red Gate’s .NET Reflector can occasionally shed some light on what’s going on inside the Service Locator assembly; however, if the code happens to be obfuscated, or if hidden dependencies are completely unresolvable, then deciphering issues can become an exercise in futility. For this very reason, the Service Locator Pattern is often said to violate SOLID principles by hiding its dependencies, which is why some architectural gurus consider the Service Locator design to be more of a design anti-pattern.
  • Because the Service Locator’s dictionary is constructed using a key/value concept, all key names must be unique, meaning that it’s not very well-suited for distributed systems without adding more validation checks around the Register method’s dictionary insertion code (one possible hardening is sketched after the code snippet below).
  • Testing the registered objects becomes a difficult process because we aren’t able to test each object in isolation, as registered service objects are considered “black box” items to both the Service Locator and its callers.
  • As I previously mentioned, objects are “late-bound”, which means that a caller is going to burn up some CPU cycles waiting for the Service Locator to find the service’s registry entry in the Dictionary object and return it, and then the caller still has to invoke that object late-bound. Sure, the time is minimal, but it’s still time that isn’t being spent on enhancing something more valuable…like the user experience.
  • There are also some concerns from a security standpoint. Keep in mind that my ServiceLocator class allows people to dynamically register their own objects at runtime (see the code snippet below). Who knows what types of malicious service objects might get registered? What if our ServiceLocator class performed only a minimal set of validations before executing a method on a service object? Now that’s an awfully scary thought, which is why I’ve sketched one way to harden the Register method right after the snippet.



/// <summary>
/// Extends the service locator capabilities by allowing an interface and concrete type to 
/// be passed in for registration (e.g. if you wrap the assembly and wish to extend the 
/// service locator to new types added to the extended project)
/// </summary>
public void Register<T>(object resolver)
{
    this.services[typeof(T)] = resolver;
}
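As promised above, here’s a sketch of one way to harden Register<T>() against both the duplicate-key and the malicious-registration concerns. The _syncRoot field, the validation rules, and the error messages are all my own additions rather than part of the original example:

// A private field added to make concurrent registrations safe.
private readonly object _syncRoot = new object();

public void Register<T>(object resolver)
{
    if (resolver == null)
    {
        throw new ArgumentNullException("resolver");
    }

    // Refuse any instance that doesn't actually implement the interface
    // being registered, which closes off one avenue for malicious objects.
    if (!(resolver is T))
    {
        throw new ArgumentException(string.Format(
            "The supplied instance does not implement {0}.", typeof(T).Name));
    }

    lock (_syncRoot)
    {
        // Enforce key uniqueness instead of silently overwriting an entry.
        if (this.services.ContainsKey(typeof(T)))
        {
            throw new ApplicationException(string.Format(
                "A service is already registered for {0}.", typeof(T).Name));
        }

        this.services.Add(typeof(T), resolver);
    }
}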



SOME FINAL TALKING POINTS:

As you can see, the Service Locator Design Pattern is very simple to explain and understand, but it certainly isn’t a “one size fits all” design pattern, and a fair degree of caution should be exercised before choosing it as the framework to build your solution on. You might even consider coupling it with, or substituting it for, a more robust approach, like one that offers the same capabilities but surfaces its dependencies at compile time instead of runtime (e.g. constructor-based Dependency Injection).
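For contrast, here’s a minimal, hypothetical sketch of the constructor-injection flavor of Dependency Injection. The dependency is declared right in the signature, so a missing or mistyped service surfaces as a compiler error rather than a runtime KeyNotFoundException (the string element type is the same assumption as before):

using System.Collections.Generic;

public class SettlementProcessor
{
    private readonly IGetFulfilledOrderCCAuthCodes _authCodeService;

    // The dependency is visible, type-checked at compile time, and supplied
    // by the caller (or by a DI container) rather than looked up internally.
    public SettlementProcessor(IGetFulfilledOrderCCAuthCodes authCodeService)
    {
        _authCodeService = authCodeService;
    }

    public void Process()
    {
        List<string> authCodes = _authCodeService.GetFulfilledOrderCCAuthCodes();
        // ... settle and update as before ...
    }
}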

Personally, I think this pattern’s “sweet spot” is for applications that: (1) Have a very narrow design scope; (2) Are built for in-house use and not third-party distribution; (3) Leverage service objects that are well-known and well-tested by a development team; (4) Aren’t designed for blazing speed. Thanks for reading and keep on coding! 🙂