Archive for the ‘.NET Architecture’ Category

Here’s to a Successful First Year!

Posted: September 24, 2016 in .NET Architecture

ThankYou.png

To my Loyal Readers: 

I published my very first Möbius Straits article approximately one year ago, with an original goal of courting the technical intellect of just a few savvy thought leaders.  In retrospect, this exercise helped me remember just how difficult it is to stay committed to a long-term writing initiative.  To aggravate things just a bit more, my hobbies are motorcycling, music, food, and travel…and none of these things align particularly well with creative technical writing.

So, in an attempt to evade a potential degradation of creativity and writer’s block, I experimented with a number of things, including co-creating a small writing group encouraging individuals to write articles based upon their own unique interests.  This, in turn, offered me a dedicated block of time to work on my own material, spurring my productivity for a while.  The bottom line is that I forced myself to find time to write about things that I thought the technical community might find both relevant and interesting.  In my humble opinion, there is no greater gift than sharing what you know with someone else who can benefit from your experience and expertise. 

Regardless, I now find myself embarking on my second year of creative technical writing, and as I pore over the first year’s readership analytics, I’m very enthusiastic about what I see.  For example, over the past four months, Möbius Straits has found a steady footing of 600-650 readers per month.  What I’ve also discovered is that many of you are returning readers, representing over 130 countries around the world.  Also, with five days still left in September, this month’s readership is projected to exceed 700 unique visitors for the first time ever.

bestmonthonrecord

Finally, from a holistic standpoint, the data is even more exciting, as the total number of non-unique visits to Möbius Straits has grown almost 800% since January 1, 2016 (see the chart below), suggesting a very strong and loyal monthly following.  I am without words, maybe for the first time ever, and cannot thank you enough for your ongoing patronage.  As I mentioned in my opening paragraph, it can be difficult to stay committed to a long-term writing initiative; however, your ongoing support is more than enough inspiration to keep me emotionally invested for another year.  I really owe this to you.  Once again, thank you so much.  Love!

2015to2016

 

Thanks for reading and keep on coding! 🙂


CreativeIntegrationIoT.png

 

Author:  Cole Francis, Architect


BACKGROUND

This weekend I picked up a Raspberry Pi 3 Model B, which is the latest single-board computer from the Raspberry Pi Foundation. The Model B’s capabilities are quite impressive. For instance, it’s capable of streaming Blu-ray-quality video, and its 40-pin GPIO header gives you access to 27 GPIO pins, UART, I2C, and SPI, as well as both 3.3V and 5V power sources. It also comes with onboard Wi-Fi and Bluetooth, all in a compact unit that’s only slightly larger than a debit card.

What’s more, I also purchased a 7″ touch display that plugs right into the Raspberry Pi’s motherboard.  I was feeling creative, so I decided to find a way to take the technical foundation that I came up with in my previous article and somehow incorporate the Pi into that design, essentially taking it to a whole new level.  If you read my previous article, then you already know that my original design looks like this:

Microsoft Flow

Basically, the abovementioned design workflow represents a Microsoft Office 365 Flow component monitoring my Microsoft Office 365 Exchange Inbox for incoming emails. It looks for anything with “Win” in the email subject line and automatically calls an Azure-based MVC WebAPI endpoint whenever it encounters one.  In turn, the WebAPI endpoint then calls an internal method that sends out another email to User 2. 

In any event, I created the abovementioned workflow to simply prove that we can do practically anything we want with Microsoft Flow acting as a catalyst to perform work across disparate platforms and codebases.

However, now I’m going to alter the original design workflow just a bit.  First, I’m going to change the Microsoft Flow component to start passing email subject lines into our Azure-based WebAPI endpoint.  Secondly, I’m eliminating User 2 and substituting this person with a Raspberry Pi 3 IoT device running the Windows 10 IoT Core OS. Never fear, in this article I’m also going to provide you with step-by-step instructions on how to install the OS on a Raspberry Pi 3 device.  Also, from this point on I’m going to refer to the Raspberry Pi 3 as “the Pi”, just because it’s easier.

Once again, if you read my previous article, then you already know that the only time the Microsoft Flow component contacts the WebAPI is if an inbound email’s subject line matches the criteria we set up for it in Microsoft Flow.  In our new design, our Flow component will now pass the email subject line to a WebAPI endpoint, where it will be enqueued in a static property in the cloud.

Separately, the Pi will also contact the Azure-hosted WebAPI endpoint on a regularly scheduled interval to see if an enqueued subject is being stored.  If so, then the Pi’s call to the WebAPI will cause the WebAPI to dequeue the subject line and return it to the Pi.  Finally, the Pi will interrogate the returned subject line and perform an automated action using the returned data.  The following technical design workflow probably lays it out better than I can explain it.

FlowDesign2.png


SOLUTION

Our solution will take us through a number of steps, including:

  1. Installing Microsoft Windows 10 IoT Core on the Pi.
  2. Modifying the Microsoft Flow component that we created in the previous article.
  3. Modifying the Azure-based (cloud) WebAPI2 project that I created in my previous article on Microsoft Flow.
  4. Creating a new Universal Windows Application that will reside on the Pi.

So, let’s get started by first setting up the Pi and installing Windows 10 IoT Core on it. We’re going to build our own little Smart Factory.


SETTING UP THE RASPBERRY PI 3

First, we’ll need to download the tools that are necessary to get Windows 10 IoT Core onto the Pi.  You can get them here:

https://developer.microsoft.com/en-us/windows/iot/Downloads.htm

After we download the abovementioned tools, we’ll install them on our laptop or desktop.  Then we’ll be presented with the following wizard that will help guide us through the rest of the process.  The first screen that shows up is the “My devices” screen.  As you can see, it’s blank, and I can honestly say that I’ve never seen anything show up in this portion of the wizard, so we can ignore this section for now.  At this point, let’s sign into our Microsoft MSDN account and begin navigating through the wizard.

IoTWizard1

We can move on to the “Set up a new device” screen at this point:

IoTWizard2.png

Once we’re done adding our options, we’ll click the “Download and install” button in the lower right-hand corner of the screen.  It prompts us to insert an SD card if we haven’t already.

***A small word of caution***  The Raspberry Pi 3 uses a microSD card to host its operating system, so take that into consideration when shopping for cards.  What you’ll probably want to get is a microSD card with a regular SD card adapter.  That’s what I did.  You’ll also want to study the SD cards that Microsoft recommends for compatibility.  I unsuccessfully burnt through three SD cards before I gave up and went with their recommendation.  After conceding and going with a compatible SD card, I was able to render the Windows 10 IoT Core OS successfully, so don’t make the same costly mistake I did.

Anyway, we’ll eventually get to the point where we’re asked to erase the data on the SD card we’ve inserted.  This process deletes all existing data on our SD card, formats it using a FAT32 file system, and then installs the Windows 10 IoT Core image on it.

IoTWizard4.png

You should see the following screen when the wizard starts copying the files onto the SD card:

IoTWizard5.png

Our SD card is finally ready for action.

IoTWizard6.png

At this point, we can remove the SD card adapter from our laptop or desktop, and also remove the microSD card from the adapter.  Next, insert the microSD card into the Pi’s microSD slot and then boot it up.

Afterwards, we’ll connect an Ethernet cable from our laptop (or optionally a desktop) to the Ethernet port on the Raspberry Pi.  Then we’ll run the following command using the Pi’s local IP address.  For example, my Pi’s IP address is 169.254.16.5, but your Pi’s IP address might be different, so pay close attention to this detail.

Anyway, this sets the Pi up as a Remote Managed Trusted Host and allows us to administer it from our local machine, which in this case is a laptop.  So, now we should be able to deploy our code to the Pi and interact with it in Visual Studio 2015 debug mode.
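
For reference, the command in the screenshot below amounts to enabling WinRM on the local machine and adding the Pi’s IP address to its trusted hosts list.  From an elevated PowerShell prompt, that’s something along these lines (substitute your own Pi’s IP address):

net start WinRM
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "169.254.16.5"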

IoTWizard7.png

At this point, all of the heavy lifting for the Pi’s OS installation and communication infrastructure is complete.


MODIFYING OUR EXISTING MICROSOFT FLOW COMPONENT

So, let’s piggyback off the previous article I wrote on Microsoft Flow and extend it to incorporate a Pi into the mix.  But, before we do, let’s tweak our Microsoft “PSC Win Wire” Flow component just a bit, since our new design goal is to start passing the subject line of an inbound email to an Azure-hosted WebAPI endpoint.  If you recall, in the previous article we were simply calling a WebAPI endpoint without passing a parameter.  So, let’s change the “PSC Win Wire” Flow component so that we can start passing an email subject line to a WebAPI endpoint.  We’ll accomplish this by making the changes you see in the picture below.

IoTWizard8.png

We’re now officially done with the necessary modifications to our Microsoft Flow component, so let’s save our work.

Once again, it’s the Flow component’s job to continually monitor our email inbox for any emails that match the conditions we set up, which in this case means “PSC Win Wire” is included in the inbound email’s subject line.  Once this condition is met, our Flow component will be responsible for calling the “SetWhoSoldTheBusiness” endpoint in the Azure-hosted WebAPI, and the WebAPI will enqueue the email subject line.
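
For reference, the HTTP action’s URI ends up taking roughly the following shape, with Flow’s dynamic “Subject” token appended as the querystring value (the host name here is a placeholder, matching the one used in the UWA code later in this article):

https://yourazurewebsite.net/api/Email/SetWhoSoldTheBusiness?subjectLine=[Subject]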


MICROSOFT AZURE .NET MVC WebAPI (THE CLOUD)

Now let’s focus our attention on creating a couple of new WebAPI endpoints using Visual Studio 2015.  First, let’s create a SetWhoSoldTheBusiness endpoint that accepts a string parameter, which will contain the email subject line that gets passed to us by the Flow component.   Next, we’ll create a GetWhoSoldTheBusiness endpoint, which will be called by the Pi to retrieve email subject lines, as shown in the C# code below.



using System;
using System.Web.Http;
using BlueBird.Repository;

namespace BlueBird.Controllers
{
    /// <summary>
    /// The email controller
    /// </summary>
    public class EmailController : ApiController
    {
        /// <summary>
        /// Set the region that sold the business
        /// </summary>
        /// <param name="subjectLine">The subject line of the email</param>
        // GET: api/Email/SetWhoSoldTheBusiness?subjectLine=""
        [HttpGet]
        public void SetWhoSoldTheBusiness(string subjectLine)
        {
            try
            {
                // Hand the subject line off to the repository, which enqueues it
                Email.SetWhoSoldTheBusiness(subjectLine);
            }
            catch (Exception)
            {
                throw;
            }
        }

        /// <summary>
        /// Get the region that sold the business
        /// </summary>
        /// <returns>The dequeued subject line, or an empty string if none exists</returns>
        // GET: api/Email/GetWhoSoldTheBusiness
        [HttpGet]
        public string GetWhoSoldTheBusiness()
        {
            try
            {
                // Dequeue and return the oldest stored subject line (if any)
                return Email.GetWhoSoldTheBusiness();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}


Whereas our WebAPI endpoint code above acts as a façade layer for calls being made from external callers, the concrete class below is tasked with actually accomplishing the real work, like storing and retrieving the email subject lines.  It’s the job of the BlueBird.Repository.Email class to enqueue and dequeue email subject lines whenever it’s called on to do so by the SetWhoSoldTheBusiness and GetWhoSoldTheBusiness WebAPI endpoints in the abovementioned code.



using System;
using System.Collections.Generic;

namespace BlueBird.Repository
{
    /// <summary>
    /// The email repository
    /// </summary>
    public static class Email //: IEmail
    {
        /// <summary>
        /// The queue of regions that sold the business.  Note: a static,
        /// in-memory queue is fine for this demo, but it isn't thread-safe
        /// or durable; real code would reach for something like a
        /// ConcurrentQueue or a backing store.
        /// </summary>
        public static Queue<string> whoSoldTheBusiness = new Queue<string>();

        /// <summary>
        /// Determine who sold the business via the email subject line and drop it in the queue
        /// </summary>
        /// <param name="subjectLine">The email subject line</param>
        public static void SetWhoSoldTheBusiness(string subjectLine)
        {
            try
            {
                if (subjectLine.Contains("KC"))
                {
                    whoSoldTheBusiness.Enqueue("KC");
                }
                else if (subjectLine.Contains("CHI"))
                {
                    whoSoldTheBusiness.Enqueue("CHI");
                }
                else if (subjectLine.Contains("TAL"))
                {
                    whoSoldTheBusiness.Enqueue("TAL");
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        /// <summary>
        /// Return the region that sold the business and drop it from the queue
        /// </summary>
        /// <returns>The dequeued region, or an empty string if the queue is empty</returns>
        public static string GetWhoSoldTheBusiness()
        {
            string retVal = string.Empty;

            try
            {
                if (whoSoldTheBusiness != null && whoSoldTheBusiness.Count > 0)
                {
                    retVal = whoSoldTheBusiness.Dequeue();
                }

                return retVal;
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}
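
Once the project is deployed, a quick way to sanity-check both endpoints (again using the placeholder host name from the UWA code later in this article) is to hit them straight from a browser:

https://yourazurewebsite.net/api/Email/SetWhoSoldTheBusiness?subjectLine=PSC%20Win%20Wire%20-%20KC
https://yourazurewebsite.net/api/Email/GetWhoSoldTheBusiness

The first call should enqueue “KC”, and the second should dequeue and return it.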


Well, this represents all the work we’ll need to do in the WebAPI project, aside from deploying it to the Azure Cloud.


UNIVERSAL WINDOWS APPLICATION (UWA)

So, now let’s create a blank Universal Windows Application (herein referred to simply as UWA) in Visual Studio 2015, which will act as a second caller to the WebAPI endpoints we created above.  As a quick recap, our Microsoft Flow component calls a method in our cloud-hosted WebAPI to enqueue email subject lines anytime its conditions are met. 

Thus, it’s only fitting that our UWA, which will be hosted on the Pi, will have the ability to retrieve the data that’s enqueued in our WebAPI so that it can do something creative with that data.  As a result, it will be the responsibility of the UWA living in the Pi to ping our Azure WebAPI GetWhoSoldTheBusiness method every 10 seconds to find out if any enqueued email subject lines exist.  If so, then it will retrieve them. 

What’s more, upon retrieving an email subject line, it will interrogate it for “KC” (for Kansas City) or “CHI” (for Chicago).  If it finds “KC”, then we’ll have it play one song on the Pi, and if it finds “CHI”, then we’ll play a different song.  So, let’s start creating our UWA IoT application. We’ll use the Visual Studio 2015 (Universal Windows) template to get started.  Let’s name the new project PSCBlueBirdIoT, just like what’s shown in the screen below:

IoTWizard9.png

After creating the UWA project, we’ll want to right-click on the project and enter our Pi’s local IP address.  We’ll also want to target it as a Remote Machine.  Also, let’s make sure that we check the “Uninstall and then re-install my package” option so that we’re not creating new instances of our application every time we redeploy to the Pi.  One last detail: let’s make sure that we check the “Allow local network loopback” option under the “Start Action” grouping, as shown below.

IoTWizard10.png

Our code’s going to be really simple for the UWA project.  Let’s create a simple timer inside of it that fires every ten seconds.  Whenever the timer fires, its sole responsibility will be to make an asynchronous HTTP call to our Azure-hosted (cloud-hosted) WebAPI endpoint, GetWhoSoldTheBusiness, and it will pull back a value from the WebAPI queue object if an entry exists.  As previously mentioned, if the email subject line contains “KC” (e.g. “PSC Win Wire – KC”), then we’ll play one song; otherwise, we’ll play a different song if the email subject line contains “CHI” (e.g. “PSC Win Wire – CHI”).  Here’s the code for this:



using System;
using System.Collections.Generic;
using System.Net.Http;
using Windows.Media.Playback;
using Windows.Storage;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

namespace PSCBlueBirdIoT
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        #region Private Member Variables

        /// <summary>
        /// Local timer that polls the Azure-hosted WebAPI
        /// </summary>
        DispatcherTimer _timer = new DispatcherTimer();
        Queue<string> _queueDealsWon = new Queue<string>();

        #endregion

        #region Events

        /// <summary>
        /// The main page
        /// </summary>
        public MainPage()
        {
            this.InitializeComponent();
            this.DispatchTimerSetup();
        }

        /// <summary>
        /// Fires on timer tick
        /// </summary>
        /// <param name="sender">The timer</param>
        /// <param name="e">Any additional event arguments</param>
        private void _timer_Tick(object sender, object e)
        {
            this.GetWhoSoldTheBusiness();
        }

        #endregion

        #region Private Methods

        /// <summary>
        /// The setup for the dispatch timer
        /// </summary>
        private void DispatchTimerSetup()
        {
            _timer.Tick += _timer_Tick;
            _timer.Interval = new TimeSpan(0, 0, 10);   // fire every ten seconds, per the design above
            _timer.Start();
        }

        /// <summary>
        /// Get who sold the business and play the corresponding song
        /// </summary>
        private async void GetWhoSoldTheBusiness()
        {
            try
            {
                using (var client = new HttpClient())
                {
                    // Poll the Azure-hosted WebAPI for an enqueued email subject line
                    string retVal = await client.GetStringAsync(new Uri("https://yourazurewebsite.net/api/Email/GetWhoSoldTheBusiness"));
                    retVal = retVal.Replace("\\", "");

                    if (!string.IsNullOrEmpty(retVal) && retVal != "\"\"")
                    {
                        // Map the region embedded in the subject line to an MP3 in the app's Music folder
                        if (retVal.Contains("CHI"))
                        {
                            retVal = "CHI.mp3";
                        }
                        else if (retVal.Contains("KC"))
                        {
                            retVal = "KC.mp3";
                        }
                        _queueDealsWon.Enqueue(retVal);

                        // Restart the background media player and play the selected song
                        StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///Music/" + _queueDealsWon.Dequeue()));
                        BackgroundMediaPlayer.Shutdown();
                        MediaPlayer player = BackgroundMediaPlayer.Current;
                        player.AutoPlay = false;
                        player.SetFileSource(file);
                        player.Play();
                    }
                }
            }
            catch (Exception)
            {
                // Rethrow without resetting the stack trace (the original "throw e;" would)
                throw;
            }
        }

        #endregion
    }
}


Now that this is done, let’s build and deploy our UWA application onto the Pi.  The pictorial below shows it doing its magic.  Because we set the Pi up as a trusted remote host earlier, we can also do things like debug it using Visual Studio 2015 (Administrator mode) on our local machine.

IoTWizard11.png


TESTING IT ALL OUT

At this point, we’re done…as in “done, done”. 🙂  So, let’s test it end-to-end by kicking off an email to ourselves that matches the criteria we entered in our Microsoft Flow component.  If all goes as planned, then our Flow component will pick it up, call our Azure-based WebAPI endpoint, and then enqueue our email subject line.

Finally, our UWA, which lives on the Pi, will separately call the other Azure-based WebAPI endpoint every 10 seconds, dequeuing and returning any email subject lines that might exist inside our Azure-hosted WebAPI.  Once the UWA application retrieves an email subject line, it will then determine whether “CHI” or “KC” is present within the subject line and play one song or another based on the response.  Pretty cool, huh?!?  Anyway, here’s a quick video of it in action…

Thanks for reading and keep on coding! 🙂

MicrosoftFlow

Author: Cole Francis, Architect

Today I had the pleasure of working with Microsoft Flow, Microsoft’s latest SaaS-based workflow offering. Introduced in April 2016 and still in Preview mode, Flow allows both developers and non-developers alike to rapidly create visual workflow sequences using a number of on-prem and cloud-based services.  In fact, anyone who is interested in “low code” or “no code” integration-centric solutions might want to take a closer look at Microsoft Flow.

Given this, my goal for today is to leverage Microsoft Flow to create a very rudimentary workflow that gets kicked off by an ordinary email, calls a cloud-based MVC WebAPI endpoint via an HTTP GET request, and then ultimately cranks out a second email initiated by the WebAPI endpoint.

Obviously, the custom WebAPI endpoint isn’t necessary to generate the second email, as Microsoft Flow can accomplish this on its own without requiring any custom code at all.  So, the reason I’m adding the custom WebAPI endpoint into the mix is to simply prove that Flow has the ability to integrate with a custom RESTful WebAPI endpoint.  After all, if I can successfully accomplish this, then I can foreseeably communicate with any endpoint on any codebase on any platform.  So, here’s my overall architectural design and workflow:

Microsoft Flow

To kick things off, let’s create a simple workflow using Microsoft Flow.  We’ll do this by first logging into Microsoft Office 365.  If we look closely, we’ll find the Flow application within the waffle menu:

Office365Portal

After clicking on the Flow application, I’m taken to the next screen where I can either choose from an impressive number of existing workflow templates, or I can optionally choose to create my own custom workflow:

FlowTemplates.png

I need to call out that I’ve just shown you a very small fraction of the pre-defined templates that are actually available in Flow.  As of this writing, there are hundreds of pre-defined templates that can be used to integrate with an impressive number of Microsoft and non-Microsoft platforms.  The real beauty is that they can be used to perform some very impressive tasks without writing a lick of code.  For example, I can incorporate approval workflows, collect data, interact with various email platforms, perform mobile push notifications (incl. iOS), track productivity, interact with various social media channels, synchronize data, etc…

Moreover, Microsoft Flow comes with an impressive number of triggers, which interact with a generous number of platforms, such as Box, DropBox, Dynamics CRM, Facebook, GitHub, Google Calendar, Instagram, MailChimp, Office365, OneDrive, OneDrive for Business, Project Online, RSS, Salesforce, SharePoint, SparkPost, Trello, Twitter, Visual Studio Team Services, Wunderlist, Yammer, YouTube, PowerApps, and more.

So, let’s continue building our very own Microsoft Flow workflow object.  I’ll do this by clicking on the “My Flows” option at the top of the web page.  This navigates me to a page that displays my saved workflows.  In my case, I don’t currently have any saved workflows, so I’ll click the “Create new flow” button that’s available to me (see the image below).

MyFlows

Next, I’ll search for the word “Mail”, which presents me with the following options:

Office365Email.png

Since the company I work for uses Microsoft Office 365 Outlook, I’ll select that option.  After doing this, I’m presented with the following “Action widget”.

Office365Inbox.png

I will then click on the “Show advanced options” link, which provides me with some additional options.  I’ll fill in the information using something that meets my specific needs.  In my particular case, I want to be able to kick-off my workflow from any email that contains “Win” in the Subject line.

Office365InboxOptions

Next, I’ll click on the (+ New step) link at the bottom of my widget, and I’m presented with some additional options.  As you can see, I can either “Add another action”, “Add a condition”, or click on the “…More” option to do things like “Add an apply to each” option, “Add a do until” condition, or “Add a scope”.

Office365InboxOptions0.png

As I previously mentioned, I want to be able to call a custom Azure-based RESTful WebAPI endpoint from my custom Flow object.  So, I’ll click on the “Add an action”, and then I’ll select the “HTTP” widget from the list of actions that are available.

RESTfulWebAPIoption.png

After clicking on the “HTTP” widget, I’m now presented with the “HTTP” widget options.  At a minimum, the “HTTP” object will allow me to specify a URI for my WebAPI endpoint (e.g. http://www.microsoftazure.net/XXXEndpoint), as well as an HTTP verb (e.g. GET, POST, DELETE, etc…).  You’ll need to fill in your RESTful WebAPI endpoint data according to your own needs, but mine looks like this:

HTTPOption.png

After I’m done, I can save my custom Flow by clicking the “Create Flow” button at the top of the page and providing my Flow with a meaningful name.  Providing your Flow with a meaningful name is very important, because you could eventually have a hundred of these things, so being able to distinguish one from another will be key.  For example, I named my custom Flow “PSC Win Wire”.  After saving my Flow, I can now do things like create additional Flows, edit existing Flows, activate or deactivate Flows, delete Flows, and review the viability and performance of my existing Flows by clicking on the “List Runs” icon that’s available to me.

SaveFlow.png

In any event, now that I’ve completed my custom Flow object, all I need to do is quickly spin up a .NET MVC WebAPI2 solution that contains my custom WebAPI endpoint, and then push my bits to the Cloud in order to publicly expose my endpoint.  I need to point out that my solution doesn’t necessarily need to be hosted in the Cloud, as a publicly exposed on-prem endpoint should work just fine.  However, I don’t have a quick way of publicly exposing my WebAPI endpoint on-prem, so resorting to the Cloud is the best approach for me.

I also need to point out again that creating a custom .NET MVC WebAPI isn’t necessary to run Microsoft Flows.  There are plenty of OOB templates that don’t require you to write any custom code at all.  This type of versatility is what makes Microsoft Flow so alluring.

In any case, the end result of my .NET MVC WebAPI2 project is shown below.  As you can see, the core WebAPI code generates an email (my real code will have values where you only see XXXX’s in the pic below…sorry! 🙂).

MVCWebAPI.png

The GetLatestEmail() method will get called from a publicly exposed endpoint in the EmailController() class.  For simplicity’s sake, my EmailController class only contains one endpoint, and it’s named GetLatestEmail():

The Controller.png
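
The screenshots above redact the interesting values, but a minimal sketch of the controller’s general shape, assuming System.Net.Mail for the outbound email, borrowing the BlueBird.Controllers namespace from the follow-on article, and with hypothetical XXXX placeholders standing in for the redacted SMTP host, credentials, and addresses, might look something like this:

using System.Net;
using System.Net.Mail;
using System.Web.Http;

namespace BlueBird.Controllers
{
    public class EmailController : ApiController
    {
        // GET: api/Email/GetLatestEmail
        [HttpGet]
        public string GetLatestEmail()
        {
            // Crank out the second email whenever Microsoft Flow calls this endpoint
            using (var client = new SmtpClient("XXXX", 587))
            {
                client.Credentials = new NetworkCredential("XXXX", "XXXX");
                client.EnableSsl = true;

                var message = new MailMessage("XXXX@XXXX.com", "XXXX@XXXX.com")
                {
                    Subject = "XXXX",
                    Body = "XXXX"
                };

                client.Send(message);
            }

            return "Email sent";
        }
    }
}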

So, now that I’m done setting everything up, it’s time for me to publish my code to the Azure Cloud.  I’ll start this off by cleaning and building my solution.  Afterwards, I’ll right-click on my project in the Solution Explorer pane, and then I’ll click on the Publish option that appears below.

Publish1.png

Now that this is out of the way, I’ll begin entering in my Azure Publish Web profile options.  Since I’m deploying an MVC application that contains a WebAPI2 endpoint, I’ve selected the “Microsoft Azure Web Apps” option from the Profile category.

Publish2.png

Next, I’ll enter the “Connection” options and fill that information in.   Afterwards, I should now have enough information to publish my solution to the Azure Cloud.  Of course, if you’re trying this on your own, this example assumes that you already have a Microsoft Azure Account.  If you don’t have a Microsoft Azure account, then you can find out more about it by clicking here.

Publish3.png

Regardless, I’ll click the “Publish” button now, which will automatically compile my code. If the build is successful then it will publish my bits to Microsoft’s Azure Cloud.  Now comes the fun part…testing it out!

First, I’ll create an email that matches the same conditions that were specified by me in the “Office 365 Outlook – When an email arrives” Flow widget I previously created.  If you recall, that workflow widget is being triggered by the word “Win” in the Subject line of any email that gets sent to me, so I’ll make sure that my test email meets that condition.

PSCWinWireEmail

After I send an email that meets my Flow’s conditions, then my custom Flow object should get kicked-off and call my endpoint, which means that if all goes well, then I should receive another email from my WebAPI endpoint.  Hey, look!  I successfully received an email from the WebAPI endpoint, just as I expected.  That was really quick!  🙂

EmailResults.png

Now that we know that our custom Flow object works from A to Z, I want to tell you about another really cool Microsoft Flow feature, and that’s the ability to monitor the progress of my custom Flow objects.  I can accomplish this by clicking on the “List Runs” icon in the “My Flows” section of the Microsoft Flow main page (see below).

ListRun1.png

Doing this will conjure up the following page.  From here, I can gain more insight and visibility into the viability and efficiency of my custom Flows by simply clicking on the arrow to the right of each of the rows below.

ListRun2.png

Once I do that, I’m presented with the following page.  At this point, I can drill down into the objects by clicking on them, which will display all of the metadata associated with the selected widget.  Pretty cool, huh!

ListRun3.png

Well, that’s it for this example.  I hope you’ve enjoyed my walkthrough.  I personally find Microsoft Flow to be a very promising SaaS-based workflow offering.

Thanks for reading and keep on coding! 🙂

AngularJS.png

Author: Cole Francis, Architect

BACKGROUND

While you may not be able to tell it by my verbose articles, I am a devout source code minimalist by nature.  Although I’m not entirely certain how I ended up like this, I do have a few loose theories.

  1. I’m probably lazy.  I state this because I’m constantly looking for ways to do more work in fewer lines of code.  This is probably why I’m so partial to software design patterns.  I feel like once I know them, then being able to recapitulate them on command allows me to manufacture software at a much quicker pace.  If you’ve spent any time at all playing in the software integration space, then you can appreciate how imperative it is to be quick and nimble.
  2. I’m kind of old.  I cut my teeth in a period when machine resources weren’t exactly plentiful, so it was extremely important that your code didn’t consume too much memory, throttle down the CPU (singular), or take up an extraordinary amount of space on the hard drive or network share.  If it did, people had no problem crawling out of the woodwork to scold you.
  3. I have a guilty conscience.  As much as I would like to code with reckless abandon, I simply cannot bring myself to do it.  I’m sure I would lose sleep at night if I did.  In my opinion, concerns need to be separated, coding conventions need to be followed, yada, yada, yada…  However, there are situations that sometimes cause me to overlook certain coding standards in favor of a lazier approach, and that’s when simplicity trumps rigidity!

So, without further delay, here’s a perfect example of my laziness persevering.  Let’s say that an AngularJS code base exists that properly separates its concerns by implementing a number of client-side controllers that perform their own generic activities. At this point, you’re ready to lay down the client-side service layer functions to communicate with a number of remote Web-based REST API endpoints.  So, you start to write a bunch of service functions that implement the AngularJS $http service and its implied promise pattern, and then suddenly you have an epiphany!  Why not write one generic AngularJS service function that is capable of calling most RESTful Web API endpoints?  So, you think about it for a second, and then you lay down this little eclectic dynamo instead:



var contenttype = 'application/json';

/* A generic async service function that can call any RESTful Web API endpoint
   and hand the implied $http promise back to the caller.
*/
this.serviceAction = function(httpVerb, baseUrl, endpoint, qs) {
  return $http({
    method: httpVerb,
    url: baseUrl + endpoint + qs,
    // $http expects a headers map; the jQuery-style contentType/dataType
    // options would simply be ignored here
    headers: { 'Content-Type': contenttype }
  }).success(function(data){
    return data;
  }).error(function(){
    return null;
  });
};

 
That’s literally all there is to it! So, to wrap things up on the AngularJS client-side controller, you would call the service by implementing a fleshed out version of the code snippet below. Provided you aren’t passing in lists of data, and as long as the content types and data types follow the same pattern, then you should be able to write an endless number of AngularJS controller functions that can all call into the same service function, much like the one I’ve provided above. See, I told you I’m lazy. 🙂



/* Async call the AngularJS Service (shown above)
*/
$scope.doStuff = function (passedInId) {

  // Make a call to the AngularJS layer to call a remote endpoint
  httpservice.serviceAction('GET', $scope.baseURL(), '/some/endpoint', '?id=' + passedInId).then(function (response) {
    if (response != null && response.data.length > 0) {
      // Apply the response data to two-way bound array here!
    }
  });
};

 
Thanks for reading and keep on coding! 🙂

CORAndUnityIoC

Author: Cole Francis, Architect


BACKGROUND

The Chain-of-Responsibility design pattern is a rather obscure behavioral pattern that is useful for creating workflows within a solution. Despite its obscurity, it happens to be one of my favorite all-time design patterns. The pattern chains the concrete handling classes together through a common chaining mechanism (some architects and developers also refer to this as the command class), passing a common request payload to each class through a succession pipeline until it reaches a class that is able to act upon the request.

I have personally and successfully architected a solution using this design pattern for a large and well-branded U.S. retail company. The solution is responsible for printing sale signs in well over a thousand stores across the United States, saving my client approximately $10 million a year, year-over-year. From a development perspective, leveraging this design pattern made it easy to isolate the various units of the workflow, simplifying the process of assigning my various team members to the discrete pieces of functionality that were exposed as a result of using this behavioral design pattern.

While I am unable to discuss anything further about my previous personal experiences with this client, I am able to demonstrate a working example of this design pattern and focus on a business problem that is completely unrelated to anything I’ve done in the past. So, without further ado, allow me to introduce a fictitious business problem to you.


THE BUSINESS PROBLEM

A local transport company manages a fleet of 3 trucks and delivers locally made engines to different distribution centers across the Midwest. Here are the cargo hold specifications for their three delivery trucks:

  • Truck 1: (max load 5,000 lbs. and is used only to haul Red Engines)
  • Truck 2: (max load 6,000 lbs. and is used only to haul Yellow Engines)
  • Truck 3: (max load 6,500 lbs. and is used only to haul Blue Engines)

The company that manufactures these engines also ships their products locally to the T.H.E. Truck King Trucking Company warehouse for storage and distribution. After it’s all said and done, the dimensions of the boxed engines are all the same (42″W x 50″L x 36″H), but the engine weights and the locations the engines get shipped to vary significantly:

  • Red Engines: 532 lbs. each (only get shipped to Chicago, IL; Model Number: R1773456935)
  • Blue Engines: 1,386 lbs. each (only get shipped to Overland Park, KS; Model Number: B8439845841)
  • Yellow Engines: 1,783 lbs. each (only get shipped to Des Moines, IA; Model Number: Y4833345760)

Here are some other things the client brings up during your conversation with them:

  • The crates the engines are transported on are very durable and allow for the engines to be stacked on top of each other during transport, which means each truck will reach its maximum weight load well before it runs out of space.
  • As pointed out above, specific engine types get shipped to specific locations.
  • Occasionally engines are put in the wrong trucks, as loaders are very busy and are working strictly from memory. When this occurs, it’s a miserable experience for everyone.
  • Sometimes the trucks aren’t filled to capacity, causing them to operate well below their maximum load. When this occurs, shipping and other operational costs skyrocket, causing us to lose money.
  • The loading crew has been notified of these issues, but mistakes continue to be made.
  • The larger distribution centers we ship to have stated that if the problem isn’t resolved soon, they will cancel our contract and look for a more reliable and efficient transport company.
  • We need a solution that tells our loading crew what truck to load the engine on, as well as something that tells them whether or not the truck is filled to maximum capacity from a maximum weight load standpoint.
  • The engine manufacturing company doesn’t plan to produce any other types of engines in the near future, but there is a possibility they may want to have more than one type of engine distributed to each of the three distribution points. There are no firm arrangements for this to happen yet, but the solution that is put into place must take this into account.


THE SOLUTION

Because we know the dimensions of each truck’s cargo hold, as well as the weight and dimensions of each product being shipped, the solution should be architected to allow a handler to determine the best course of action for the request it is handling. This is similar to the shipping company allowing the handlers of the crates to determine which trucks the different products should be loaded into.

However, instead of allowing a single entity to make this type of determination, we’ll decouple this association and allow the request to pass through a chain of inspecting classes, forcing the responsibility of determining whether or not it is capable of handling the request onto each class itself. If a class determines it cannot handle the request, then it will pass the request on to the next handler, which will then decide whether or not it is able to handle the request. This process will continue until we’ve exhausted every available concrete handling class.

By architecting the solution in this manner, we’re fairly sure that we’ll be able to meet all of the functional requirements articulated to us by our client.

The Chain of Responsibility (CoR) Pattern

CoR

Given the previous requirements, one great pattern that allows a request to pass along a chain of handlers until one of them determines it is capable of handling the request is the “Chain of Responsibility”, or CoR, pattern. Unlike a Mediator/Observer pattern, CoR doesn’t require the sender to maintain links to every handler in the chain; each handler knows only its successor. This is why CoR stands out from other patterns: the sender and the receiver are completely decoupled, and each handler maintains its own set of standards and separation of concerns, making it a true object-oriented workflow pattern.

Another great aspect of this pattern is that there is very little logic in the sender beyond setting up the successive chain of concrete handlers. What’s more, the sender isn’t tasked with interrogating the request before handing it to a handler. It merely chains the concrete handlers together and allows them to figure out the rest, with each concrete handler being responsible for its own separate criteria and concerns.


CoR PATTERN CLASSES AND OBJECTS

The classes and objects participating in this pattern are:

  1. The Handler Class (Handler Interface):
    1. An inheritable class that:
      i. Defines the interface used to handle requests.
      ii. Exposes a method that sets the successor, i.e. the next concrete handler in the chain.
      iii. Implements the successor link in the chain.
  2. The ConcreteHandler Class (Handler):
    1. Interrogates the request and determines whether or not it can act upon the information.
    2. If it can’t act upon the request, then it forwards the request to the next handler (aka the successor).
  3. The Sender Class (Messenger):
    1. The Sender Class can either be a client application that establishes the successive order of the ConcreteHandler classes, or
    2. It can be a concrete class that acts autonomously of the client to establish the successive order of the ConcreteHandler classes and then acts upon them.


THE HANDLER CLASS

We’ll start out with the creation of the Handler class. As outlined in the previous section, the Handler class is responsible for storing the next successor concrete class in a chain of classes that we’ll set up in just a bit.

using System;
using THE_TruckKing.Interfaces;

namespace THE_TruckKing
{
    /// <summary>
    /// The handler class stores the successor link that each concrete handler
    /// invokes whenever it cannot act upon a request itself.
    /// This is the underlying key to the success of the 'Chain of Responsibility Pattern'
    /// </summary>
    public abstract class Handler
    {
        protected Handler successor;

        public abstract IEngine HandleRequest(IEngine engine);

        public void SetSuccessor(Handler successor)
        {
            this.successor = successor;
        }
    }
}


THE CONCRETE CLASSES (THIS CAN BE BASED ON SUBJECT OR FUNCTION)

Next, we’ll implement the concrete handlers, which inherit from the Handler class and act upon the request message. Here is the concrete handler for the blue engines.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using THE_TruckKing.Utilities;
using THE_TruckKing.Interfaces;

namespace THE_TruckKing
{
    /// <summary>
    /// Handles blue engine shipments to Overland Park, KS
    /// </summary>
    /// 
    public class ShipmentHandlerBlue : Handler
    {
        public override IEngine HandleRequest(IEngine engine)
        {
            int totalQuantityShippingHold = 0;
            int totalReturnToStock = 0;

            if (engine.Type == Constants.Engines.BLUE_ENGINE)
            {
                for (int i = 0; i < engine.QuantityShipping; i++)
                {
                    if (Constants.Engine_MAX_Weights.BLUE_ENGINE >=
                        ((engine.TotalQuantityShipping + 1) * Constants.Engine_Base_Weights.BLUE_ENGINE))
                    {
                        engine.TotalQuantityShipping += 1;
                        totalQuantityShippingHold += 1;
                    }
                    else
                    {
                        totalReturnToStock += 1;
                    }
                }

                Console.WriteLine(string.Format("Load {0} blue engine(s) on the truck destined for {1}",
                    totalQuantityShippingHold, Constants.Trucks_Destinations.BLUE_TRUCK));
                Console.WriteLine("");

                if (totalReturnToStock > 0)
                {
                    Console.WriteLine(string.Format("{0} of the {1} yellow engine(s) exceed load capacity. Please return them to stock", 
                        totalReturnToStock.ToString(), engine.QuantityShipping.ToString()));
                    Console.WriteLine("");
                }
            }
            else
            {
                // The null-conditional operator guards against falling off the end of the chain
                successor?.HandleRequest(engine);
            }

            return engine;
        }
    }
}

Continuing the successor chain, we’ll now implement the concrete handler class for the red engines. If the previous concrete handler can’t handle the request, then the red class will be responsible for determining whether or not it can. As you can see, it follows the exact same structure as the blue engine concrete handler.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using THE_TruckKing.Utilities;
using THE_TruckKing.Interfaces;

namespace THE_TruckKing
{
    /// <summary>
    /// Handles red engine shipments to Chicago, IL
    /// </summary>
    /// 
    public class ShipmentHandlerRed : Handler
    {
        public override IEngine HandleRequest(IEngine engine)
        {
            int totalQuantityShippingHold = 0;
            int totalReturnToStock = 0;

            if (engine.Type == Constants.Engines.RED_ENGINE)
            {
                for (int i = 0; i < engine.QuantityShipping; i++)
                {
                    if (Constants.Engine_MAX_Weights.RED_ENGINE >=
                        ((engine.TotalQuantityShipping + 1) * Constants.Engine_Base_Weights.RED_ENGINE))
                    {
                        engine.TotalQuantityShipping += 1;
                        totalQuantityShippingHold += 1;
                    }
                    else
                    {
                        totalReturnToStock += 1;
                    }
                }

                Console.WriteLine(string.Format("Load {0} red engine(s) on the truck destined for {1}",
                    totalQuantityShippingHold, Constants.Trucks_Destinations.RED_TRUCK));
                Console.WriteLine("");

                if (totalReturnToStock > 0)
                {
                    Console.WriteLine(string.Format("{0} of the {1} yellow engine(s) exceed load capacity. Please return them to stock", 
                        totalReturnToStock.ToString(), engine.QuantityShipping.ToString()));
                    Console.WriteLine("");
                }
            }
            else
            {
                // The null-conditional operator guards against falling off the end of the chain
                successor?.HandleRequest(engine);
            }

            return engine;
        }
    }
}

The final concrete handler in the successor chain handles the workload for yellow engines. If neither the blue concrete handler nor the red concrete handler is able to act upon the contents of the request message, then it’s likely that the yellow concrete handler will be responsible for the work. If not, then the request falls through the end of the chain, and the same IEngine values that were passed in are returned, meaning no work was performed on the request message.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using THE_TruckKing.Utilities;
using THE_TruckKing.Interfaces;

namespace THE_TruckKing
{
    /// <summary>
    /// Handles yellow engine shipments to Des Moines, IA
    /// </summary>
    /// 
    public class ShipmentHandlerYellow : Handler
    {
        public override IEngine HandleRequest(IEngine engine)
        {
            int totalQuantityShippingHold = 0;
            int totalReturnToStock = 0;

            if (engine.Type == Constants.Engines.YELLOW_ENGINE)
            {
                for (int i = 0; i < engine.QuantityShipping; i++)
                {
                    if (Constants.Engine_MAX_Weights.YELLOW_ENGINE >=
                        ((engine.TotalQuantityShipping + 1) * Constants.Engine_Base_Weights.YELLOW_ENGINE))
                    {
                        engine.TotalQuantityShipping += 1;
                        totalQuantityShippingHold += 1;
                    }
                    else
                    {
                        totalReturnToStock += 1;
                    }
                }

                Console.WriteLine(string.Format("Load {0} yellow engine(s) on the truck destined for {1}", 
                    totalQuantityShippingHold, Constants.Trucks_Destinations.YELLOW_TRUCK));
                Console.WriteLine("");

                if (totalReturnToStock > 0)
                {
                    Console.WriteLine(string.Format("{0} of the {1} yellow engine(s) exceed load capacity. Please return them to stock",
                        totalReturnToStock.ToString(), engine.QuantityShipping.ToString()));
                    Console.WriteLine("");
                }
            }
            else
            {
                // The null-conditional operator guards the end of the chain, since this
                // handler typically has no successor
                successor?.HandleRequest(engine);
            }

            return engine;
        }
    }
}


THE IEngine INTERFACE

You’ll notice that each concrete handler inherits from the same Handler class and accepts the same IEngine interface type. This is ultimately what allows the chain-of-responsibility pattern to work. IEngine also implies that we have a concrete class that abides by the tenets of the IEngine interface, so we’ll define the IEngine interface and the concrete Engine type below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace THE_TruckKing.Interfaces
{
    public interface IEngine
    {
        int ID { get; set; }
        string Name { get; set; }
        string Description { get; set; }
        string Type { get; set; }
        string ModelNumber { get; set; }
        int QuantityShipping { get; set; }
        int TotalQuantityShipping { get; set; }
        decimal EngineWeightShipping { get; set; }
        decimal TotalWeightShipped { get; set; }
        decimal BaseWeight { get; set; }
    }
}


THE ENGINE CLASS

The engine class is pretty straightforward. It implements the IEngine interface and provides us with a mechanism in which to store our payload data. Actually, I’m not very fond of including this in my example, because I believe that most of this can be inferred and it unnecessarily bloats the example. However, I’ve included it anyway just to be thorough.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using THE_TruckKing.Interfaces;
using System.Runtime.Serialization;

namespace THE_TruckKing.Entities
{
    [Serializable()]
    public class Engine : IEngine
    {
        private int _ID = 0;

        int IEngine.ID
        {
            get
            {
                return _ID;
            }
            set
            {
                _ID = value;
            }
        }

        private string _Name = string.Empty;

        string IEngine.Name
        {
            get
            {
                return _Name;
            }
            set
            {
                _Name = value;
            }
        }

        private string _Description = string.Empty;

        string IEngine.Description
        {
            get
            {
                return _Description;
            }
            set
            {
                _Description = value;
            }
        }

        private string _Type = string.Empty;

        string IEngine.Type
        {
            get
            {
                return _Type;
            }
            set
            {
                _Type = value;
            }
        }

        private string _ModelNumber = string.Empty;

        string IEngine.ModelNumber
        {
            get
            {
                return _ModelNumber;
            }
            set
            {
                _ModelNumber = value;
            }
        }

        private int _QuantityShipping = 0;

        int IEngine.QuantityShipping
        {
            get
            {
                return _QuantityShipping;
            }
            set
            {
                _QuantityShipping = value;
            }
        }

        private int _TotalQuantityShipping = 0;

        int IEngine.TotalQuantityShipping
        {
            get
            {
                return _TotalQuantityShipping;
            }
            set
            {
                _TotalQuantityShipping = value;
            }
        }

        private decimal _EngineWeightShipping = 0.0m;

        decimal IEngine.EngineWeightShipping
        {
            get
            {
                return _EngineWeightShipping;
            }
            set
            {
                _EngineWeightShipping = value;
            }
        }

        private decimal _TotalWeightShipped = 0.0m;

        decimal IEngine.TotalWeightShipped
        {
            get
            {
                return _TotalWeightShipped;
            }
            set
            {
                _TotalWeightShipped = value;
            }
        }

        private decimal _BaseWeight = 0.0m;

        decimal IEngine.BaseWeight
        {
            get
            {
                return _BaseWeight;
            }
            set
            {
                _BaseWeight = value;
            }
        }
    }
}


THE CONSTANT CLASSES

For purposes of simplicity, my solution doesn’t tie into a database, so I’m inferring all of my references by front-loading the base IEngine types. In reality, you would be much better off storing these values in a data backing store of your choice (e.g. configuration, database, etc.).

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace THE_TruckKing.Utilities
{
    public static class Constants
    {
        public static class Engines
        {
            public const string RED_ENGINE = "R1773456935";
            public const string BLUE_ENGINE = "B8439845841";
            public const string YELLOW_ENGINE = "Y4833345760";
        }

        public static class Engine_Base_Weights
        {
            public const decimal RED_ENGINE = 532;
            public const decimal BLUE_ENGINE = 1386;
            public const decimal YELLOW_ENGINE = 1783;
        }

        public static class Engine_MAX_Weights
        {
            public const decimal RED_ENGINE = 5000;
            public const decimal BLUE_ENGINE = 6500;
            public const decimal YELLOW_ENGINE = 6000;
        }

        public static class Trucks
        {
            public const string RED_TRUCK = "the red truck";
            public const string BLUE_TRUCK = "the blue truck";
            public const string YELLOW_TRUCK = "the yellow truck";
        }

        public static class Trucks_Destinations
        {
            public const string RED_TRUCK = "Chicago, IL";
            public const string BLUE_TRUCK = "Overland Park, KS";
            public const string YELLOW_TRUCK = "Des Moines, IA";
        }
    }
}


IoC THROUGH THE MICROSOFT UNITY FRAMEWORK

There is one more class that I’ll implement in the project, and that’s an Inversion of Control (IoC) pattern using the Microsoft Unity Framework. I’m using Microsoft Unity 3 in this example, which can be downloaded here:

Microsoft Unity Framework 3

Basically, you just download it, or the latest version of it, and unpack it in a directory on your machine. Afterwards, you’ll be able to reference the specific assemblies (e.g. MVC, Service Locator, Dependency Injection, etc.) that exact the actions you desire. Of course, you’ll still have to hand-roll any patterns not provided by the Unity library, but it will still offer you a decent jump start in the areas that the library does specifically address.

In this example, what I’m trying to achieve is this:

  • I want to decouple object dependencies from the main assembly to any client applications, which will allow me to minimize the amount of work necessary to replace or update certain properties of the IEngine and Engine classes without necessarily forcing me to make changes to methods that leverage these classes throughout the solution.
  • I’m assuming that the client application that will eventually consume this assembly will need to know very little about its concrete implementation at compile time, so adding or subtracting certain properties on the interface and its supporting concrete type should pose little or no rework for the client applications.
  • Even though I didn’t follow a TDD approach, I still might want to create unit tests that perform assertions on various parts of the code base in the future, so each class and method must be testable without relying on concrete dependencies.
  • I want to decouple my classes from being responsible for locating and managing the lifetime of their dependencies.

After installing the Microsoft Unity Framework, create references to the following assemblies within the project:
  • Microsoft.Practices.Unity.dll
  • Microsoft.Practices.Unity.Configuration.dll

Next, drop in the following factory pattern to set up the ability to create an instance of the Engine object. Ideally, our client application will simply be able to call the CreateInstance method and pass in the desired engineType; the factory will then Register and Resolve the object through the IoC container, as well as set a few base properties. We’ll achieve this using the Microsoft Unity Framework.


THE IoC CONTAINER FACTORY

Before I focus too deeply on the following code, I want to point out that I’ve hardcoded the classes in order to simplify the readability of the code. Normally the classes would be driven through some form of dynamic configuration. Regardless…

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Entities = THE_TruckKing.Entities;
using THE_TruckKing.Interfaces;
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Configuration;
using THE_TruckKing.Utilities;

namespace THE_TruckKing.Factories
{
    public class Engine
    {
        static public IEngine CreateInstance(string engineType)
        {
            IUnityContainer _container = new UnityContainer();
            _container.RegisterType(typeof(IEngine), typeof(Entities.Engine));
            IEngine obj = _container.Resolve<IEngine>();

            // Sets the base properties for the requested engine type
            return SetValues(obj, engineType);
        }

        static private IEngine SetValues(IEngine engine, string engineType)
        {
            try
            {
                switch (engineType)
                {
                    case Constants.Engines.RED_ENGINE:
                        {
                            engine.Type = Constants.Engines.RED_ENGINE;
                            engine.BaseWeight = Constants.Engine_Base_Weights.RED_ENGINE;
                            break;
                        }
                    case Constants.Engines.BLUE_ENGINE:
                        {
                            engine.Type = Constants.Engines.BLUE_ENGINE;
                            engine.BaseWeight = Constants.Engine_Base_Weights.BLUE_ENGINE;
                            break;
                        }
                    case Constants.Engines.YELLOW_ENGINE:
                        {
                            engine.Type = Constants.Engines.YELLOW_ENGINE;
                            engine.BaseWeight = Constants.Engine_Base_Weights.YELLOW_ENGINE;
                            break;
                        }
                    default:
                        {
                            break;
                        }
                }
                
                return engine;
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}


CREATING THE CHAIN-OF-RESPONSIBILITY COMMANDS

At this point, we’ve coded a textbook Chain-of-Responsibility design pattern. But, in order to complete the pattern we still need to establish the successive order of the handlers in the chain. So, we’ll solve this piece of the puzzle by creating a quick console application that references both the assembly we just created, as well as a couple of the Microsoft Unity Framework assemblies:

  • Microsoft.Practices.Unity.dll
  • Microsoft.Practices.Unity.Configuration.dll

Afterwards, drop the following code in:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using THE_TruckKing;
using THE_TruckKing.Utilities;
using THE_TruckKing.Interfaces;
using THE_TruckKing.Entities;
using Factories = THE_TruckKing.Factories;
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Configuration;


namespace CoR_Pattern_Client
{
    /// <summary>
    /// Program that signifies an engine is ready to be loaded at the dock
    /// </summary>
    class Program
    {
        static void Main(string[] args)
        {
            int x = 0;

            // Calls the Unity IoC Factory Handler to create the instances of the objects
            IEngine redEngine = Factories.Engine.CreateInstance(Constants.Engines.RED_ENGINE);
            IEngine blueEngine = Factories.Engine.CreateInstance(Constants.Engines.BLUE_ENGINE);
            IEngine yellowEngine = Factories.Engine.CreateInstance(Constants.Engines.YELLOW_ENGINE);

            while (x != 999)
            {
                Console.WriteLine("Specify an engine that is ready to be loaded:");
                Console.WriteLine("");
                Console.WriteLine("Press (R) - To Load Red Engine");
                Console.WriteLine("Press (Y) - To Load Yellow Engine");
                Console.WriteLine("Press (B) - To Load Blue Engine");
                Console.WriteLine("Press (Q) - To Quit");

                var input = Console.ReadKey();
                Console.WriteLine("");
                Console.WriteLine("");

                // Completes the Chain-of-Responsibility Pattern
                Handler h1 = new ShipmentHandlerBlue();
                Handler h2 = new ShipmentHandlerYellow();
                Handler h3 = new ShipmentHandlerRed();

                h1.SetSuccessor(h2);
                h2.SetSuccessor(h3);

                switch (input.Key.ToString().ToUpperInvariant())
                {
                    case "R":
                        {
                            redEngine.Type = Constants.Engines.RED_ENGINE;
                            redEngine.QuantityShipping = GetShipmentQuantity();
                            redEngine = h1.HandleRequest(redEngine);
                            break;
                        }
                    case "Y":
                        {
                            yellowEngine.Type = Constants.Engines.YELLOW_ENGINE;
                            yellowEngine.QuantityShipping = GetShipmentQuantity();
                            yellowEngine = h1.HandleRequest(yellowEngine);
                            break;
                        }
                    case "B":
                        {
                            blueEngine.Type = Constants.Engines.BLUE_ENGINE;
                            blueEngine.QuantityShipping = GetShipmentQuantity();
                            blueEngine = h1.HandleRequest(blueEngine);                            
                            break;
                        }
                    case "Q":
                        {
                            Environment.Exit(0);
                            break;
                        }
                    default:
                        {
                            break;
                        }
                }

                Console.ReadLine();
                Console.Clear();
            }
        }

        private static int GetShipmentQuantity()
        {
            int quantity = 0;

            try
            {
                Console.WriteLine("");
                Console.WriteLine("How many engines are you loading?");
                quantity = int.Parse(Console.ReadLine());
                Console.WriteLine("");
                return quantity;
            }
            catch (Exception)
            {
                return 0;
            }
        }
    }
}


FINALLY

Take special note of the following lines of code that exist in the previous codebase. First, we’ll focus on the Microsoft Unity, Inversion of Control aspect of it, which is exhibited in the following lines of code:

  • IEngine redEngine = Factories.Engine.CreateInstance(Constants.Engines.RED_ENGINE);
  • IEngine blueEngine = Factories.Engine.CreateInstance(Constants.Engines.BLUE_ENGINE);
  • IEngine yellowEngine = Factories.Engine.CreateInstance(Constants.Engines.YELLOW_ENGINE);

This is what allows us to decouple our object dependencies from the main assembly to any client applications. The control of each object’s creation is inverted to the Factory inside the assembly, so the client needs to know very little about creating or consuming the object itself. The Microsoft Unity Framework takes care of all of this for you.
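
For instance, if a new concrete engine type were ever introduced, only the registration inside the factory would need to change, and the client applications would continue to compile and run untouched. Here’s a minimal sketch of that idea (the TurboEngine class is hypothetical and exists purely for illustration):

// Hypothetical example: swapping the concrete type behind IEngine.
// Only the factory's registration changes; the client code is unaffected.
IUnityContainer _container = new UnityContainer();
_container.RegisterType(typeof(IEngine), typeof(Entities.TurboEngine)); // TurboEngine is hypothetical
IEngine obj = _container.Resolve<IEngine>();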

The next interesting piece involves closing the gap on the chain-of-responsibility pattern by implementing a definitive successor chain using the concrete handler types. The following lines of code designate that the ShipmentHandlerBlue() concrete handler will receive the initial payload request, and if it can’t handle it then it becomes its responsibility to pass the request message along to the ShipmentHandlerYellow() concrete handler in the chain.

Finally, if that handler can’t fulfill the payload request either, then it passes the responsibility down the chain to the ShipmentHandlerRed() concrete handler for fulfillment. Each concrete handler acts autonomously in the chain, meaning that it doesn’t have to know anything about any other concrete handler, enacting a true separation of concerns and a classic, textbook example of the chain-of-responsibility pattern.

Handler h1 = new ShipmentHandlerBlue();
Handler h2 = new ShipmentHandlerYellow();
Handler h3 = new ShipmentHandlerRed();

h1.SetSuccessor(h2);
h2.SetSuccessor(h3);

When you run the application, you should see the following results:

Image1

Image2

Image3

Thanks for reading and keep on coding! 🙂

The Observer Pattern

Author: Cole Francis, Architect

Click here to download my solution on GitHub

BACKGROUND

If you read my previous article, then you’ll know that it focused on the importance of software design patterns. I called out that there are some architects and developers in the field who are averse to incorporating them into their solutions for a variety of bad reasons. Regardless, even if you try your heart out to intentionally avoid incorporating them into your designs and solutions, the truth of the matter is you’ll eventually use them whether you intend to or not.

A great example of this is the Observer Pattern, which is arguably the most widely used software design pattern in the World. It comes in a number of different styles, with the most popular being the Model-View-Controller (MVC), whereby the View represents the Observer and the Model represents the observable Subject. People occasionally make the mistake of referring to MVC as a design pattern, but it is actually an architectural style built upon the Observer Design Pattern.

The Observer Design Pattern’s taxonomy is categorized in the Behavioral Pattern Genus of the Software Design Pattern Family because of its object-event oriented communication structure, which causes changes in the Subject to be reflected in the Observer. In this respect, the Subject is intentionally kept oblivious, or completely decoupled from the Observer class.

Some people also make the mistake of calling the Observer Pattern the Publish-Subscribe Pattern, but they are actually two distinct patterns that just so happen to share some functional overlap. The significant difference between the two design patterns is that the Observer Pattern “notifies” its Observers directly whenever there’s a change in the observed Subject, whereas the Publish-Subscribe Pattern “broadcasts” notifications to its Subscribers through an intermediary.
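
To make that distinction a little more concrete, here’s a minimal sketch (the class names below are mine and exist purely for illustration): the observable Subject holds direct references to its Observers and notifies them itself, whereas a publisher only hands its message to a broker and never knows who, if anyone, is subscribed.

using System;
using System.Collections.Generic;

// Observer: the Subject knows its Observers and notifies them directly.
class WeatherSubject
{
    private readonly List<Action<int>> _observers = new List<Action<int>>();

    public void Attach(Action<int> observer)
    {
        _observers.Add(observer);
    }

    public void SetTemperature(int value)
    {
        // Direct, targeted notification of each registered Observer
        foreach (var observer in _observers)
        {
            observer(value);
        }
    }
}

// Publish-Subscribe: the publisher only knows the broker, which broadcasts to its Subscribers.
static class Broker
{
    private static readonly Dictionary<string, List<Action<string>>> _subscribers =
        new Dictionary<string, List<Action<string>>>();

    public static void Subscribe(string topic, Action<string> handler)
    {
        if (!_subscribers.ContainsKey(topic))
        {
            _subscribers[topic] = new List<Action<string>>();
        }

        _subscribers[topic].Add(handler);
    }

    public static void Publish(string topic, string message)
    {
        // The publisher never knows who, or how many, are listening
        if (_subscribers.ContainsKey(topic))
        {
            foreach (var handler in _subscribers[topic])
            {
                handler(message);
            }
        }
    }
}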

A COUPLE OF NEGATIVES

As with any software design pattern, there are some cons associated with using the Observer Pattern. For instance, the base implementation of the Observer Pattern calls for a concrete Observer, which isn’t always practical, and it’s certainly not easily extensible. Each time a new Subject is added to the solution, the assembly would have to be rebuilt and redeployed, which is a practice that many larger, bureaucratically-structured companies often frown upon. Given this, I’ll show you how to get around this little nuance later in this article.

Another problem associated with the Observer Pattern involves the potential for memory leaks, which are also referred to as “lapsed listeners” or “latent listeners”. Regardless of what you call it, a memory leak by any other name is still a memory leak. Because explicit registering and unregistering is generally required with this design pattern, if the Subjects aren’t properly unregistered (particularly ones that consume large amounts of memory), then unnecessary memory consumption is certain, as stale Subjects continue to be needlessly observed. This can result in performance degradation. I’ll also explain how you can work around this issue.

OBSERVER DESIGN PATTERN OVERVIEW

Typically, there are three (3) distinct classes that comprise the heart and soul of the Observer design pattern, and they are the Observer class, the Subject class, and the Client (or Program). Beyond this, I’ve seen the pattern implemented in a number of different ways, and asking a roomful of architects how they might go about implementing this design pattern is a lot like asking them how they like their morning eggs. You’ll probably get a variety of different responses back.

However, my implementation of this design pattern typically deviates from the norm because I like to include a fourth class to the mix, called the LifetimeManager class. The purpose of the LifetimeManager class is to allow each Subject class to autonomously maintain its own lifetime, alleviating the need for the client to explicitly call the Unregister() method on the Subject object. It’s not that I don’t want the client program to explicitly call the Subscriber’s Unregister() method, but this cleanup call does occasionally get omitted for whatever reason. So, the inclusion of the LifeTimeManager class provides an additional safeguard to protect us against this. I’ll focus on the LifetimeManager class a little bit later in this article.

Moving on, the Observer design pattern is depicted in the class diagram below. As you can see, the Subject inherits from the LifetimeManager class and implements the ISubject interface, but the client program and the Observer are left decoupled from the Subject. You will also notice that the Subject provides the ability to allow a program to register and unregister a Subject class. By inheriting from the LifetimeManager class, the Subject class now also allows the client to establish specific lifetime requirements for the Subject class, such as whether it uses a basic or sliding expiration, its lifetime in seconds, minutes, hours, days, months, and even years. And, if the developer fails to provide this information through the Subject’s overloaded constructor, then the default constructor provides some default values to make sure the Subject is cleaned up properly.

ClassDiagram2

A MORE DETAILED EXPLANATION OF THE PATTERN

The Subject Class

The Subject class also contains a Change() method that’s exactly like the Register() method. This is something else that’s not normally a part of this design pattern, but I intentionally added this because I don’t think it makes sense to call the Register() method anytime changes are made to the Subject(). I think it makes for a bad developer experience. Instead, registering the Subject object once and then calling the Change() method anytime there are changes to the Subject object makes much more sense in my opinion. We can impose the cleanup work upon the Observer class each time the Subject object is changed.

The Observer Class

The Observer class includes an Update() method, which accepts a Subject object and the operation the Observer class needs to perform on that Subject. For instance, if there’s an add or an update to the Subject object, then the Observer searches through its observed Subject cache to find it using its unique SubscriptionId and CacheId. If the Subject exists in the cache, then the Observer updates it by deleting the old Subject and adding the new one. If it doesn’t find it in the Subject cache, then it simply adds it. The Observer also accepts a remove action, which causes it to remove the Subject from its observed state.

The Client Program

The only other important element to remember is that anytime an action takes place, notifications are always propagated back to the client program so that it’s always aware of what’s going on behind the scenes. When the Subject is registered, the client program is notified; when the Subject is unregistered, the client program is notified; when the observed Subject’s data changes, the client program is notified. One of the important tenets of this design pattern is that the client program is always kept aware of any changes that occur to its observed Subjects.

The LifetimeManager Class

The LifetimeManager class, which is my own creation, is responsible for maintaining the lifetime of each Subject object that gets created. So, for every Subject object that gets created, a LifetimeManager also gets created. The LifetimeManager class includes a variety of malleable properties, which I’ll go over shortly. Also keep in mind that these properties get set by the default constructor of the Subject class, and they can be overridden when the Subject object first gets created by passing the override values into an overloaded constructor that I provide in my design, or by simply changing any one of the LifetimeManager class’s property values and then calling the Subject’s Change() method. It’s really as simple as that. Nevertheless, here are the supported properties that make up the LifetimeManager class:

1. ExpirationType: Tells the system whether the expiration type is either basic or sliding. Basic expiration means that the Subject expires at a specific point in time. Sliding expiration simply means that anytime a change is made to the Subject, the expiration slides based upon the values you provide.

2. ExpirationValue: This is an integer that is relative to the next property, TimePrecision.

3. TimePrecision: This is an enumeration that includes specific time intervals, like seconds, minutes, hours, days, months, and even years. So, if I provide a 30 for ExpirationValue and enumTimePrecision.Minutes for TimePrecision, then this means that I want my data cache to automatically expire, and hence self-unregister, in 30 minutes. What’s more, if you fail to provide these values at the time you Register() your Subject, then I default them for you in the default constructor of the Subject class. A quick example follows this list.
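
For example, assuming the Subject class shown later in this article and the Enums values it consumes, registering a Subject with a 30-minute sliding expiration might look like this:

// A Subject that self-unregisters 30 minutes after its last change (sliding expiration)
Subject subject = new Subject(Enums.enumExpirationType.Sliding, 30, Enums.enumTimePrecision.Minutes);
subject.SubscriptionId = "1";
subject.CacheId = "42";
subject.CacheData = "Some observed data";

// The client defines its own callback for change notifications
subject.NotifyChanged += (notifyInfo, action) => { /* react to the change here */ };
subject.Register();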

So, now that you have an overview and visual understanding of the Observer pattern class structure and relationships, I’ll now spend a little time going over my implementation of the pattern by sharing my working source code with you. My intention is that you can use my source code to get your very own working model up and running. This will allow you to experiment with the pattern on your own. It would also be nice to get some feedback regarding how well you think my custom LifetimeManager class helps to avoid unwanted memory leaks by providing each Subject class with the ability to maintain its own lifetime.

THE OBSERVER CLASS SOURCE CODE

For the most part, it’s the responsibility of the Observer class to perform update operations on a given Subject when requested. Furthermore, the Observer class should respect and observe any changes to the stored Subject’s lifecycle until the Subject requests the Observer to unregister it. Here’s my working example of the Observer class:


using System;
using System.Collections.Generic;


namespace ObserverClient
{
    /// <summary>
    /// The static Observer class
    /// </summary>
    public static class Observer
    {
        #region Member Variables

        /// <summary>
        /// The global data cache
        /// </summary>
        private static List<LifetimeManager> _data = new List<LifetimeManager>();

        /// <summary>
        /// Synchronizes access to the global data cache
        /// </summary>
        private static readonly object _syncRoot = new object();

        #endregion

        #region Methods

        /// <summary>
        /// Provides CRUD operations on the global cache object
        /// </summary>
        internal static bool Update(LifetimeManager data, Enums.enumSubjectAction action)
        {
            try
            {
                // This locks the critical section, just in case a timer event fires at the same
                // time the main thread's operation is in action.
                lock (_syncRoot)
                {
                    switch (action)
                    {
                        case Enums.enumSubjectAction.AddChange:
                            {
                                // Finds the original object and removes it, and then it re-adds it to the list
                                _data.RemoveAll(a => Equals(a.SubscriptionId, data.SubscriptionId) && Equals(a.CacheId, data.CacheId));
                                _data.Add(data);
                                break;
                            }
                        case Enums.enumSubjectAction.RemoveChild:
                            {
                                // Finds the entry in the list and removes it
                                _data.RemoveAll(a => Equals(a.SubscriptionId, data.SubscriptionId) && Equals(a.CacheId, data.CacheId));
                                break;
                            }
                        case Enums.enumSubjectAction.RemoveParent:
                            {
                                // Finds the entry in the list and removes it
                                _data.RemoveAll(a => Equals(a.SubscriptionId, data.SubscriptionId));
                                break;
                            }
                        default:
                            {
                                // This is useless
                                break;
                            }
                    }

                    return true;
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        #endregion
    }
}

THE SUBJECT CLASS SOURCE CODE

Once again, the intent of the Subject class is to expose methods to the client that allow for the registering and unregistering of the observable Subject. It’s the responsibility of the Subject to call the Observer class’s Update() method and request that specific actions be taken on it (e.g. add or remove).

In my code example below, the Observer class acts as a storage cache for observed Subjects, and it also provides some basic operations necessary to adequately maintain the observed Subjects.

As a side note, take a look at the default and overloaded constructors in the Subject class, below. It’s in these two areas of the Subject object that I either automatically control or allow the developer to override the Subject’s lifetime. Once the lifetime of the Subject object expires, then it is unregistered in the Observer and the client program is then automatically notified that the subject was removed from observation.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using ObserverClient.Interface;


namespace ObserverClient
{
    /// <summary>
    /// This is the Subject class, which provides the ability to register and unregister an object.
    /// </summary>
    public class Subject : LifetimeManager, ISubject
    {
        #region Events

        /// <summary>
        /// Handles the change notification event
        /// </summary>
        public event NotifyChangeEventHandler NotifyChanged;

        #endregion
        
        #region Methods

        /// <summary>
        /// The delegate for the NotifyChanged event
        /// </summary>
        public delegate void NotifyChangeEventHandler(Subject notifyInfo, Enums.enumSubjectAction action);

        /// <summary>
        /// The register method.  This adds the entry and data to the Observer's data cache
        /// and then provides notification of the event to the caller if it's successfully added.
        /// </summary>
        public void Register()
        {
            try
            {
                if (Observer.Update(this, Enums.enumSubjectAction.AddChange) && this.NotifyChanged != null)
                {
                    this.NotifyChanged(this, Enums.enumSubjectAction.AddChange);
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        /// <summary>
        /// The unregister method.  This removes the entry and data in the Observer's data cache
        /// and then provides notification of the event to the caller if it's successfully removed.
        /// </summary>
        public void Unregister()
        {
            try
            {
                if (this.SubscriptionId != null && this.CacheId == null)
                {
                    Observer.Update(this, Enums.enumSubjectAction.RemoveParent);

                    if (this.NotifyChanged != null)
                    {
                        this.NotifyChanged(this, Enums.enumSubjectAction.RemoveParent);
                    }
                }
                else if (this.SubscriptionId != null && this.CacheId != null)
                {
                    Observer.Update(this, Enums.enumSubjectAction.RemoveChild);

                    if (this.NotifyChanged != null)
                    {
                        this.NotifyChanged(this, Enums.enumSubjectAction.RemoveChild);
                    }
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        /// <summary>
        /// The change method.  This modifies the entry and data in the Observer's data cache
        /// and then provides notification of the event to the caller if successful.
        /// </summary>
        public void Change()
        {
            try
            {
                if (Observer.Update(this, Enums.enumSubjectAction.AddChange))
                {
                    if (this.ExpirationType == Enums.enumExpirationType.Sliding)
                    {
                        this.ExpirationStart = DateTime.Now;
                        this.MonitorExpiration();
                    }

                    if (this.NotifyChanged != null)
                    {
                        this.NotifyChanged(this, Enums.enumSubjectAction.AddChange);
                    }
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        /// <summary>
        /// The event handler for object expiration notifications. It calls unregister for the current object.
        /// </summary>
        void s_ExpiredUnregisterNow()
        {
            // Unregisters itself
            this.Unregister();
        }

        #endregion

        #region Constructor(s)

        /// <summary>
        /// The Subject's default constructor (i.e. all the values relating to cache expiration are defaulted to 1 minute).
        /// </summary>
        public Subject()
        {
            this.ExpirationType = Enums.enumExpirationType.Basic;
            this.ExpirationValue = 1;
            this.TimePrecision = Enums.enumTimePrecision.Minutes;
            this.ExpirationStart = DateTime.Now;

            this.NotifyObjectExpired += s_ExpiredUnregisterNow;
            this.MonitorExpiration();
        }

        /// <summary>
        /// The overloaded Subject constructor
        /// </summary>
        public Subject(Enums.enumExpirationType expirationType, int expirationValue, Enums.enumTimePrecision timePrecision)
        {
            this.ExpirationType = expirationType;
            this.ExpirationValue = expirationValue;
            this.TimePrecision = timePrecision;
            this.ExpirationStart = DateTime.Now;

            this.NotifyObjectExpired += s_ExpiredUnregisterNow;
            this.MonitorExpiration();
        }

        #endregion
    }
}

THE ISUBJECT INTERFACE

The ISubject interface merely defines the contract for creating Subject objects. Because the Subject class implements the ISubject interface, it’s obligated to include the ISubject interface’s properties and methods. These tenets keep all Subject objects consistent.


using System;
using System.Collections.Generic;


namespace ObserverClient.Interface
{
    /// <summary>
    /// This is the Subject Interface
    /// </summary>
    public interface ISubject
    {
        #region Interface Operations

        object SubscriptionId { get; set; }
        object CacheId { get; set; }
        object CacheData { get; set; }
        int ExpirationValue { get; set; }
        Enums.enumTimePrecision TimePrecision { get; set; }
        DateTime ExpirationStart { get; set; }
        Enums.enumExpirationType ExpirationType { get; set; }
        void Register();
        void Unregister();

        #endregion
    }
}

THE CLIENT PROGRAM SOURCE CODE

It’s the responsibility of the Client to call the register, unregister, and change methods on the Subject objects, whenever applicable. The client can also control the lifetime of each Subject object it invokes by overriding the default properties that are set in the Subject’s default constructor. A developer can do this by either injecting the overridden property values into the Subject’s overloaded constructor, or by simply typing in new lifetime property values on the Subject object and then calling the Subject object’s Change() method.

There’s one final note here, and that is that the callback methods are defined by the client program. You’ll see evidence of this in the source code below, on the lines that read subject1.NotifyChanged += …your defined method here. This makes it completely flexible, because multiple Subject objects can either share the same notification callback method in the client program, or each instance can define its own.

Also, because the Subject object is generic, I don’t need to implement concrete Subject objects, and they can be defined on-the-fly. This means that I don’t need to redeploy the Observer assembly each time I add a new Subject, which eliminates the other negative that’s typically associated with the Observer design pattern.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Net.NetworkInformation;
using System.Collections;


namespace ObserverClient
{
    class Program
    {
        /// <summary>
        /// The main entry point into the application
        /// </summary>
        static void Main(string[] args)
        {
            // Register subject 1
            Subject subject1 = new Subject { SubscriptionId = "1", CacheId = "1", CacheData = "1" };
            // Tie the following event handler to any notifications received on this particular subject
            subject1.NotifyChanged += s_testCacheObserver1NotifyChanged_One;
            subject1.ExpirationType = Enums.enumExpirationType.Sliding;
            subject1.Register();

            // Register subject 2
            Subject subject2 = new Subject { SubscriptionId = "1", CacheId = "2", CacheData = "2" };
            // Tie the following event handler to any notifications received on this particular subject
            subject2.NotifyChanged += s_testCacheObserver1NotifyChanged_One;
            subject2.Register();

            // Register subject 3
            Subject subject3 = new Subject { SubscriptionId = "1", CacheId = "1", CacheData = "Boom!" };
            // Tie the following event handler to any notifications received on this particular subject
            subject3.NotifyChanged += s_testCacheObserver1NotifyChanged_Two;
            subject3.Change();

            // Unregister subject 2. Only subject 2's notification event should fire and the
            // notification should be specific about the operations taken on it
            subject2.Unregister();

            // Change subject 1's data.  Only subject 1's notification event should fire and the
            // notification should be specific about the operations taken on it
            subject1.CacheData = "Change Me";
            subject1.Change();

            // Hang out and let the system clean up after itself.  Events should only fire for those
            // objects that are self-unregistered.  The system is capable of maintaining itself.
            Console.ReadKey();
        }

        /// <summary>
        /// Notifications are received from the Subject whenever changes have occurred.
        /// </summary>
        static void s_testCacheObserver1NotifyChanged_One(Subject notifyInfo, Enums.enumSubjectAction action)
        {
            var data = notifyInfo;
        }

        /// <summary>
        /// Notifications are received from the Subject whenever changes have occurred.
        /// </summary>
        static void s_testCacheObserver1NotifyChanged_Two(Subject notifyInfo, Enums.enumSubjectAction action)
        {
            var data = notifyInfo;
        }
    }
}

THE LIFETIME MANAGER CLASS SOURCE CODE

Again, the LifetimeManager class is my own creation. The goal of this class, which I’ve already mentioned a couple of times in this article, is to supply default properties that allow the Subject to maintain its own lifetime without the Unregister() method having to be called explicitly by the client program.

So, while I still believe it’s imperative that the client program explicitly call the Subject object’s Unregister() method, it’s comforting knowing there’s a backup plan in place if for some reason that doesn’t happen.

I’ve also included all of the granular lifetime options in the source code. As you can see for yourself, the code currently accepts anything from milliseconds to years, and everything in between (light-years would have been really cool). I could have made it even more granular, but I can’t imagine anyone registering and unregistering an observed Subject for less than a millisecond. Likewise, I can’t imagine anyone storing an observed Subject for as long as a year, even though I’ve created this implementation to observe Subject objects for as long as ±1.7 × 10^308 years. That seems sufficient, don’t you think? 🙂


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Timers;
using System.Globalization;

namespace ObserverClient
{
    /// <summary>
    /// The LifetimeManager class provides additional operations that the Subject class
    /// should be aware of but fall outside its immediate scope of attention.
    /// </summary>
    public class LifetimeManager
    {
        #region Member Variables

        /// <summary>
        /// The timer that tracks the Subject's time to live
        /// </summary>
        private Timer timer = new Timer();

        #endregion

        #region Properties

        public object SubscriptionId { get; set; }
        public object CacheId { get; set; }
        public object CacheData { get; set; }
        public int ExpirationValue { get; set; }
        public Enums.enumExpirationType ExpirationType { get; set; }
        public Enums.enumTimePrecision TimePrecision { get; set; }
        public DateTime ExpirationStart { get; set; }

        #endregion

        #region Methods

        /// <summary>
        /// Fires when the object's time to live has expired
        /// </summary>
        void s_TimeHasExpired(object sender, ElapsedEventArgs e)
        {
            // Delete the Observer cache entry and notify the caller
            NotifyObjectExpired();
        }

        /// <summary>
        /// The delegate for the NotifyObjectExpired event
        /// </summary>
        public delegate void NotifyObjectExpiredHandler();

        /// <summary>
        /// Provides expiration monitoring capabilities for itself (self-maintained expiration)
        /// </summary>
        internal void MonitorExpiration()
        {
            double milliseconds = 0;

            switch (this.TimePrecision)
            {
                case Enums.enumTimePrecision.Milliseconds:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddMilliseconds(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Seconds:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddSeconds(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Minutes:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddMinutes(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Hours:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddHours(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Days:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddDays(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Months:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddMonths(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Years:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddYears(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                default:
                    {
                        break;
                    }

            }

            // Recreate the timer using the newly calculated interval.  The previous timer
            // is stopped and disposed first so that stale expirations can't fire twice.
            if (milliseconds != 0)
            {
                timer.Stop();
                timer.Dispose();
                timer = new Timer(Math.Abs(milliseconds));
                timer.Elapsed += new ElapsedEventHandler(s_TimeHasExpired);
                timer.Enabled = true;
            }
        }

        #endregion
        
        #region Events

        /// <summary>
        /// The object expiration notification event
        /// </summary>
        public event NotifyObjectExpiredHandler NotifyObjectExpired;

        #endregion        
    }
}

WRAPPING THINGS UP

Well, that’s the Observer design pattern in a nutshell. I’ve even addressed the negatives associated with the design pattern. First, I overcame the “memory leak” issue by creating and tying a configurable LifetimeManager class to the Subject object, which makes sure the Unregister() method always gets called, regardless. Secondly, because I keep the Subject generic and the Observer static, my design only requires one concrete Observer for all Subjects. I’ve also provided you with a Subscription-based model that will allow each Subscriber to observe one or more Subjects in a highly configurable manner. So, I believe that I’ve covered all the bases here…and hopefully then some.

Feel free to stand the example up for yourself. I think I’ve provided you with all the code you need, except for the Enumeration class, which most of you will be able to quickly figure out for yourselves (see the sketch below). Anyway, test drive it if you’d like and let me know what you think. I’m particularly interested in what you think about the inclusion of the LifetimeManager class. All comments and questions are always welcome.
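
With that said, here’s a minimal sketch of what that Enumeration class likely looks like, inferred entirely from the values that the code above consumes; treat it as a starting point rather than the definitive version:

using System;

namespace ObserverClient
{
    /// <summary>
    /// The enumerations consumed by the Observer, Subject, and LifetimeManager classes
    /// </summary>
    public static class Enums
    {
        public enum enumSubjectAction
        {
            AddChange,      // Add a new Subject or change an existing one
            RemoveChild,    // Remove a single cached entry (SubscriptionId + CacheId)
            RemoveParent    // Remove every cached entry for a SubscriptionId
        }

        public enum enumExpirationType
        {
            Basic,          // Expires at a fixed point in time
            Sliding         // Expiration resets whenever the Subject changes
        }

        public enum enumTimePrecision
        {
            Milliseconds,
            Seconds,
            Minutes,
            Hours,
            Days,
            Months,
            Years
        }
    }
}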

Thanks for reading and keep on coding! 🙂

Cross-Process Memory Maps

Author: Cole Francis, Architect

BACKGROUND PROBLEM

Many moons ago, I was approached by a client about the possibility of injecting a COM wrapped .NET assembly between two configurable COM objects that communicated with one another. The basic idea was that a hardware peripheral would make a request through one COM object, and then that request would be intercepted by my new COM object, which would then prioritize the hardware object’s data in a cross-process, global singleton. From there, any request initiated by a peripheral would be reordered using the values persisted in my object.

Unfortunately, the solution became infinitely more complex when I learned that peripheral requests could originate from software running in different processes on the same machine. My first attempt involved building an out-of-process storage cache used to update and retrieve data as needed. Although it all seemed perfectly logical, it lacked the split-second processing speed that the client was looking for. So, next I tried reading and writing data to shared files on the local file system. This also worked but lacked split-second processing capabilities. As a result, I went back and forth to the drawing board before finally implementing a global singleton COM object that met the client’s needs (yes, I know it’s an anti-pattern…but it worked!).

Needless to say, the outcome was a rather bulky solution, as the intermediate layer of software I wrote had to play nicely with COM objects that it was never intended to live between, as well as adhere to specific IDispatch interfaces that weren’t very well documented, and it reacted to functionality that at times seemed random. Although the effort was considered highly successful, development was also very tedious and came at a price…namely my sanity. Looking back on everything well over a decade later and applying the knowledge that I possess today, I definitely would have implemented a much more elegant solution using an API stack that I’ll go over in just a minute.

As for now, let’s switch gears and discuss something that probably seems completely unrelated to the topic at hand, and that is memory functions. Yes, that’s right…I said memory functions. It’s my belief that when most developers think of storing objects and data in memory, two memory functions immediately come to mind, namely the Heap and Virtual memory (explained below). While these are great mechanisms for managing objects and data internal to a single process, neither of these memory-based storage facilities can be leveraged across multiple processes without employing some sort of out-of-process mechanism to persist and share the data.

    1) Heap Memory Functions: Represent instances of a class or an array. This type of memory isn’t immediately returned when a method completes its scope of execution. Instead, Heap memory is reclaimed whenever the .NET garbage collector decides that the object is no longer needed.

    2) Virtual Memory Functions: Represent value types, also known as primitives, which reside on the Stack. Any memory allocated to virtual memory is immediately returned whenever the method’s scope of execution completes. Using the Stack is obviously more efficient than using the Heap, but the limited lifetime of value types makes them implausible candidates for sharing data between different classes…let alone between different processes. A quick illustration follows.
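
Here’s what that difference looks like in code (a minimal, self-contained sketch):

using System.Collections.Generic;

class MemoryExample
{
    void Demonstrate()
    {
        // Value type: lives on the Stack inside this method's frame and is
        // reclaimed the instant the method returns.
        int counter = 42;

        // Reference type: the variable holds a reference, but the object itself
        // lives on the Heap until the garbage collector reclaims it.
        List<string> names = new List<string> { "lives", "on", "the", "heap" };
    }
}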

BRING ON MEMORY MAPPING

While most developers focus predominantly on managing Heap and Virtual memory within their applications, there are also a few other memory options out there that sometimes go unrecognized, including “Local, Global Memory”, “CRT Memory”, “NT Virtual Memory”, and finally “Memory-Mapped Files”. Due to the nature of our subject matter, this article will concentrate solely on “Memory-Mapped Files” (highlighted in orange in the pictorial below).

MemoryMappedFiles

To break it down into layman’s terms, a memory-mapped file allows you to reserve an address space and then commit physical storage to that region. In turn, the physical storage stems from a file that is already on the disk instead of the memory manager, which offers two notable advantages:

    1) Advantage #1 – Accessing data files on the hard drive without taking the I/O performance hit due to the buffering of the file’s content, making it ideal to use with large data files.

    2) Advantage #2 – Memory-mapped files provide the ability to share the same data with multiple processes running on the same machine.

Make no mistake about it, memory-mapped files are the most efficient way for multiple processes running on a single machine to communicate with one another! In fact, they are often used as process loaders in many commonly used operating systems today, including Microsoft Windows. Basically, whenever a process is started the operating system accesses a memory-mapped file in order to execute the application. Anyway, now that you know this little tidbit of information, you should also know that there are two types of memory-mapped files, including:

    1) Persisted memory-mapped files: These are memory-mapped files that are backed by an actual source file on the hard drive. After a process is done working on a piece of data, the data remains in the named file on disk, where it can be shared between multiple processes. These files are extremely suitable for working with large amounts of data.

    2) Non-persisted memory-mapped files: These are memory-mapped files that aren’t backed by a file on disk; when the last process finishes working with one, its data is gone. Although they can be shared between two or more disparate threads operating within a single process, giving one a name also allows it to be opened by other processes on the same machine, which is exactly the flavor my example below uses. A brief sketch of both flavors follows this list.
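
In .NET, that distinction maps directly onto the MemoryMappedFile factory methods. Here’s a minimal sketch (the file path and map name below are illustrative, and the file passed to CreateFromFile must already exist):

using System.IO.MemoryMappedFiles;

class MemoryMapExamples
{
    void Demonstrate()
    {
        // Persisted: backed by an actual file on disk, so the data survives the process
        using (MemoryMappedFile persisted = MemoryMappedFile.CreateFromFile(@"C:\Temp\BigData.bin"))
        {
            // Work with the file's contents through a view accessor or view stream...
        }

        // Non-persisted: backed only by system memory. The name lets other processes
        // on the same machine open it, but the data vanishes once the last process lets go.
        using (MemoryMappedFile shared = MemoryMappedFile.CreateOrOpen("MySharedMap", 4096))
        {
            // Exchange data between processes through a view accessor...
        }
    }
}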

I’ve put together a working example that showcases the capabilities of named memory-mapped files for demonstration purposes. As a precursor, the example depicts mutually exclusive thoughts conjured up by the left and right halves of the brain. Each thought lives and gets processed in its own thread, which in turn gets routed to the cerebral cortex for thought processing. Inside the cerebral cortex, short-term and long-term thoughts get stored and retrieved in a memory-mapped file whose availability is managed by a mutex.

    A mutex is an object that allows multiple program threads to synchronously share the same resource, such as file access. A mutex created with a name can be shared across processes, which is what allows it to guard a memory-mapped file that multiple processes use; an unnamed mutex only synchronizes threads within a single process. A quick illustration follows.
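
In code, the difference boils down to nothing more than the name argument (the mutex name below is illustrative):

using System.Threading;

class MutexExamples
{
    // Named mutex: visible to every process on the machine, so it can guard a
    // named memory-mapped file that multiple processes share.
    Mutex crossProcess = new Mutex(false, "MySharedMapMutex");

    // Unnamed mutex: only synchronizes threads within the current process.
    Mutex inProcess = new Mutex(false);
}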

In addition to this, I’ve also assembled another application that runs as a completely different process on the same physical machine but is still able to read and write to the named memory-mapped file created by the first application. So, let’s get started!

APPLYING VIRTUAL MEMORY TO HUMAN MEMORY

In the code below, I’ve created a console application that references two objects in Heap memory: LeftBrain.Stimuli() and RightBrain.Stimuli(). I’ve accounted for asynchronous thought processes stemming from the left and right halves of the brain by employing Parallel.Invoke(), which kicks off two blocking sub-threads (i.e. one for the left half of the brain and the other for the right half of the brain). Once the sub-threads are kicked off, the primary thread blocks any further operations until the sub-threads complete their operations and return (or optionally error out):


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using LeftBrain;
using RightBrain;

namespace LeftBrainRightBrain
{
    class Program
    {
        /// <summary>
        /// The main entry point into the application
        /// </summary>
        static void Main(string[] args)
        {
            // Performs an asynchronous operation on both the left and right halves of the brain
            try
            {
                LeftBrain.Stimuli leftBrainStimuli = new LeftBrain.Stimuli();
                RightBrain.Stimuli rightBrainStimuli = new RightBrain.Stimuli();

                // Invoke a blocking, parallel process
                Parallel.Invoke(() =>
                {
                    leftBrainStimuli.Think();
                }, () =>
                {
                    rightBrainStimuli.Think();
                });

                Console.ReadKey();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}

At this point, each asynchronous sub-thread calls its respective Stimuli() class. It should be obvious that both the LeftBrain() and RightBrain() objects are fundamentally similar in nature, and they therefore share interfaces and inherit from the same base class. The only significant differences are the types of thoughts they invoke and the extra second I added to the Sleep() invocation in the RightBrain() class, simply to show some variance in the manner in which the threads are able to process.

Nevertheless, each thought lives in its own isolated thread (making them sub-sub threads) that passes information along to the Cerebral Cortex for data committal and retrieval. Here is an example of the LeftBrain() class and its thoughts:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using CerebralCortex;

namespace LeftBrain 
{
    /// <summary>
    /// Stimulations from the left half of the brain
    /// </summary>
    public class Stimuli : Memory, IStimuli
    {
        /// <summary>
        /// Stores memories in a list
        /// </summary>
        private List<string> memories = new List<string>();

        /// <summary>
        /// Invokes the left brain's thought process
        /// </summary>
        public void Think()
        {
            try
            {
                string threadName = string.Empty;
                int threadCounter = 0;

                // Add a list of left brain memories
                memories.Add("The area of a circle is π r squared.");
                memories.Add("The Law of Inertia is Isaac Newton's first law.");
                memories.Add("Richard Feynman was a physicist known for his theories on quantum mechanics.");
                memories.Add("y = mx + b is the equation of a Line, standard form and point-slope.");
                memories.Add("A hypotenuse is the longest side of a right triangle.");
                memories.Add("A chord represents a line segment within a circle that touches 2 points on the circle.");
                memories.Add("Max Planck's quantum mechanical theory suggests that each energy element is proportional to its frequency.");
                memories.Add("A geometry proof is a written account of the complete thought process that is used to reach a conclusion");
                memories.Add("Pythagorean theorem is a relation in Euclidean geometry among the three sides of a right triangle.");
                memories.Add("A proof of Descartes' Rule for polynomials of arbitrary degree can be carried out by induction.");

                // Recount your memories
                memories.ForEach(memory =>
                {
                    this.ProcessThought(string.Format("Thread: {0} (Left Brain)", threadCounter += 1), memory);
                });
            }
            catch (Exception)
            {
                
                throw;
            }
        }

        /// <summary>
        /// Controls the thought process for this half of the brain
        /// </summary>
        public void ProcessThought(string threadName, string memory)
        {
            try
            {
                Thread.Sleep(3000);
                Thread monitorThread = null;

                // Spin up a new thread delegate to invoke the thought process
                monitorThread = new Thread(delegate()
                {
                    base.InvokeThoughtProcess(threadName, memory);
                });

                // Name the thread and start it
                monitorThread.Name = threadName;
                monitorThread.Start();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}

Likewise, shown below is an example of the RightBrain() class and its thoughts. Once again, the RightBrain() differs from the LeftBrain() mainly in terms of the types of thoughts that get invoked, with the left half of the brain’s thoughts being more cognitive in nature and the right half of the brain’s thoughts being more artistic:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using CerebralCortex;

namespace RightBrain
{
    /// <summary>
    /// Stimulations from the right half of the brain
    /// </summary>
    public class Stimuli : Memory, IStimuli
    {
        /// <summary>
        /// Stores memories in a list
        /// </summary>
        private List<string> memories = new List<string>();

        /// <summary>
        /// Invokes the right brain's thought process
        /// </summary>
        public void Think()
        {
            try
            {
                string threadName = string.Empty;
                int threadCounter = 0;

                // Add a list of right brain memories
                memories.Add("I wonder if there's a Van Gough Documentary on Television?");
                memories.Add("Isn't the color blue simply radical.");
                memories.Add("Why don't you just drop everything and hitch a ride to California, dude?");
                memories.Add("Wouldn't it be cool to be a shark?");
                memories.Add("This World really is my oyster.  Now, if only I had some cocktail sauce...");
                memories.Add("Why did I stop finger painting?");
                memories.Add("Does anyone want to go to a BBQ?");
                memories.Add("Earth tones are the best.");
                memories.Add("Heavy metal bands rock!");
                memories.Add("I like really shiny thingys.  Oh, Look!  A shiny thingy...");

                // Recount your memories
                memories.ForEach(memory =>
                {
                    this.ProcessThought(string.Format("Thread: {0} (Right Brain)", threadCounter += 1), memory);
                });
            }
            catch (Exception)
            {
                
                throw;
            }
        }

        /// <summary>
        /// Controls the thought process for this half of the brain
        /// </summary>
        public void ProcessThought(string threadName, string memory)
        {
            try
            {
                Thread.Sleep(4000);
                Thread monitorThread = null;

                // Spin up a new thread delegate to invoke the thought process
                monitorThread = new Thread(delegate()
                {
                    base.InvokeThoughtProcess(threadName, memory);
                });

                // Name the thread and start it
                monitorThread.Name = threadName;
                monitorThread.Start();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}

Regardless, the thread delegates spawned in the LeftBrain and RightBrain Stimuli() classes are responsible for contributing to short-term memory, as each thread commits its discrete memory item to a growing list of memories. Each thread is also responsible for negotiating with the mutex in order to access the critical section of the code, where thread-safety becomes absolutely imperative as the individual threads add their messages to the global memory-mapped file.

After each thread writes its memory to the memory-mapped file in the critical section of the code, it then releases the mutex and allows the next sequential thread to lock the mutex and safely enter into the critical section of the code. This behavior repeats itself until all of the threads have exhausted their discrete units-of-work and safely rejoin the hive in their respective hemispheres of the brain. Once all processing completes, the block is then lifted by the primary thread and normal processing continues.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Runtime.InteropServices;
using System.Xml.Serialization;

namespace CerebralCortex
{
    /// <summary>
    /// The common area of the brain that controls thought
    /// </summary>
    public class Memory
    {
        /// <summary>
        /// The named (cross-process) mutex and the memory-mapped file
        /// </summary>
        Mutex localMutex = new Mutex(false, "CCFMutex");
        MemoryMappedFile memoryMap = null;

        /// <summary>
        /// Shared memory between the left and right halves of the brain
        /// </summary>
        static List<string> Thoughts = new List<string>();

        /// <summary>
        /// Stores a thought in memory
        /// </summary>
        private bool StoreThought(string threadName, string thought)
        {
            bool retVal = false;

            try
            {
                Thoughts.Add(string.Concat(threadName, " says: ", thought));
                retVal = true;
            }
            catch (Exception)
            {
                
                throw;
            }

            return retVal;
        }

        /// <summary>
        /// Retrieves a thought from memory
        /// </summary>
        private string RetrieveFromShortTermMemory()
        {
            try
            {
                // Returns the last stored thought (simulates short-term memory)
                return Thoughts.Last();
            }
            catch (Exception)
            {
                throw;
            }
        }

        /// <summary>
        /// Invokes the thought process (uses a local mutex to control thread access inside the same process)
        /// </summary>
        /// <param name="threadName">The name of the calling thread</param>
        /// <param name="thought">The thought to commit to memory</param>
        /// <returns>True once the thought has been committed</returns>
        public bool InvokeThoughtProcess(string threadName, string thought)
        {
            try
            {
                // *** CRITICAL SECTION REQUIRING THREAD-SAFE OPERATIONS ***
                {

                    // Causes the thread to wait until the previous thread releases
                    localMutex.WaitOne();

                    // Store the thought
                    StoreThought(threadName, thought);

                    // Create or open the cross-process capable memory map and write data to it
                    memoryMap = MemoryMappedFile.CreateOrOpen("CCFMemoryMap", 2000);

                    // Serialize the thoughts as a single pipe-delimited ASCII payload
                    byte[] Buffer = ASCIIEncoding.ASCII.GetBytes(string.Join("|", Thoughts));
                    MemoryMappedViewAccessor accessor = memoryMap.CreateViewAccessor();

                    // Write a ushort length prefix at offset 54, followed by the payload at offset 56
                    accessor.Write(54, (ushort)Buffer.Length);
                    accessor.WriteArray(54 + 2, Buffer, 0, Buffer.Length);

                    // Conjures the thought back up
                    Console.WriteLine(RetrieveFromShortTermMemory());
                }
                return true;
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                // Releases the lock on the critical section of the code
                localMutex.ReleaseMutex();
            }
        }
    }
}
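
A couple of implementation notes on the listing above: the offset (54) and the 2,000-byte capacity are arbitrary values for this demonstration; the only hard requirement is that the writer and the reader agree on the same layout. Also, MemoryMappedViewAccessor implements IDisposable, so production code would likely wrap the accessor in a using block to release the view handle deterministically. Here’s a minimal sketch of the same length-prefixed write with that cleanup applied (assuming the same map, offsets, and Thoughts collection as above):

using (MemoryMappedViewAccessor accessor = memoryMap.CreateViewAccessor())
{
    // Write a ushort length prefix at offset 54, followed by the ASCII payload at offset 56
    byte[] buffer = ASCIIEncoding.ASCII.GetBytes(string.Join("|", Thoughts));
    accessor.Write(54, (ushort)buffer.Length);
    accessor.WriteArray(54 + 2, buffer, 0, buffer.Length);
}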

With the major portions of the code complete, I am now able to run the application and watch the threads add their memories to the list of memories in the memory-mapped file via the critical section of the cerebral cortex code (click on the image below to view the results):

MemoryMapOutput

So, to quickly wrap this article up, my final step is to create a standalone console application that runs as an entirely separate process on the same physical machine, in order to demonstrate the cross-process capabilities of a memory-mapped file. In this case, I’ve appropriately named my console application “OmniscientProcess”.

This application calls the RetrieveLongTermMemory() method defined in its own Program class in order to negotiate with the global mutex. Provided the negotiation process goes well, the “OmniscientProcess” retrieves the data being preserved within the memory-mapped file that was created by our previous application. In theory, this example is equivalent to having some external entity (i.e. someone or something) tap into your own personal thoughts.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using System.IO.MemoryMappedFiles;

namespace OmniscientProcess
{
    class Program
    {
        // A named mutex is visible to other processes in the same session,
        // which is what makes the cross-process negotiation below possible
        static Mutex globalMutex = new Mutex(false, "CCFMutex");
        static MemoryMappedFile memoryMap = null;

        static void Main(string[] args)
        {
            // Retrieve the memory-mapped data via the static method below
            List<string> longTermMemories = RetrieveLongTermMemory();

            longTermMemories.ForEach(memory =>
            {
                Console.WriteLine(memory);
            });

            Console.WriteLine(string.Empty);
            Console.WriteLine("Press any key to end...");
            Console.ReadKey();
        }

        /// <summary>
        /// Retrieves all thoughts from memory (uses a global mutex to control thread access from different processes)
        /// </summary>
        /// <returns>The full list of thoughts stored in the memory-mapped file</returns>
        private static List<string> RetrieveLongTermMemory()
        {
            try
            {
                // Causes the thread to wait until the previous thread releases
                globalMutex.WaitOne();

                memoryMap = MemoryMappedFile.OpenExisting("CCFMemoryMap", MemoryMappedFileRights.FullControl);

                // Read the ushort length prefix at offset 54, then the payload at offset 56,
                // mirroring the layout written by the CerebralCortex process
                MemoryMappedViewAccessor accessor = memoryMap.CreateViewAccessor();
                ushort Size = accessor.ReadUInt16(54);
                byte[] Buffer = new byte[Size];
                accessor.ReadArray(54 + 2, Buffer, 0, Buffer.Length);
                string delimitedThoughts = ASCIIEncoding.ASCII.GetString(Buffer);
                return delimitedThoughts.Split('|').ToList();
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                // Releases the lock on the critical section of the code
                globalMutex.ReleaseMutex();
            }
        }
    }
}
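
One cross-process caveat worth calling out: if a process terminates while it still owns the named mutex, the next WaitOne() call in another process throws an AbandonedMutexException (ownership still transfers to the caller). A hedged sketch of how the retrieval method above might tolerate that condition:

try
{
    globalMutex.WaitOne();
}
catch (AbandonedMutexException)
{
    // Another process exited while holding the mutex; this thread now owns it,
    // but the shared state should be treated as potentially incomplete
}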

The aforementioned application can retrieve the state of the memory-mapped file from an external process at any point in time, except, of course, when the mutex is locked. It’s the responsibility of the mutex to enforce thread safety, regardless of the originating process, whenever a thread attempts to access the shared address space that comprises the memory-mapped file (see below):

Output 1 – Here’s a partial listing that was retrieved early in the process:
MemoryMapRetrieval1

Output 2 – Here’s the full listing that was retrieved after all of the threads committed their data:
OmniscientProcess
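
One practical note if you’d like to reproduce these results yourself: start the CerebralCortex application first, because MemoryMappedFile.OpenExisting() throws a FileNotFoundException when no mapping with the requested name exists yet. A small, hypothetical guard (which would also require a using System.IO; directive) might look like this:

try
{
    memoryMap = MemoryMappedFile.OpenExisting("CCFMemoryMap", MemoryMappedFileRights.FullControl);
}
catch (FileNotFoundException)
{
    Console.WriteLine("The CerebralCortex process hasn't created the memory map yet.");
    return new List<string>();
}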

Finally, while memory-mapped files certainly aren’t a new concept (they’ve been around for decades), they can be difficult to wrap your head around when there are sizable numbers of processes and threads flying around in the code. And, while my examples aren’t necessarily basic ones, I hope they employ rudimentary enough concepts that everyone can quickly and easily understand them.

To recount my steps, I demonstrated calls to disparate objects being kicked off asynchronously, each of which conjures up a respectable number of threads. Every individual thread, operating within its asynchronously executing object, then goes to work negotiating with a common mutex in order to commit its respective data values to the memory-mapped file, which is accessible to applications running as entirely different processes on the same physical machine.

Thanks for reading and keep on coding! 🙂