

Author:  Cole Francis, Senior Solution Architect @ The PSC Group, LLC

Download the Source Code for This Example Here!

BACKGROUND

Traditionally speaking, creating custom Microsoft Windows Services can be a real pain.  The endless, mind-numbing repetitions of running the InstallUtil command-line utility and pressing Ctrl+Alt+P to attach the debugger from the Microsoft Visual Studio IDE are more than enough to discourage the average Software Developer.

While many companies are now shying away from writing Windows Services in an attempt to get better optics around job failures, custom Windows Services continue to exist in limited enterprise development situations where certain thresholds of caution are exercised.

But, if you’re ever blessed with the dubious honor of having to write a custom Windows Service, take note of the fact that there are much easier ways of approaching this task than there used to be, and in my opinion one of the easiest ways is to use a NuGet package called TopShelf.

Here are the top three benefits of using TopShelf to create a Windows Service:

  1. The first benefit of using TopShelf is that you get out from underneath the nuances of using the InstallUtil command to install and uninstall your Windows Service.
  2. Secondly, you create your Windows Service using a simple and familiar Console Application template type inside Microsoft Visual Studio.  So, not only is it extraordinarily easy to create, it’s also just as easy to debug and eventually transition into a fully-fledged Windows Service leveraging TopShelf. This involves a small series of steps that I’ll demonstrate for you shortly.
  3. Because you’ve taken the complexity and mystery out of creating, installing, and debugging your Windows Service, you can focus on writing better code.

So, now that I’ve explained some of the benefits of using TopShelf to create a Windows Service, let’s run through a quick step-by-step example of how to get one up and running.  Don’t be alarmed by the number of steps in my example below.  You’ll find that you’ll be able to work through them very quickly.


Step 1

The first step is to create a simple Console Application in Microsoft Visual Studio.  As you can see in the example below, I named mine TopShelfCWS, but you can name yours whatever you want.



Step 2

The second step is to open the NuGet Package Manager from the Microsoft Visual Studio IDE menu and then click on the Manage NuGet Packages for Solution option in the submenu as shown in the example below.



Step 3

After the NuGet Package Manager screen appears, click on the Browse option at the top of the dialog box, and then search for “TopShelf”.  A number of packages should appear in the list, and you’ll want to select the one shown in the example below.



Step 4

Next, select the version of the TopShelf product that aligns with your project or you can simply opt to use the default version that was automatically selected for you, which is what I have done in my working example.

Afterwards, click the Install button.  After the package installs successfully, you’ll see a green check mark by the TopShelf icon, just like you see in the example below.



Step 5

Next, add a new Class to the TopShelfCWS project, and name it something that’s relevant to your solution.  As you can see in the example below, I named my class NameMeAnything.



Step 6

In your new class (e.g. NameMeAnything), add a reference to the TopShelf product, and then inherit from ServiceControl.



Step 7

Afterwards, right-click on the words ServiceControl and implement its interface as shown in the example below.



Step 8

After implementing the interface, you’ll see two new methods show up in your class.  They’re called Start() and Stop(), and they’re the only two methods that the TopShelf product relies upon to hook into the Windows Service Start and Stop events.
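Based on TopShelf’s ServiceControl interface, the implemented class looks roughly like this (a minimal sketch; the console writes are illustrative):

```csharp
using System;
using Topshelf;

// A minimal sketch of the class after implementing ServiceControl.
// TopShelf passes a HostControl into each method and expects a bool back.
public class NameMeAnything : ServiceControl
{
    public bool Start(HostControl hostControl)
    {
        // Called when the Windows Service (or console session) starts
        Console.WriteLine("Service started.");
        return true;
    }

    public bool Stop(HostControl hostControl)
    {
        // Called when the Windows Service (or console session) stops
        Console.WriteLine("Service stopped.");
        return true;
    }
}
```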



Step 9

Next, we’ll head back to the Main method inside the Program class of the Console Application.  Inside the Main method, you’ll set the service properties of your impending Windows Service.  These include properties like:

  • The ServiceName: Indicates the name used by the system to identify this service.
  • The DisplayName: Indicates the friendly name that identifies the service to the user.
  • The Description: Gets or sets the description for the service.

For more information, see the example below.
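As a hedged sketch, the Main method configuration with TopShelf’s HostFactory might look something like this (the service name, display name, and description are illustrative, and the class is assumed to be the NameMeAnything class from Step 5):

```csharp
using Topshelf;

public class Program
{
    public static void Main()
    {
        HostFactory.Run(x =>
        {
            // Wire the ServiceControl implementation into the host
            x.Service<NameMeAnything>(s =>
            {
                s.ConstructUsing(name => new NameMeAnything());
                s.WhenStarted((service, hostControl) => service.Start(hostControl));
                s.WhenStopped((service, hostControl) => service.Stop(hostControl));
            });

            x.RunAsLocalSystem();

            // The three service properties described above
            x.SetServiceName("TopShelfCWS");
            x.SetDisplayName("TopShelf Custom Windows Service");
            x.SetDescription("A sample Windows Service created with TopShelf.");
        });
    }
}
```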



Step 10

Let’s go back to your custom class one more time (e.g. NameMeAnything.cs), and add the code in the following example to your class.  You’ll replace this with your own custom code at some point, but following my example will give you a good idea of how things behave.



Step 11

Make sure you include some Console writes to account for all the event behaviors that will occur when you run it.
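As a hypothetical example of Steps 10 and 11 combined, here’s a sketch of a one-second timer that writes to the console on every tick (the _timer_Elapsed handler name matches the one referenced later in Step 20):

```csharp
using System;
using System.Timers;
using Topshelf;

// A hedged sketch of the class body: a one-second timer that logs
// console messages for every event behavior.
public class NameMeAnything : ServiceControl
{
    private readonly Timer _timer = new Timer(1000) { AutoReset = true };

    public bool Start(HostControl hostControl)
    {
        Console.WriteLine("Service starting...");
        _timer.Elapsed += _timer_Elapsed;
        _timer.Start();
        return true;
    }

    public bool Stop(HostControl hostControl)
    {
        _timer.Stop();
        Console.WriteLine("Service stopped.");
        return true;
    }

    private void _timer_Elapsed(object sender, ElapsedEventArgs e)
    {
        Console.WriteLine("Timer elapsed at {0}", e.SignalTime);
    }
}
```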



Step 12

As I mentioned earlier, you can run the Console Application simply as that, a Console Application.  You can do this by simply pressing the F5 key.  If you’ve followed my example up to this point, then you should see the output shown in the following example.



Step 13

Now that you’ve run your solution as a simple Console Application, let’s take the next step and install it as a Windows Service.

To do this, open a command prompt and navigate to the bin\Debug folder of your project.   *IMPORTANT:  Make sure you’re running the command prompt in Administrator mode* as shown in the example below.



Step 14

One of the more beautiful aspects of the TopShelf product is how it abstracts you away from all the .NET InstallUtil nonsense.  Installing your Console Application as a Windows Service is as easy as typing the name of your executable, followed by the word “Install”.  See the example below.
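Assuming the executable kept the project name from Step 1, the command is simply:

```shell
:: Run from an elevated command prompt in the project's bin\Debug folder
TopShelfCWS.exe install
```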



Step 15

Once it installs, you’ll see the output shown in the example below.



Step 16

What’s more, if you navigate to the Windows Services dialog box, you should now see your Console Application show up as a fully-operable Windows Service, as depicted below.



Step 17

You can now modify the properties of your Windows Service and start it.  Since all I’m doing in my example is executing a simple timer operation and logging out console messages, I just kept all the Windows Service properties defaults and started my service.  See the example below.



Step 18

If all goes well, you’ll see your Windows Service running in the Windows Services dialog box.



Step 19

Now that your console application is running as a Windows Service, you no longer have the advantage of seeing your console messages written to the console.  So, how do you debug it?

The answer is that you can use the more traditional means of attaching the Visual Studio debugger to your running Windows Service by pressing Ctrl+Alt+P in the Visual Studio IDE, and then selecting the name of your running Windows Service, as shown in the example below.



Step 20

Next, set a breakpoint on the _timer_Elapsed event.  If everything is running and hooked up properly, then your breakpoint should be hit every second, and you can press F10 to step through the event handler that’s responsible for writing the output to the console, as shown in the example below.



Step 21

Once you’re convinced that your Windows Service is behaving properly, you can stop it and test the TopShelf uninstallation process.

Again, TopShelf completely abstracts you away from the nuances of the InstallUtil utility, by allowing you to uninstall your Windows Service just as easily as you initially installed it.
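Assuming the same executable name as before, stopping and uninstalling the service looks like this:

```shell
:: Run from an elevated command prompt in the project's bin\Debug folder
TopShelfCWS.exe stop
TopShelfCWS.exe uninstall
```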



Step 22

Finally, if you go back into the Windows Services dialog box and refresh your running Windows Services, then you should quickly see that your Windows Service has been successfully removed.



SUMMARY

In summary, I walked you through the easy steps of creating a custom Windows Service using the TopShelf NuGet package and a simple C# .NET Console application.

In the end, starting out with a TopShelf NuGet package and a simple Console application allows for a much easier and intuitive Windows Service development process, because it abstracts away all the complexities normally associated with traditional Windows Service development and debugging, resulting in more time to focus on writing better code. These are all good things!

Hi, I’m Cole Francis, a Senior Solution Architect for The PSC Group, LLC located in Schaumburg, IL.  We’re a Microsoft Partner that specializes in technology solutions that help our clients achieve their strategic business objectives.  PSC serves clients nationwide from Chicago and Kansas City.

Thanks for reading, and keep on coding!  🙂


CreateImagefromPDF

By: Cole Francis, Senior Architect at The PSC Group, LLC.

Let’s say you’re working on a hypothetical project, and you run across a requirement for creating an image from the first page of a client-provided PDF document.  Let’s say the PDF document is named MyPDF.pdf, and your client wants you to produce a .PNG image output file named MyPDF.png.

Furthermore, the client states that you absolutely cannot read the contents of the PDF file, and you’ll only know if you’re successful if you can read the output that your code generates inside the image file.  So, that’s it, those are the only requirements.   What do you do?

SOLUTION

Thankfully, there are a number of solutions to address this problem, and I’m going to use a lesser known .NET NuGet package to handle this problem.  Why?  Well, for one I want to demonstrate what an easy problem this is to solve.  So, I’ll start off by searching in the .NET NuGet Package Manager Library for something describing what I want to do.  Voila, I run across a lesser known package named “Pdf2Png”.  I install it in less than 5 seconds.

Pdf2Png.png

So, is the Pdf2Png package thread-safe and server-side compliant?  I don’t know, but I’m not concerned about it because it wasn’t listed as a functional requirement.  So, this is something that will show up as an assumption in the Statement-of-Work document and will be quickly addressed if my assumption is incorrect.

Next, I create a very simple console application, although this could be just about any .NET file type, as long as it has rights to the file system.  The process to create the console application takes me another 10 seconds.

Next, I drop in the following three lines of code and execute the application, taking another 5 seconds.  (This would actually be one line of code if I were passing in the source and target file locations and names.)

 // Requires the Pdf2Png NuGet package; Convert() returns a list of error messages
 string pdf_filename = @"c:\cole\PdfToPng\MyPDF.pdf";
 string png_filename = @"c:\cole\PdfToPng\MyPDF.png";
 List<string> errors = cs_pdf_to_image.Pdf2Image.Convert(pdf_filename, png_filename);

Although my work isn’t overwhelmingly complex, the output is extraordinary for a mere 20 seconds worth of work!  Alas, I have not one, but two files in my source folder.  One’s my source PDF document, and the other one’s the image that was produced from my console application using the Pdf2Png package.

TwoFiles.png

Finally, when I open the .PNG image file, it reveals the mysterious content that was originally inside the source PDF document:

SomeThingsArentHard.png

Before I end, I have to mention that the Pdf2Png component is not only simple, but it’s also somewhat sophisticated.  The library is a subset of Mark Redman’s work on PDFConvert using Ghostscript gsdll32.dll, and it automatically makes the Ghostscript gsdll32 accessible on a client machine that may not have it physically installed.

Thanks for reading, and keep on coding!  🙂

AngularJS SPA

By:  Cole Francis, Senior Solution Architect at The PSC Group, LLC.

PROBLEM

There’s a familiar theme running around on the Internet right now about certain problems associated with generating SEO-friendly Sitemaps for SPA-based AngularJS web applications.  They often have two fundamental issues associated with their poor architectural design:

  1. There’s usually a nasty hashtag (#) or hashbang (#!) buried in the middle of the URL route, which the website ultimately relies upon for parsing purposes in order to construct the real URL route (e.g. https://www.myInheritedWebApp.com/stuff/#/items/2).
  2. Because of the embedded hashtag or hashbang, the URLs are dynamically constructed and don’t actually point to content without parsing the hashtag (or hashbang) operator first.  The underlying problem is that a Sitemap.xml document can’t be auto-generated for SEO indexing.

I realize that some people might be offended by my comment about “poor architectural design”.  I state this loosely, because it’s really just the nature of the beast.  Why?  Because it’s really easy to get started with AngularJS, and many Software Developers simply start laying down code that’s initially decent, but at some point they start implementing hacks as complexity gets added to the original functional requirements.  That’s where they begin to get themselves in trouble (or very creative). 🙂

If you think I’m kidding, then just try Googling the following keywords and you’ll see exactly what I mean:  AngularJS, hash, hashbang, SEO, Sitemap, problem.

SOLUTION

So, the first step is to remove the hashtag (#) or the hashbang (#!).  I know it sucks, and it’s going to require some work, but let me be clear.  Do it!  For one, generating the Sitemap will be much easier, because you won’t need to parse on a hashtag (or hashbang) to get the real URL.  Secondly, all the remediation work you do will be a reminder the next time you think about taking shortcuts.

Regardless, after correcting the hashtag problem, you still have another issue.  Your website is still an AngularJS SPA-based website, which means that all its content is dynamically generated and injected through JavaScript AJAX calls.

Given this, how will you ever be able to generate a Sitemap containing all your content (e.g. products, catalogs, people, etc…)? Even more concerning, how will people find your people or products when searching on Google?

Luckily, the answer is very simple.  Here’s a little gem that I recently ran across while trying to generate a Sitemap.xml document on an AngularJS SPA architected website, and it works like a charm:  http://botmap.io/

I literally copied the script on the BotMap website to the bottom of my Views/Shared/_Layout.cshtml file, just above the closing </body> tag.  This gives BotMap permission to crawl your website.  After doing this, push your website to Production, then point the BotMap website to your publicly-facing URL, and finally click the button on their website to initiate the crawl.  One and done!

BotMap begins to crawl and catalog your website as if it was a real person browsing it. It doesn’t use CURL or xHttp requests to determine what to catalog. The BotMap crawler actually executes the JavaScript, which is how it ultimately learns about all of the content on your website that it will use to construct the Sitemap.  

This is why it’s so great for websites created using AngularJS or other JavaScript frameworks where content is injected inside the JavaScript code itself.  Congratulations, {{vm.youreDone}}!

Thanks for reading, and keep on coding!  🙂

DockerContainerTitle.png

By:  Cole Francis, Architect

Before you can try out the .NET Core Docker base images, you need to install Docker.  Let’s begin by installing Docker for Windows.

Docker1.png

Once Docker is successfully installed, you’ll see the following dialog box appear.

Docker2.png

Next, run a PowerShell command window in Administrator mode, and install the .NET Core by running the following Docker command.

Docker7.png

Once the download is complete, you’ll see some pertinent status information about the download attempt.

Docker8.png

Next, run the following commands to create your very first “Hello World” Container.  Albeit not very exciting, it does run the Container along with the toolchain, which were pulled down from Microsoft’s Docker Hub in just a matter of seconds.

Finally, to prove that .NET Core is successfully installed, compile and run the following code from the command line.
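The commands above can be sketched as follows (assuming the 2016-era microsoft/dotnet image name from Docker Hub; Microsoft’s newer images are published under mcr.microsoft.com/dotnet):

```shell
# Pull the .NET Core base image and verify the toolchain runs in a container
docker pull microsoft/dotnet
docker run --rm microsoft/dotnet dotnet --version
```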

Congratulations!  You’ve created your first Docker Container, and it only took a couple of minutes.

docker9

Thanks for reading and keep on coding! 🙂

MicrosoftFlow

Author: Cole Francis, Architect

Today I had the pleasure of working with Microsoft Flow, Microsoft’s latest SaaS-based workflow offering. Introduced in April, 2016 and still in Preview mode, Flow allows both developers and non-developers alike to rapidly create visual workflow sequences using a number of on-prem and cloud-based services.  In fact, anyone who is interested in “low code” or “no code” integration-centric  solutions might want to take a closer look at Microsoft Flow.

Given this, I thought my goal for today would be to leverage Microsoft Flow to create a very rudimentary workflow that gets kicked off by an ordinary email, which in turn will call a cloud-based MVC WebAPI endpoint via an HTTP GET request, and then it will ultimately crank out a second email initiated by the WebAPI endpoint.

Obviously, the custom WebAPI endpoint isn’t necessary to generate the second email, as Microsoft Flow can accomplish this on its own without requiring any custom code at all.  So, the reason I’m adding the custom WebAPI endpoint into the mix is to simply prove that Flow has the ability to integrate with a custom RESTful WebAPI endpoint.  After all, if I can successfully accomplish this, then I can foreseeably communicate with any endpoint on any codebase on any platform.  So, here’s my overall architectural design and workflow:

Microsoft Flow

To kick things off, let’s create a simple workflow using Microsoft Flow.  We’ll do this by first logging into Microsoft Office 365.  If we look closely, we’ll find the Flow application within the waffle:

Office365Portal

After clicking on the Flow application, I’m taken to the next screen where I can either choose from an impressive number of existing workflow templates, or I can optionally choose to create my own custom workflow:

FlowTemplates.png

I need to call out that I’ve just shown you a very small fraction of pre-defined templates that are actually available in Flow.  As of this writing, there are hundreds of pre-defined templates that can be used to integrate with an impressive number of Microsoft and non-Microsoft platforms.  The real beauty is that they can be used to perform some very impressive tasks without writing a lick of code.  For example, I can incorporate approval workflows, collect data, interact with various email platforms, perform mobile push notifications (incl. iOS), track productivity, interact with various social media channels, synchronize data, etc…

Moreover, Microsoft Flow comes with an impressive number of triggers, which interact with a generous number of platforms, such as Box, DropBox, Dynamics CRM, Facebook, GitHub, Google Calendar, Instagram, MailChimp, Office365, OneDrive, OneDrive for Business, Project Online, RSS, Salesforce, SharePoint, SparkPost, Trello, Twitter, Visual Studio Team Services, Wunderlist, Yammer, YouTube, PowerApps, and more.

So, let’s continue building our very own Microsoft Flow workflow object.  I’ll do this by clicking on the “My Flows” option at the top of the web page.  This navigates me to a page that displays my saved workflows.  In my case, I don’t currently have any saved workflows, so I’ll click the “Create new flow” button that’s available to me (see the image below).

MyFlows

Next, I’ll search for the word “Mail”, which presents me with the following options:

Office365Email.png

Since the company I work for uses Microsoft Office 365 Outlook, I’ll select that option.  After doing this, I’m presented with the following “Action widget”.

Office365Inbox.png

I will then click on the “Show advanced options” link, which provides me with some additional options.  I’ll fill in the information using something that meets my specific needs.  In my particular case, I want to be able to kick-off my workflow from any email that contains “Win” in the Subject line.

Office365InboxOptions

Next, I’ll click on the (+ New step) link at the bottom of my widget, and I’m presented with some additional options.  As you can see, I can either “Add another action”, “Add a condition”, or click on the “…More” option to do things like “Add an apply to each” option, “Add a do until” condition, or “Add a scope”.

Office365InboxOptions0.png

As I previously mentioned, I want to be able to call a custom Azure-based RESTful WebAPI endpoint from my custom Flow object.  So, I’ll click on the “Add an action”, and then I’ll select the “HTTP” widget from the list of actions that are available.

RESTfulWebAPIoption.png

After clicking on the “HTTP” widget, I’m now presented with the “HTTP” widget options.  At a minimum, the “HTTP” object will allow me to specify a URI for my WebAPI endpoint (e.g. http://www.microsoftazure.net/XXXEndpoint), as well as an Http Verb (e.g. GET, POST, DELETE, etc…).  You’ll need to fill in your RESTful WebAPI endpoint data according to your own needs, but mine looks like this:

HTTPOption.png

After I’m done, I can save my custom Flow by clicking the “Create Flow” button at the top of the page and providing my Flow with a meaningful name.  Providing your Flow with a meaningful name is very important, because you could eventually have a hundred of these things, so being able to distinguish one from another will be key.  For example, I named my custom Flow “PSC Win Wire”.  After saving my Flow, I can now do things like create additional Flows, edit existing Flows, activate or deactivate Flows, delete Flows, and review the viability and performance of my existing Flows by clicking on the “List Runs” icon that’s available to me.

SaveFlow.png

In any event, now that I’ve completed my custom Flow object, all I’ll need to do now is quickly spin up a .NET MVC WebAPI2 solution that contains my custom WebAPI endpoint, and then push my bits to the Cloud in order to publicly expose my endpoint.  I need to point out that my solution doesn’t necessarily need to be hosted in the Cloud, as a publicly exposed on-prem endpoint should work just fine.  However, I don’t have a quick way of publicly exposing my WebAPI endpoint on-prem, so resorting to the Cloud is the best approach for me.

I also need to point out again that creating a custom .NET MVC WebAPI isn’t necessary to run Microsoft Flows.  There are plenty of OOB templates that don’t require you to write any custom code at all.  This type of versatility is what makes Microsoft Flow so alluring.

In any case, the end result of my .NET MVC WebAPI2 project is shown below.  As you can see, the core WebAPI code generates an email (my real code will have values where you only see XXXX’s in the pic below…sorry!  🙂).

MVCWebAPI.png

The GetLatestEmails() method will get called from a publicly exposed endpoint in the EmailController() class.  For simplicity’s sake, my EmailController class only contains one endpoint, and it’s named GetLatestEmails():
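A hypothetical Web API 2 sketch of that controller (the email-sending code is elided, and the names follow the description above):

```csharp
using System.Web.Http;

// A hypothetical sketch of the EmailController described above; the
// actual email-generating code is omitted.
public class EmailController : ApiController
{
    // Responds to an HTTP GET against the publicly exposed endpoint
    [HttpGet]
    public IHttpActionResult GetLatestEmails()
    {
        // ...generate and send the second email here...
        return Ok("Email sent");
    }
}
```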

The Controller.png

So, now that I’m done setting everything up, it’s time for me to publish my code to the Azure Cloud.  I’ll start this off by cleaning and building my solution.  Afterwards, I’ll right-click on my project in the Solution Explorer pane, and then I’ll click on the Publish option that appears below.

Publish1.png

Now that this is out of the way, I’ll begin entering in my Azure Publish Web profile options.  Since I’m deploying an MVC application that contains a WebAPI2 endpoint, I’ve selected the “Microsoft Azure Web Apps” option from the Profile category.

Publish2.png

Next, I’ll enter the “Connection” options and fill that information in.   Afterwards, I should now have enough information to publish my solution to the Azure Cloud.  Of course, if you’re trying this on your own, this example assumes that you already have a Microsoft Azure Account.  If you don’t have a Microsoft Azure account, then you can find out more about it by clicking here.

Publish3.png

Regardless, I’ll click the “Publish” button now, which will automatically compile my code. If the build is successful then it will publish my bits to Microsoft’s Azure Cloud.  Now comes the fun part…testing it out!

First, I’ll create an email that matches the same conditions that were specified by me in the “Office 365 Outlook – When an email arrives” Flow widget I previously created.  If you recall, that workflow widget is being triggered by the word “Win” in the Subject line of any email that gets sent to me, so I’ll make sure that my test email meets that condition.

PSCWinWireEmail

After I send an email that meets my Flow’s conditions, then my custom Flow object should get kicked-off and call my endpoint, which means that if all goes well, then I should receive another email from my WebAPI endpoint.  Hey, look!  I successfully received an email from the WebAPI endpoint, just as I expected.  That was really quick!  🙂

EmailResults.png

Now that we know that our custom Flow object works from A to Z, I want to tell you about another really cool Microsoft Flow feature, and that’s the ability to monitor the progress of my custom Flow objects.  I can accomplish this by clicking on the “List Runs” icon in the “My Flows” section of the Microsoft Flow main page (see below).

ListRun1.png

Doing this will conjure up the following page.  From here, I can gain more insight and visibility into the viability and efficiency of my custom Flows by simply clicking on the arrow to the right of each of the rows below.

ListRun2.png

Once I do that, I’m presented with the following page.  At this point, I can drill down into the objects by clicking on them, which will display all of the metadata associated with the selected widget.  Pretty cool, huh!

ListRun3.png

Well, that’s it for this example.  I hope you’ve enjoyed my walkthrough.  I personally find Microsoft Flow to be a very promising SaaS-based workflow offering.

Thanks for reading and keep on coding! 🙂

The Observer Pattern

Author: Cole Francis, Architect

Click here to download my solution on GitHub

BACKGROUND

If you read my previous article, then you’ll know that it focused on the importance of software design patterns. I called out that there are some architects and developers in the field who are averse to incorporating them into their solutions for a variety of bad reasons. Regardless, even if you try your heart out to intentionally avoid incorporating them into your designs and solutions, the truth of the matter is you’ll eventually use them whether you intend to or not.

A great example of this is the Observer Pattern, which is arguably the most widely used software design pattern in the world. It comes in a number of different styles, with the most popular being the Model-View-Controller (MVC), in which the View represents the Observer and the Model represents the observable Subject. People occasionally make the mistake of referring to MVC as a design pattern, but it is actually an architectural style of the Observer Design Pattern.

The Observer Design Pattern’s taxonomy is categorized in the Behavioral Pattern Genus of the Software Design Pattern Family because of its object-event oriented communication structure, which causes changes in the Subject to be reflected in the Observer. In this respect, the Subject is intentionally kept oblivious, or completely decoupled from the Observer class.

Some people also make the mistake of calling the Observer Pattern the Publish-Subscribe Pattern, but they are actually two distinct patterns that just so happen to share some functional overlap. The significant difference between the two is that the Observer Pattern “notifies” its Observers whenever there’s a change in the observed Subject, whereas the Publish-Subscribe Pattern “broadcasts” notifications to its Subscribers.

A COUPLE OF NEGATIVES

As with any software design pattern, there are some cons associated with using the Observer Pattern. For instance, the base implementation of the Observer Pattern calls for a concrete Observer, which isn’t always practical, and it’s certainly not easily extensible. Each time a new Subject is added to the solution, the assembly has to be rebuilt and redeployed, which is a practice that many larger, bureaucratically-structured companies often frown upon. Given this, I’ll show you how to get around this little nuance later in this article.

Another problem associated with the Observer Pattern involves the potential for memory leaks, which are also referred to as “lapsed listeners” or “latent listeners”. Despite what you call it, a memory leak by any other name is still a memory leak. Regardless, because an explicit registering and unregistering is generally required with this design pattern, if the Subjects aren’t properly unregistered (particularly ones that consume large amounts of memory) then unnecessary memory consumption is certain, as stale Subjects continue to be needlessly observed until something changes. This can result in performance degradation. I’ll explain to you how you can work around this issue.

OBSERVER DESIGN PATTERN OVERVIEW

Typically, there are three (3) distinct classes that comprise the heart and soul of the Observer design pattern, and they are the Observer class, the Subject class, and the Client (or Program). Beyond this, I’ve seen the pattern implemented in a number of different ways, and asking a roomful of architects how they might go about implementing this design pattern is a lot like asking them how they like their morning eggs. You’ll probably get a variety of different responses back.

However, my implementation of this design pattern typically deviates from the norm because I like to include a fourth class to the mix, called the LifetimeManager class. The purpose of the LifetimeManager class is to allow each Subject class to autonomously maintain its own lifetime, alleviating the need for the client to explicitly call the Unregister() method on the Subject object. It’s not that I don’t want the client program to explicitly call the Subscriber’s Unregister() method, but this cleanup call does occasionally get omitted for whatever reason. So, the inclusion of the LifetimeManager class provides an additional safeguard to protect us against this. I’ll focus on the LifetimeManager class a little bit later in this article.

Moving on, the Observer design pattern is depicted in the class diagram below. As you can see, the Subject inherits from the LifetimeManager class and implements the ISubject interface, but the client program and the Observer are left decoupled from the Subject. You will also notice that the Subject provides the ability to allow a program to register and unregister a Subject class. By inheriting from the LifetimeManager class, the Subject class now also allows the client to establish specific lifetime requirements for the Subject class, such as whether it uses a basic or sliding expiration, its lifetime in seconds, minutes, hours, days, months, and even years. And, if the developer fails to provide this information through the Subject’s overloaded constructor, then the default constructor provides some default values to make sure the Subject is cleaned up properly.

ClassDiagram2

A MORE DETAILED EXPLANATION OF THE PATTERN

The Subject Class

The Subject class also contains a Change() method that’s exactly like the Register() method. This is something else that’s not normally a part of this design pattern, but I intentionally added this because I don’t think it makes sense to call the Register() method anytime changes are made to the Subject(). I think it makes for a bad developer experience. Instead, registering the Subject object once and then calling the Change() method anytime there are changes to the Subject object makes much more sense in my opinion. We can impose the cleanup work upon the Observer class each time the Subject object is changed.

The Observer Class

The Observer class includes an Update() method, which accepts a Subject object and the operation the Observer class needs to perform on it. For instance, if there’s an add or an update to the Subject object, then the Observer searches through its observed Subject cache to find it using its unique SubscriptionId and CacheId. If the Subject exists in the cache, then the Observer updates it by deleting the old Subject and adding the new one. If it doesn’t find it in the Subject cache, then it simply adds it. The Observer also accepts a remove action, which causes it to remove the Subject from its observed state.
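In essence, the Observer’s add-or-update behavior boils down to a remove-then-add “upsert” keyed on the SubscriptionId/CacheId pair. Here’s a stripped-down, self-contained sketch of that idea (the CacheEntry type is purely illustrative and not part of the article’s code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class CacheEntry
{
    public string SubscriptionId { get; set; }
    public string CacheId { get; set; }
    public string Data { get; set; }
}

class Program
{
    static readonly List<CacheEntry> _cache = new List<CacheEntry>();

    // Upsert: remove any entry with the same keys, then add the new one
    static void AddOrUpdate(CacheEntry entry)
    {
        _cache.RemoveAll(e => e.SubscriptionId == entry.SubscriptionId && e.CacheId == entry.CacheId);
        _cache.Add(entry);
    }

    static void Main()
    {
        AddOrUpdate(new CacheEntry { SubscriptionId = "1", CacheId = "1", Data = "original" });
        AddOrUpdate(new CacheEntry { SubscriptionId = "1", CacheId = "1", Data = "updated" });

        Console.WriteLine(_cache.Count);          // 1
        Console.WriteLine(_cache.Single().Data);  // updated
    }
}
```

Because the second call removes the first entry before adding the new one, the cache never accumulates duplicates for the same key pair.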

The Client Program

The only other important element to remember is that anytime an action takes place, notifications are always propagated back to the client program, so it’s always aware of what’s going on behind the scenes. When the Subject is registered, the client program is notified; when the Subject is unregistered, the client program is notified; when the observed Subject’s data changes, the client program is notified. One of the important tenets of this design pattern is that the client program is always kept aware of any changes that occur to its observed Subject.

The LifetimeManager Class

The LifetimeManager class, which is my own creation, is responsible for maintaining the lifetime of each Subject object that gets created. So, for every Subject object that gets created, a LifetimeManager class also gets created. The LifetimeManager class includes a variety of malleable properties, which I’ll go over shortly. Keep in mind that these properties get set by the default constructor of the Subject class. They can be overridden when the Subject object first gets created, by passing the override values into an overloaded constructor that I provide in my design, or by simply changing any one of the LifetimeManager class’s property values and then calling the Subject’s Change() method. It’s really as simple as that. Nevertheless, here are the supported properties that make up the LifetimeManager class:

1. ExpirationType: Tells the system whether the expiration type is basic or sliding. Basic expiration means that the Subject expires at a specific point in time. Sliding expiration means that anytime a change is made to the Subject, the expiration window slides forward based upon the values you provide.

2. ExpirationValue: This is an integer that is relative to the next property, TimePrecision.

3. TimePrecision: This is an enumeration that includes specific time intervals, like milliseconds, seconds, minutes, hours, days, months, and even years. So, if I provide 30 for ExpirationValue and enumTimePrecision.Minutes for TimePrecision, then this means that I want my data cache to automatically expire, and hence self-unregister, in 30 minutes. What’s more, if you fail to provide these values at the time you Register() your Subject, then they get defaulted for you in the default constructor code of the Subject class.
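One note before diving into the source code: the article never lists the Enums class that the rest of the code leans on. Based on the values referenced throughout the Subject, Observer, and LifetimeManager classes, a plausible sketch looks like this (treat the exact member names as my best guess, reconstructed from the calls in the listings below):

```csharp
using System;

namespace ObserverClient
{
    // A reconstruction of the Enums class the article omits; member names are
    // inferred from how they're used elsewhere in the code.
    public static class Enums
    {
        // The actions the Observer's Update() method can perform
        public enum enumSubjectAction
        {
            AddChange,
            RemoveChild,
            RemoveParent
        }

        // Basic = absolute expiration; Sliding = expiration resets on each change
        public enum enumExpirationType
        {
            Basic,
            Sliding
        }

        // The time units the LifetimeManager understands
        public enum enumTimePrecision
        {
            Milliseconds,
            Seconds,
            Minutes,
            Hours,
            Days,
            Months,
            Years
        }
    }
}
```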

So, now that you have an overview and visual understanding of the Observer pattern class structure and relationships, I’ll now spend a little time going over my implementation of the pattern by sharing my working source code with you. My intention is that you can use my source code to get your very own working model up and running. This will allow you to experiment with the pattern on your own. It would also be nice to get some feedback regarding how well you think my custom LifetimeManager class helps to avoid unwanted memory leaks by providing each Subject class with the ability to maintain its own lifetime.

THE OBSERVER CLASS SOURCE CODE

For the most part, it’s the responsibility of the Observer class to perform update operations on a given Subject when requested. Furthermore, the Observer class should respect and observe any changes to the stored Subject’s lifecycle until the Subject requests the Observer to unregister it. Here’s my working example of the Observer class:


using System;
using System.Collections.Generic;


namespace ObserverClient
{
    /// <summary>
    /// The static Observer class, which maintains the observed Subject cache
    /// </summary>
    public static class Observer
    {
        #region Member Variables

        /// <summary>
        /// The global data cache
        /// </summary>
        private static List<LifetimeManager> _data = new List<LifetimeManager>();

        /// <summary>
        /// The synchronization object guarding the data cache
        /// </summary>
        private static readonly object _syncRoot = new object();

        #endregion

        #region Methods

        /// <summary>
        /// Provides CRUD operations on the global cache object
        /// </summary>
        internal static bool Update(LifetimeManager data, Enums.enumSubjectAction action)
        {
            try
            {
                // This locks the critical section, just in case a timer event fires at the same
                // time the main thread's operation is in action.
                lock (_syncRoot)
                {
                    switch (action)
                    {
                        case Enums.enumSubjectAction.AddChange:
                            {
                                // Finds the original object and removes it, and then it re-adds it to the list
                                _data.RemoveAll(a => a.SubscriptionId == data.SubscriptionId && a.CacheId == data.CacheId);
                                _data.Add(data);
                                break;
                            }
                        case Enums.enumSubjectAction.RemoveChild:
                            {
                                // Finds the entry in the list and removes it
                                _data.RemoveAll(a => a.SubscriptionId == data.SubscriptionId && a.CacheId == data.CacheId);
                                break;
                            }
                        case Enums.enumSubjectAction.RemoveParent:
                            {
                                // Finds the entry in the list and removes it
                                _data.RemoveAll(a => a.SubscriptionId == data.SubscriptionId);
                                break;
                            }
                        default:
                            {
                                // No matching action; nothing to do
                                break;
                            }
                    }

                    return true;
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        #endregion
    }
}

THE SUBJECT CLASS SOURCE CODE

Once again, the intent of the Subject class is to expose methods to the client that allow for the registering and unregistering of the observable Subject. It’s the responsibility of the Subject to call the Observer class’s Update() method and request that specific actions be taken on it (e.g. add or remove).

In my code example below, the Observer class acts as a storage cache for observed Subjects, and it also provides some basic operations necessary to adequately maintain the observed Subjects.

As a side note, take a look at the default and overloaded constructors in the Subject class, below. It’s in these two areas of the Subject object that I either automatically control or allow the developer to override the Subject’s lifetime. Once the lifetime of the Subject object expires, then it is unregistered in the Observer and the client program is then automatically notified that the subject was removed from observation.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using ObserverClient.Interface;


namespace ObserverClient
{
    /// <summary>
    /// This is the Subject class, which provides the ability to register and unregister an object.
    /// </summary>
    public class Subject : LifetimeManager, ISubject
    {
        #region Events

        /// <summary>
        /// Handles the change notification event
        /// </summary>
        public event NotifyChangeEventHandler NotifyChanged;

        #endregion
        
        #region Methods

        /// <summary>
        /// The delegate for the NotifyChanged event; the Subject raising the notification is passed along
        /// </summary>
        public delegate void NotifyChangeEventHandler(Subject notifyInfo, Enums.enumSubjectAction action);

        /// <summary>
        /// The register method.  This adds the entry and data to the Observer's data cache
        /// and then provides notification of the event to the caller if it's successfully added.
        /// </summary>
        public void Register()
        {
            try
            {
                if (Observer.Update(this, Enums.enumSubjectAction.AddChange) && this.NotifyChanged != null)
                {
                    this.NotifyChanged(this, Enums.enumSubjectAction.AddChange);
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        /// <summary>
        /// The unregister method.  This removes the entry and data in the Observer's data cache
        /// and then provides notification of the event to the caller if it's successfully removed.
        /// </summary>
        public void Unregister()
        {
            try
            {
                if (this.SubscriptionId != null && this.CacheId == null)
                {
                    Observer.Update(this, Enums.enumSubjectAction.RemoveParent);

                    if (this.NotifyChanged != null)
                    {
                        this.NotifyChanged(this, Enums.enumSubjectAction.RemoveParent);
                    }
                }
                else if (this.SubscriptionId != null && this.CacheId != null)
                {
                    Observer.Update(this, Enums.enumSubjectAction.RemoveChild);

                    if (this.NotifyChanged != null)
                    {
                        this.NotifyChanged(this, Enums.enumSubjectAction.RemoveChild);
                    }
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        /// <summary>
        /// The change method.  This modifies the entry and data in the Observer's data cache
        /// and then provides notification of the event to the caller if successful.
        /// </summary>
        public void Change()
        {
            try
            {
                if (Observer.Update(this, Enums.enumSubjectAction.AddChange))
                {
                    if (this.ExpirationType == Enums.enumExpirationType.Sliding)
                    {
                        this.ExpirationStart = DateTime.Now;
                        this.MonitorExpiration();
                    }

                    if (this.NotifyChanged != null)
                    {
                        this.NotifyChanged(this, Enums.enumSubjectAction.AddChange);
                    }
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        /// <summary>
        /// The event handler for object expiration notifications. It calls Unregister() for the current object.
        /// </summary>
        void s_ExpiredUnregisterNow()
        {
            // Unregisters itself
            this.Unregister();
        }

        #endregion

        #region Constructor(s)

        /// <summary>
        /// The Subject's default constructor (i.e. all the values relating to cache expiration default to 1 minute).
        /// </summary>
        public Subject()
        {
            this.ExpirationType = Enums.enumExpirationType.Basic;
            this.ExpirationValue = 1;
            this.TimePrecision = Enums.enumTimePrecision.Minutes;
            this.ExpirationStart = DateTime.Now;

            this.NotifyObjectExpired += s_ExpiredUnregisterNow;
            this.MonitorExpiration();
        }

        /// <summary>
        /// The overloaded Subject constructor
        /// </summary>
        public Subject(Enums.enumExpirationType expirationType, int expirationValue, Enums.enumTimePrecision timePrecision)
        {
            this.ExpirationType = expirationType;
            this.ExpirationValue = expirationValue;
            this.TimePrecision = timePrecision;
            this.ExpirationStart = DateTime.Now;

            this.NotifyObjectExpired += s_ExpiredUnregisterNow;
            this.MonitorExpiration();
        }

        #endregion
    }
}

THE ISUBJECT INTERFACE

The ISubject interface merely defines the contract for creating Subject objects. Because the Subject class implements the ISubject interface, it’s obligated to include the ISubject interface’s properties and methods. This keeps all Subject objects consistent.


using System;
using System.Collections.Generic;


namespace ObserverClient.Interface
{
    /// <summary>
    /// This is the Subject Interface
    /// </summary>
    public interface ISubject
    {
        #region Interface Operations

        object SubscriptionId { get; set; }
        object CacheId { get; set; }
        object CacheData { get; set; }
        int ExpirationValue { get; set; }
        Enums.enumTimePrecision TimePrecision { get; set; }
        DateTime ExpirationStart { get; set; }
        Enums.enumExpirationType ExpirationType { get; set; }
        void Register();
        void Unregister();

        #endregion
    }
}

THE CLIENT PROGRAM SOURCE CODE

It’s the responsibility of the client to call the register, unregister, and change methods on the Subject objects, whenever applicable. The client can also control the lifetime of the Subject object it invokes by overriding the default properties that are set in the Subject’s default constructor. A developer can do this either by injecting the overridden property values into the Subject’s overloaded constructor, or by simply setting new lifetime property values on the Subject object and then calling the Subject object’s Change() method.

There’s one final note here: the callback methods are defined by the client program. You’ll see evidence of this where I’ve provided lines like this in the source code below: subject1.NotifyChanged += “Your defined method here!”. This makes the design completely flexible, because multiple Subject objects can either share the same notification callback method in the client program, or each instance can define its own.

Also, because the Subject object is generic, I don’t need to implement concrete Subject objects, and they can be defined on-the-fly. This means that I don’t need to redeploy the Observer assembly each time I add a new Subject. This eliminates the other negative that’s typically associated with the Observer design pattern.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Net.NetworkInformation;
using System.Collections;


namespace ObserverClient
{
    class Program
    {
        /// <summary>
        /// The main entry point into the application
        /// </summary>
        static void Main(string[] args)
        {
            // Register subject 1
            Subject subject1 = new Subject { SubscriptionId = "1", CacheId = "1", CacheData = "1" };
            // Tie the following event handler to any notifications received on this particular subject
            subject1.NotifyChanged += s_testCacheObserver1NotifyChanged_One;
            // Override the default lifetime behavior with a sliding expiration
            subject1.ExpirationType = Enums.enumExpirationType.Sliding;
            subject1.Register();

            // Register subject 2
            Subject subject2 = new Subject { SubscriptionId = "1", CacheId = "2", CacheData = "2" };
            // Tie the following event handler to any notifications received on this particular subject
            subject2.NotifyChanged += s_testCacheObserver1NotifyChanged_One;
            subject2.Register();

            // Register subject 3 (Change() performs an add-or-update, so this also registers it)
            Subject subject3 = new Subject { SubscriptionId = "1", CacheId = "1", CacheData = "Boom!" };
            // Tie the following event handler to any notifications received on this particular subject
            subject3.NotifyChanged += s_testCacheObserver1NotifyChanged_Two;
            subject3.Change();

            // Unregister subject 2. Only subject 2's notification event should fire and the
            // notification should be specific about the operations taken on it
            subject2.Unregister();

            // Change subject 1's data.  Only subject 1's notification event should fire and the
            // notification should be specific about the operations taken on it
            subject1.CacheData = "Change Me";
            subject1.Change();

            // Hang out and let the system clean up after itself.  Events should only fire for those
            // objects that are self-unregistered.  The system is capable of maintaining itself.
            Console.ReadKey();
        }

        /// <summary>
        /// Notifications are received from the Subject whenever changes have occurred.
        /// </summary>
        static void s_testCacheObserver1NotifyChanged_One(Subject notifyInfo, Enums.enumSubjectAction action)
        {
            var data = notifyInfo;
        }

        /// <summary>
        /// Notifications are received from the Subject whenever changes have occurred.
        /// </summary>
        static void s_testCacheObserver1NotifyChanged_Two(Subject notifyInfo, Enums.enumSubjectAction action)
        {
            var data = notifyInfo;
        }
    }
}

THE LIFETIME MANAGER CLASS SOURCE CODE

Again, the LifetimeManager Class is my own creation. The goal of this class, which I’ve already mentioned a couple of times in this article, is to supply default properties that will allow the Subject to maintain its own lifetime without the need for the Unregister() method having to be called explicitly by the client program.

So, while I still believe it’s imperative that the client program explicitly call the Subject object’s Unregister() method, it’s comforting knowing there’s a backup plan in place if for some reason that doesn’t happen.

I’ve also called out all of the granular lifetime options in the source code. As you can see for yourself, the code currently accepts anything from milliseconds to years, and everything in between (light-years would have been really cool). I could have made it even more granular, but I can’t imagine anyone registering and unregistering an observed Subject for less than a millisecond. Likewise, I can’t imagine anyone storing an observed Subject for as long as a year, even though this implementation can technically observe Subject objects for as long as ±1.7 × 10^308 years, the upper bound of a double. That seems sufficient, don’t you think? 🙂


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Timers;
using System.Globalization;

namespace ObserverClient
{
    /// <summary>
    /// The LifetimeManager class provides additional operations that the Subject class
    /// should be aware of but fall outside its immediate scope of attention.
    /// </summary>
    public class LifetimeManager
    {
        #region Member Variables

        private Timer timer = new Timer();

        #endregion

        #region Properties

        public object SubscriptionId { get; set; }
        public object CacheId { get; set; }
        public object CacheData { get; set; }
        public int ExpirationValue { get; set; }
        public Enums.enumExpirationType ExpirationType { get; set; }
        public Enums.enumTimePrecision TimePrecision { get; set; }
        public DateTime ExpirationStart { get; set; }

        #endregion

        #region Methods

        /// <summary>
        /// Fires when the object's time to live has expired
        /// </summary>
        void s_TimeHasExpired(object sender, ElapsedEventArgs e)
        {
            // Delete the Observer Cache and notify the caller
            NotifyObjectExpired();
        }

        /// <summary>
        /// The delegate for the NotifyObjectExpired event
        /// </summary>
        public delegate void NotifyObjectExpiredHandler();

        /// <summary>
        /// Provides expiration monitoring capabilities for itself (self-maintained expiration)
        /// </summary>
        internal void MonitorExpiration()
        {
            double milliseconds = 0;

            switch (this.TimePrecision)
            {
                case Enums.enumTimePrecision.Milliseconds:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddMilliseconds(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Seconds:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddSeconds(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Minutes:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddMinutes(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Hours:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddHours(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Days:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddDays(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Months:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddMonths(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                case Enums.enumTimePrecision.Years:
                    {
                        milliseconds = DateTime.Now.Subtract(DateTime.Now.AddYears(this.ExpirationValue)).TotalMilliseconds;
                        break;
                    }
                default:
                    {
                        break;
                    }

            }

            if (milliseconds == 0)
            {
                // No recognized time precision was supplied, so there's nothing to schedule
                return;
            }

            // Resets any previously running timer (so sliding expirations start over),
            // and then schedules a fresh timer for the computed lifetime
            timer.Stop();
            timer.Dispose();

            timer = new Timer(Math.Abs(milliseconds));
            timer.Elapsed += new ElapsedEventHandler(s_TimeHasExpired);
            timer.Enabled = true;
        }

        #endregion
        
        #region Events

        /// <summary>
        /// Notifies subscribers that the object's lifetime has expired
        /// </summary>
        public event NotifyObjectExpiredHandler NotifyObjectExpired;

        #endregion        
    }
}

WRAPPING THINGS UP

Well, that’s the Observer design pattern in a nutshell. I’ve even addressed the negatives associated with the design pattern. First, I overcame the “memory leak” issue by creating and tying a configurable LifetimeManager class to the Subject object, which makes sure the Unregister() method always gets called, regardless of whether the client program remembers to call it. Secondly, because I keep the Subject object generic and the Observer static, my design only requires one concrete Observer for all Subjects. I’ve also provided you with a subscription-based model that will allow each subscriber to observe one or more Subjects in a highly configurable manner. So, I believe that I’ve covered all the bases here…and hopefully then some.

Feel free to stand the example up for yourself. I think I’ve provided you with all the code you need, except for the Enumeration class, which I believe most of you will be able to quickly figure out for yourselves. Anyway, test drive it if you’d like and let me know what you think. I’m particularly interested in what you think about the inclusion of the LifetimeManager class. All comments and questions are always welcome.

Thanks for reading and keep on coding! 🙂

CouplingDesignPatterns

Author: Cole Francis, Architect

BACKGROUND PROBLEM

My last editorial focused on building out a small application using a simple Service Locator Pattern, which exposed a number of cons whenever the pattern is used in isolation. As you might recall, one of the biggest problems that developers and architects have with this pattern is the way that service object dependencies are created and then inconspicuously hidden from their callers inside the service object register of the Service Locator Class. This behavior can result in a solution that successfully compiles at build-time but then inexplicably crashes at runtime, often offering no insight into what went wrong.

THE REAL PROBLEM

I think it’s fair to say that when some developers think about design patterns, they don’t always consider the possibility of combining one design pattern with another to create a more extensible and robust framework. The reason opportunities like these are overlooked is that the potential for a pattern’s extensibility isn’t always obvious to its implementer.

For this very reason, I think it’s important to demonstrate how certain design patterns can be coupled together to create some very malleable application frameworks, and to prove my point I took the Service Locator Pattern I covered in my previous editorial and combined it with a very basic Factory Pattern.

Combining these two design patterns provides us with the ability to clearly separate the “what to do” from the “when to do it” concerns. It also offers build-time type checking and the ability to test each layer of the application using an object’s interface. Enough chit-chat. Let’s get on with the demo!

THE SOLUTION

Suppose we are a selective automobile manufacturer and offer two well-branded models:

    (1) A luxury model named “The Drifter”.
    (2) A sport luxury model named “The Showdown”.

To keep things simple, I’ve included very few parts for each make’s model. So, while each model is equipped with its own engine and emblem, both models share the same high-end stereo package and high-performance tires. Shown below is a snapshot of the ServiceLocator class, which looks nearly identical to the one I included in my last editorial, apart from the changes I’ve made to the constructor. I’ve also kept the code examples consistent throughout the rest of this article in order to depict how the different classes and design patterns get tied together:


namespace FactoryPatternExample
{
    public class ServiceLocator
    {
        #region Member Variables

        /// <summary>
        /// An early loaded dictionary object acting as a memory map for each interface's concrete type
        /// </summary>
        private IDictionary<object, object> services;

        #endregion

        #region IServiceLocator Methods

        /// <summary>
        /// Resolves the concrete service type using a passed in interface
        /// </summary>
        public T Resolve<T>()
        {
            try
            {
                return (T)services[typeof(T)];
            }
            catch (KeyNotFoundException)
            {
                throw new ApplicationException("The requested service is not registered");
            }
        }

        /// <summary>
        /// Extends the service locator capabilities by allowing an interface and concrete type to
        /// be passed in for registration (e.g. if you wrap the assembly and wish to extend the
        /// service locator to new types added to the extended project)
        /// </summary>
        public void Register<T>(object resolver)
        {
            try
            {
                this.services[typeof(T)] = resolver;
            }
            catch (Exception)
            {
                throw;
            }
        }

        #endregion

        #region Constructor(s)

        /// <summary>
        /// The service locator constructor, which resolves a supplied interface with its corresponding concrete type
        /// </summary>
        public ServiceLocator()
        {
            services = new Dictionary<object, object>();

            // Registers the service in the locator
            this.services.Add(typeof(IDrifter_LuxuryVehicle), new Drifter_LuxuryVehicle());
            this.services.Add(typeof(IShowdown_SportVehicle), new Showdown_SportVehicle());
        }

        #endregion
    }
}


Where the abovementioned code differs from a basic Service Locator implementation is when we add our vehicles to the service register’s Dictionary object in the ServiceLocator class constructor. When this occurs, the following parts are registered using a Factory Pattern that gets invoked in the constructor of the shared Vehicle base class (shown further below):


 
namespace FactoryPatternExample.Vehicles.Models
{
    public class Drifter_LuxuryVehicle : Vehicle, IDrifter_LuxuryVehicle
    {
        /// <summary>
        /// Factory Pattern for the luxury vehicle line of automobiles
        /// </summary>
        public override void CreateVehicle()
        {
            Parts.Add(new Parts.Emblems.SilverEmblem());
            Parts.Add(new Parts.Engines._350_LS());
            Parts.Add(new Parts.Stereos.HighEnd_X009());
            Parts.Add(new Parts.Tires.HighPerformancePlus());
        }
    }
}



 
namespace FactoryPatternExample.Vehicles.Models
{
    public class Showdown_SportVehicle : Vehicle, IShowdown_SportVehicle
    {
        /// <summary>
        /// Factory Pattern for the sport luxury vehicle line of automobiles
        /// </summary>
        public override void CreateVehicle()
        {
            Parts.Add(new Parts.Emblems.GoldEmblem());
            Parts.Add(new Parts.Engines._777_ProSeries());
            Parts.Add(new Parts.Stereos.HighEnd_X009());
            Parts.Add(new Parts.Tires.HighPerformancePlus());
        }
    }
}


As you can see from the code above, both subtype classes inherit from the Vehicle() Base Class, but each subtype implements its own distinctive interface (e.g. IDrifter_LuxuryVehicle and IShowdown_SportVehicle). Forcing each subclass to implement its own unique interface is what ultimately allows a calling application to distinguish one vehicle type from another.

Additionally, it’s the Vehicle() Base Class that calls the CreateVehicle() Method inside its Constructor. But, because the CreateVehicle() Method in the Vehicle() Base Class is overridden by each subtype, each subtype is given the ability to add its own set of exclusive parts to the list of parts in the base class. As you can see, I’ve hardcoded all of the parts in my example out of convenience, but they can originate just as easily from a data backing store.



namespace FactoryPatternExample.Vehicles
{
    public abstract class Vehicle : IVehicle
    {
        List<Part> _parts = new List<Part>();

        public Vehicle()
        {
            this.CreateVehicle();
        }

        public List<Part> Parts
        { 
            get
            {
                return _parts;
            }
        }

        // Factory Method
        public abstract void CreateVehicle();
    }
}
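For completeness, the post references a few supporting types that never appear in its listings: the Part class, the shared IVehicle interface, and the two model-specific marker interfaces. Here’s a minimal sketch consistent with the calls made elsewhere in the code (the exact names, property types, and namespaces are my assumptions):

```csharp
using System.Collections.Generic;

namespace FactoryPatternExample.Vehicles
{
    // Hypothetical Part class; the Label and Description properties are
    // inferred from the client code's Console.WriteLine call
    public class Part
    {
        public string Label { get; set; }
        public string Description { get; set; }
    }

    // The shared vehicle contract; the client casts a resolved model to
    // IVehicle in order to reach its Parts collection
    public interface IVehicle
    {
        List<Part> Parts { get; }
        void CreateVehicle();
    }

    // Model-specific marker interfaces that let the ServiceLocator (and the
    // compiler) distinguish one vehicle type from another
    public interface IDrifter_LuxuryVehicle : IVehicle { }
    public interface IShowdown_SportVehicle : IVehicle { }
}
```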


As for the caller (e.g. a client application), it only needs to resolve an object using that object’s interface via the Service Locator in order to obtain access to its publicly exposed methods and properties (see below):


FactoryPatternExample.ServiceLocator serviceLocator = new FactoryPatternExample.ServiceLocator();
IDrifter_LuxuryVehicle luxuryVehicle = serviceLocator.Resolve<IDrifter_LuxuryVehicle>();

if (luxuryVehicle != null)
{
     foreach (Part part in ((IVehicle)(luxuryVehicle)).Parts)
     {
          Console.WriteLine(string.Concat("   - ", part.Label, ": ", part.Description));
     }
}

Here are the results after making a few minor tweaks to the UI code:

The Results

What’s even more impressive is that the Service Locator now offers compile-time type checking and the ability to test each layer of the code in isolation thanks to the inclusion of the Factory Pattern:

BuildTimeError

In summary, many of the faux pas experienced when implementing the Service Locator design pattern can be overcome by coupling it with a slick little Factory design pattern. What’s more, if we apply this same logic both equitably and ubiquitously across all design patterns, then it seems unfair to take a single design pattern and criticize its integrity and usefulness in complete isolation, because it’s often the combination of multiple design patterns that makes frameworks and applications more cohesive and robust. Thanks for reading and keep on coding! 🙂