Archive for the ‘.NET Architecture’ Category

XFactor

By: Cole Francis, Architect, PSC, LLC


THE PROBLEM

So, what do you do when you're building a website and you have a long-running client-side call to a Web API layer?  Naturally, you're going to do what most developers do and call the Web API asynchronously.  This way, your code can continue to cruise along until a result finally returns from the server.

But, what if matters are actually worse than that?  What if your Web API Controller code contacts a Repository POCO that then calls a stored procedure through the Entity Framework?  And, what if the Entity Framework leverages a project-dedicated database, as well as a system-of-record database, and calls to your system-of-record database sporadically fail?

Like most software developers, you would lean towards looking at the log files, hoping they offer traceability into your code.  But, what if there wasn't any logging baked into the code?  Even worse, what if this problem only occurred sporadically?  And, when it occurs, orders don't make it into the system-of-record database, which means that things like order changes and financial transactions don't occur.  Have you ever been in a situation like this one?


PART I – HERE COMES ELMAH

From a programmatic perspective, let's hypothetically assume that the initial code had the controller code calling the repository POCO in a simple For/Next loop that iterates a hardcoded 10 times.  So, if just one of the 10 iterative attempts succeeds, then the order was successfully processed.  In this case, the processing thread would break free from the critical section in the For/Next loop and continue down its normal processing path.  This, my fellow readers, is what's commonly referred to as "Optimistic Programming".

The term, “Optimistic Programming”, hangs itself on the notion that your code will always be bug-free and operate on a normal execution path.  It’s this type of programming that provides a developer with an artificial comfort level.  After all, at least one of the 10 iterative calls will surely succeed.  Right?  Um…right?  Well, not exactly.

Jack Ganssle, from the Ganssle Group, does an excellent job explaining why this development approach can often lead to catastrophic consequences.  He does this in his 2008 online rant entitled, “Optimistic Programming“.  Sure, his article is practically ten years old at this point, but his message continues to be relevant to this very day.

The bottom line is that without knowing all of the possible failure points, their potential root cause, and all the alternative execution paths a thread can tread down if an exception occurs, you're probably setting yourself up for failure.  I mean, are 10 attempts really any better than one?  Are 10,000 calls really any better than 10?  Not only are these flimsy hypotheses with little or no real evidence to back them up, but they further convolute and mask the underlying root cause of practically any issue that arises.  The real question is, "Why are 10 attempts necessary when only one should suffice?"

So, what do you do in a situation when you have very little traceability into an ailing application in Production, but you need to know what's going on with it…like yesterday!  Well, the first thing you do is place a phone call to The PSC Group, headquartered in Schaumburg, IL.  The second thing you do is ask for the help of Blago Stephanov, known internally to our organization as "The X-Factor", and for a very good reason.  This guy is great at his craft and can accelerate the speed of development and problem solving by at least a factor of 2…that's no joke.

In this situation, Blago recommends using a platform like Elmah for logging and tracing unhandled errors.  Elmah is a droppable, pluggable logging framework that dynamically captures all unhandled exceptions.  It also offers color-coded stack traces with line numbers that can help pinpoint exactly where the exception was thrown.  Even more impressive, it's very quick to implement and requires little personal involvement during integration and setup.  In a nutshell, it makes debugging a breeze.
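The "droppable, pluggable" claim mostly boils down to a NuGet package plus a few web.config entries.  As a rough sketch (this mirrors Elmah's documented module and handler registrations for IIS integrated mode; the exact sections can vary by hosting setup):

```xml
<!-- web.config: registers Elmah's logging module and its /elmah.axd viewer page -->
<configuration>
  <system.webServer>
    <modules>
      <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
    </modules>
    <handlers>
      <add name="Elmah" path="elmah.axd" verb="POST,GET,HEAD"
           type="Elmah.ErrorLogPageFactory, Elmah" preCondition="integratedMode" />
    </handlers>
  </system.webServer>
</configuration>
```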

Additionally, Elmah comes with a web page that allows you to remotely view the unhandled exceptions.  This is a fantastic function for determining the various paths, both normal and alternate, that lead up to an unhandled error. Elmah also allows developers to manually record their own information by using the following syntax.

ErrorSignal.FromCurrentContext().Raise(ex);

 

Regardless, Elmah's capabilities go well beyond just recording exceptions. For all practical purposes, you can record just about any information you desire. If you want to know more about Elmah, then you can read up on it by clicking here.  Also, you'll be happy to know that you can buy it for the low, low price of…free.  It just doesn't get much better than this.


PART II – ONE REALLY COOL (AND EXTREMELY RELIABLE) RE-TRY PATTERN

So, after implementing Elmah, let’s say that we’re able to track down the offending lines of code, and in this case the code was failing in a critical section that iterates 10 times before succeeding or failing silently.  We would have been very hard-pressed to find it without the assistance of Elmah.

Let’s also assume that the underlying cause is that the code was experiencing deadlocks in the Entity Framework’s generated classes whenever order updates to the system-of-record database occur.  So, thanks to Elmah, at this point we finally have some decent information to build upon.  Elmah provides us with the stack trace information where the error occurred, which means that we would be able to trace the exception back to the offending line(s) of code.

After we do this, Blago recommends that we craft a better approach in the critical section of the code.  This approach provides more granular control over any programmatic retries if a deadlock occurs.  So, how is this better, you might ask?  Well, keep in mind from your earlier reading that the code was simply looping 10 times in a For/Next loop.  By implementing his recommended approach, we'll have the ability not only to control the number of iterative reattempts, but also the wait time between reattempted calls, as well as the ability to log any meaningful exceptions if they occur.

 

       /// <summary>
       /// Places orders in a system-of-record DB
       /// </summary>
       /// <param name="orderId">The ID of the order being placed</param>
       /// <returns>An http response object</returns>
       [HttpGet]
       public IHttpActionResult PlaceOrder(int orderId)
       {
           using (var or = new OrderRepository())
           {
               Retry.DoVoid(() => or.PlaceTheOrder(orderId));
               return Ok();
           }
       }

 

The above Retry.DoVoid() method calls into the following generic logic, which performs its job flawlessly.  What’s more, you can see in the example below where Elmah is being leveraged to log any exceptions that we might encounter.

 

using Elmah;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace PSC.utility
{
   /// <summary>
   /// Provides reliable and traceable retry logic
   /// </summary>
   public static class Retry
   {
       /// <summary>
       /// Retry logic
       /// </summary>
       /// <returns>Fire and forget</returns>
        public static void DoVoid(Action action, int retryIntervalInMS = 300, int retryCount = 5)
        {
            Do<object>(() =>
            {
                action();
                return null;
            }, retryIntervalInMS, retryCount);
        }

        public static T Do<T>(Func<T> action, int retryIntervalInMS = 300, int retryCount = 5)
        {
            var exceptions = new List<Exception>();
            TimeSpan retryInterval = TimeSpan.FromMilliseconds(retryIntervalInMS);

            for (int retry = 0; retry < retryCount; retry++)
            {
                bool success = true;

                try
                {
                    if (retry > 0)
                    {
                        Thread.Sleep(retryInterval);
                    }
                    return action();
                }
                catch (Exception ex)
                {
                    success = false;
                    exceptions.Add(ex);
                    ErrorSignal.FromCurrentContext().Raise(ex);
                }
                finally
                {
                    if (retry > 0 && success)
                    {
                        ErrorSignal.FromCurrentContext().Raise(new Exception(string.Format("The call was attempted {0} times. It finally succeeded.", retry + 1)));
                    }
                }
            }
            throw new AggregateException(exceptions);
        }
   }
}
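To see the generic `Do<T>` overload in action without the Elmah and Web API dependencies, here's a self-contained sketch of the same pattern.  The transient failure is simulated, and everything below is illustrative rather than part of the original solution:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public static class RetryDemo
{
    // Same shape as Retry.Do<T> above, minus the Elmah logging calls.
    public static T Do<T>(Func<T> action, int retryIntervalInMS = 300, int retryCount = 5)
    {
        var exceptions = new List<Exception>();

        for (int retry = 0; retry < retryCount; retry++)
        {
            try
            {
                if (retry > 0)
                {
                    Thread.Sleep(retryIntervalInMS); // wait between reattempts
                }
                return action();
            }
            catch (Exception ex)
            {
                exceptions.Add(ex); // the real class raises these to Elmah
            }
        }
        throw new AggregateException(exceptions);
    }
}

public static class RetryDemoUsage
{
    public static int Run()
    {
        int attempts = 0;

        // Fails twice, then succeeds on the third attempt.
        return RetryDemo.Do<int>(() =>
        {
            attempts++;
            if (attempts < 3)
            {
                throw new InvalidOperationException("simulated deadlock");
            }
            return 42;
        }, retryIntervalInMS: 1);
    }
}
```

Calling `RetryDemoUsage.Run()` returns 42 after two simulated failures; if all five attempts had failed, the `AggregateException` would carry every captured exception for diagnosis.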

As you can see, the aforementioned Retry() pattern offers a much more methodical and reliable approach to invoke retry actions in situations where our code might be failing a few times before actually succeeding.  But, even if the logic succeeds, we still have to ask ourselves questions like, “Why isn’t one call enough?” and “Why are we still dealing with the odds of success?”

After all, we have absolutely no verifiable proof that looping and reattempting 10 times achieves the necessary "odds of success", so why should there be any speculation at all in this matter?  We're talking about pushing orders into a system-of-record database for revenue purposes, and the ability to process orders shouldn't boil down to "odds of success".  It should just work…every time!

Nonetheless, what this approach will buy us is one very valuable thing, and that’s enough time to track down the issue’s root cause.  So, with this approach in place, our number one focus would now be to find and solve the core problem.
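One refinement worth considering (it isn't part of the hypothetical original) is to retry only when the failure looks genuinely transient, such as a SQL Server deadlock, so that permanent bugs surface immediately instead of being retried into oblivion.  A rough sketch, assuming the classic System.Data.SqlClient provider:

```csharp
using System;
using System.Data.SqlClient;

public static class TransientFaultDetector
{
    // Returns true only for failures that are plausibly worth retrying.
    public static bool IsTransient(Exception ex)
    {
        // SQL Server reports a deadlock victim as error number 1205.
        var sqlEx = ex as SqlException;
        if (sqlEx != null && sqlEx.Number == 1205)
        {
            return true;
        }

        // Timeouts are the other common transient candidate.
        return ex is TimeoutException;
    }
}
```

Inside `Retry.Do<T>`'s catch block, a non-transient exception could then simply be rethrown instead of swallowed and retried.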


PART III – PROBLEM SOLVED

So, at this point we've resigned ourselves to the fact that, although the aforementioned retry logic doesn't hurt a thing, it masks the core problem.

Blago recommends that the next step is to load test the failing method by creating a large pool of concurrent users (e.g. 1,000) all simulating the order update function at the exact same time.  I’ll also take it one step further by recommending that we also need to begin analyzing and profiling the SQL Server stored procedures that are being called by the Entity Framework and rejected.

I recommend that we first review the execution plans of the failing stored procedures, making sure their compiled execution plans aren't lopsided.  If we happen to notice that too much time is being spent on individual tasks inside a stored procedure's execution plan, then our goal should be to optimize them.  Ideally, what we want to see is an even distribution of time optimally spread across the various execution paths inside our stored procedures.

In our hypothetical example, we'll assume there are a couple of SQL Server tables using composite keys to define a unique record on the Order table.

Let's also assume that during the ordering process, there's a query that leverages the secondary key to retrieve additional data before sending the order along to the system-of-record database.  However, because the composite keys are uniquely clustered, getting the data back out of the table using a single column proves to be too much of a strain for the growing table.  Ultimately, this leads to query timeouts and deadlocks, particularly under load.

To this end, optimizing the offending stored procedures by creating a non-clustered, non-unique index for the key attributes in the offending tables will vastly improve their efficiency.  Once the SQL optimizations are complete, the next step should be to perform more load tests and to leverage the SQL Server Profiling Tool to gauge the impact of our changes.  At this point, the deadlocks should disappear completely.
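As a concrete illustration (the table and column names here are hypothetical), the fix amounts to a single piece of DDL:

```sql
-- A non-clustered, non-unique index on the lone column the query filters by,
-- so lookups no longer strain the clustered composite key.
CREATE NONCLUSTERED INDEX IX_Order_SecondaryKey
    ON dbo.[Order] (SecondaryKeyColumn);
```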


LET'S SUMMARIZE, SHALL WE?

The moral of this story is really twofold.  (1) Everyone should have an “X-Factor” on their project; (2) You can’t beat great code traceability and logging in a solution. If option (1) isn’t possible, then at a minimum make sure that you implement option (2).

Ultimately, logging and traceability help out immeasurably on a project, particularly where root cause analysis is imperative to track down unhandled exceptions and other issues.  It’s through the introduction of Elmah that we were able to quickly identify and resolve the enigmatic database deadlock problems that plagued our hypothetical solution.

While this particular scenario is completely conjectural, situations like these aren't all that uncommon to run across in the field.  Regardless, most of this could have been prevented by following Jack Ganssle's 10-year-old advice, which is to make sure that you check those goesintas and goesoutas!  But, chances are that you probably won't.

Thanks for reading and keep on coding! 🙂

Join me, Cole Francis, as I speak at the Dev Ops in the Burbs inaugural meeting on Thursday, November 3rd, in Naperville, IL. During my 30-minute presentation, I'll discuss and demonstrate how to create and deploy a Microsoft .NET Core application to the Cloud using Docker Containers.  It should be a fun and educational evening.  I look forward to seeing you there.

Organized by Craig Jahnke and Tony Hotko.

DockerContainerTitle.png

By:  Cole Francis, Architect

Before you can try out the .NET Core Docker base images, you need to install Docker.  Let’s begin by installing Docker for Windows.

Docker1.png

Once Docker is successfully installed, you’ll see the following dialog box appear.

Docker2.png

Next, run a PowerShell command window in Administrator mode, and install .NET Core by running the following Docker command.

Docker7.png

Once the download is complete, you’ll see some pertinent status information about the download attempt.

Docker8.png

Finally, run the following commands to create your very first "Hello World" Container.  Admittedly, it's not very exciting, but it runs the Container along with the toolchain, which was pulled down from Microsoft's Docker Hub in just a matter of seconds.

Next, to prove that .NET Core is successfully installed, compile and run the following code from the command line.
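The original screenshots don't reproduce here, but the canonical check is a console "Hello World" like the one below, created and run with the early .NET Core CLI (`dotnet new`, `dotnet restore`, `dotnet run`); the namespace name is just a placeholder:

```csharp
using System;

namespace HelloDocker
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // If this prints, the .NET Core toolchain inside the Container works.
            Console.WriteLine("Hello World!");
        }
    }
}
```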

Congratulations!  You’ve created your first Docker Container, and it only took a couple of minutes.

docker9

Thanks for reading and keep on coding! 🙂

Here’s to a Successful First Year!

Posted: September 24, 2016 in .NET Architecture

ThankYou.png

To my Loyal Readers: 

I published my very first Möbius Straits article approximately one year ago, with an original goal of courting the technical intellect of just a few savvy thought leaders.  In retrospect, this exercise helped me remember just how difficult it is to stay committed to a long-term writing initiative.  To aggravate things just a bit more, my hobbies are motorcycling, music, food and travel…and none of these things align incredibly well with creative technical writing.

So, in an attempt to evade a potential degradation of creativity and writer’s block, I experimented with a number of things, including co-creating a small writing group encouraging individuals to write articles based upon their own unique interests.  This, in turn, offered me a dedicated block of time to work on my own material, spurring my productivity for a while.  The bottom line is that I forced myself to find time to write about things that I thought the technical community might find both relevant and interesting.  In my humble opinion, there is no greater gift than sharing what you know with someone else who can benefit from your experience and expertise. 

Regardless, I now find myself embarking on my second year of creative technical writing, and as I pore over the first year's readership analytics, I'm very enthusiastic about what I see.  For example, over the past four months, Möbius Straits has found a steady footing of 600-650 readers per month.  What I've also discovered is that many of you are returning readers, representing over 130 countries around the World.  Also, with five days still left in September, this month's readership is projected to reach over 700 unique visitors for the first time ever.

bestmonthonrecord

Finally, from a holistic standpoint, the data is even more exciting, as the total number of non-unique Möbius Straits’ visits has grown almost 800% since January 1, 2016 (see the chart below), suggesting a very strong and loyal monthly following.  I am without words, maybe for the first time ever, and cannot thank you enough for your ongoing patronage.  As I mentioned in my original paragraph, it can be difficult to stay committed to a long-term writing initiative; however, your ongoing support is more than enough inspiration to keep me emotionally invested for another year.  I really owe this to you.  Once again, thank you so much.  Love!

2015to2016

 

Thanks for reading and keep on coding! 🙂

CreativeIntegrationIoT.png

 

Author:  Cole Francis, Architect


BACKGROUND

This weekend I picked up a Raspberry Pi 3 Model B, which is the latest single-board computer from the Raspberry Pi Foundation. The Model B's capabilities are quite impressive. For instance, it's capable of streaming Blu-ray-quality video, and its 40-pin GPIO header gives you access to 27 GPIO pins, UART, I2C, and SPI, as well as both 3.3V and 5V power sources. It also comes with onboard Wi-Fi and Bluetooth, all in a compact unit that's only slightly larger than a debit card.

What's more, I also purchased a 7″ touch display that plugs right into the Raspberry Pi's motherboard.  I was feeling creative, so I decided to find a way to take the technical foundation that I came up with in my previous article and somehow incorporate the Pi into that design, essentially taking it to a whole new level.  If you read my previous article, then you already know that my original design looks like this:

Microsoft Flow

Basically, the abovementioned design workflow represents a Microsoft Office 365 Flow component monitoring my Microsoft Office 365 Exchange Inbox for incoming emails. It looks for anything with “Win” in the email subject line and automatically calls an Azure-based MVC WebAPI endpoint whenever it encounters one.  In turn, the WebAPI endpoint then calls an internal method that sends out another email to User 2. 

In any event, I created the abovementioned workflow to simply prove that we can do practically anything we want with Microsoft Flow acting as a catalyst to perform work across disparate platforms and codebases.

However, now I’m going to alter the original design workflow just a bit.  First, I’m going to change the Microsoft Flow component to start passing in email subject lines into our Azure-based WebAPI endpoint.  Secondly, I’m eliminating User 2 and substituting this person with the Raspberry Pi 3 IoT device running on a Windows 10 IoT Core OS. Never fear, in this article I’m also going to provide you with step-by-step instructions on how to install the OS on a Raspberry Pi 3 device.  Also, from this point on I’m going to refer to the Raspberry Pi 3 as “the Pi” just because it’s easier.

Once again, if you read my previous article, then you already know that the only time the Microsoft Flow component contacts the WebAPI is if an inbound email's subject line matches the criteria we set up for it in Microsoft Flow.  In our new design, our Flow component will now pass the email subject line to a WebAPI endpoint, which will enqueue it in a static property in the cloud.

Separately, the Pi will also contact the Azure-hosted WebAPI endpoint on a regularly scheduled interval to see if an enqueued subject is being stored.  If so, then the Pi’s call to the WebAPI will cause the WebAPI to dequeue the subject line and return it to the Pi.  Finally, the Pi will interrogate the returned subject line and perform an automated action using the returned data.  The following technical design workflow probably lays it out better than I can explain it.

FlowDesign2.png


SOLUTION

Our solution will take us through a number of steps, including:

  1. Installing Microsoft Windows 10 IoT Core on the Pi.
  2. Modifying the Microsoft Flow component that we created in the previous article.
  3. Modifying the Azure-based (cloud) WebAPI2 project that I created in my previous article on Microsoft Flow.
  4. Creating a new Universal Windows Application that will reside on the Pi.

So, let’s get started by first setting up the Pi and installing Microsoft 10 IoT Core on it. We’re going to build our own little Smart Factory.


SETTING UP THE RASPBERRY PI 3

First, we’ll need to download the tools that are necessary to get the Windows IoT Core on the Pi.  You can get them here:

https://developer.microsoft.com/en-us/windows/iot/Downloads.htm

After we download the abovementioned tools, we’ll install them on our laptop or desktop.  Then we’ll be presented with the following wizard that will help guide us through the rest of the process.  The first screen that shows up is the “My devices” screen.  As you can see, it’s blank, and I can honestly say that I’ve never seen anything filled in this portion of the wizard, so you can ignore this section for now.  At this point, let’s sign into our Microsoft MSDN account and begin navigating through the wizard.

IoTWizard1

We can move on to the "Set up a new device" screen at this point:

IoTWizard2.png

Once we're done choosing our options, we'll click the "Download and install" button in the lower right-hand corner of the screen.  We'll be prompted to insert an SD card if we haven't already.

***A small word of caution***  The Raspberry Pi 3 uses a MicroSD card to host its operating system on, so take that into consideration when shopping for SD Cards.  What you’ll probably want to get is a MicroSD with a regular SD card adapter.  That’s what I did.  You’ll also want to study the SD Cards that Microsoft recommends for compatibility.  I unsuccessfully burnt through three SD cards before I gave up and went with their recommendation.  After conceding and going with a compatible SD card, I was able to render the Windows 10 IoT Core OS successfully, so don’t make the same costly mistake I made. 

Anyway, we’ll eventually get to the point where we’re asked to erase the data on the SD card we’ve inserted.  This process deletes all existing data on our SD card, formats it using a FAT32 file system, and then installs the Windows 10 IoT Core image on it.

IoTWizard4.png

You should see the following screen when the wizard starts copying the files onto the SD card:

IoTWizard5.png

Our SD card is finally ready for action.

IoTWizard6.png

At this point, we can remove the SD card adapter from our laptop or desktop, and also remove the microSD card from the adapter.  Next, insert the microSD card into the Pi's microSD port and then boot it up.

Afterwards, we’ll connect an Ethernet cable from our laptop (or optionally a desktop) to the Ethernet port on the Raspberry Pi.  Then we’ll run the following command using the Pi’s local IP address.  For example, my Pi’s IP address is 169.254.16.5, but your Pi’s IP address might be different, so pay close attention to this detail.

Anyway, this sets the Pi up as a Remote Managed Trusted Host and allows us to administer it from our local machine, which in this case is a laptop.  So, now we should be able to deploy our code to the Pi and interact with it in Visual Studio 2015 debug mode.

IoTWizard7.png

At this point, all of the heavy lifting for the Pi’s OS installation and communication infrastructure is complete.


MODIFYING OUR EXISTING MICROSOFT FLOW COMPONENT

So, let’s piggyback off of the previous article I wrote about on Microsoft Flow and extend it to incorporate a Pi into the mix.  But, before we do, let’s tweak our Microsoft “PSC Win Wire” Flow component just a bit, since our new design goal is to start passing in the subject line of an inbound email to an Azure-hosted WebAPI endpoint.  If you recall, in the previous article we were simply calling a WebAPI endpoint without passing a parameter.  So, let’s change the “PSC Win Wire” Flow component so that we can start passing an email subject line to a WebAPI endpoint.  We’ll accomplish this by making the changes you see in the picture below.

IoTWizard8.png

We’re now officially done with the necessary modifications to our Microsoft Flow component, so let’s save our work.

Once again, it's the Flow component's job to continually monitor our email inbox for any emails that match the conditions that we set up, which in this case is whether "PSC Win Wire" is included in the inbound email's subject line.  Once this condition is met, our Flow component will call the "SetWhoSoldTheBusiness" endpoint in the Azure-hosted WebAPI, and the WebAPI will enqueue the email subject line.


MICROSOFT AZURE .NET MVC WebAPI (THE CLOUD)

Now let’s focus our attention on creating a couple of new WebAPI endpoints using Visual Studio 2015.  First, let’s create a SetWhoSoldTheBusiness endpoint that accepts a string parameter, which will contain the email subject line that gets passed to us by the Flow component.   Next, we’ll create a GetWhoSoldTheBusiness endpoint, which will be called by the Pi to retrieve email subject lines, as shown in the C# code below.



namespace BlueBird.Controllers
{
    /// <summary>
    /// The email controller
    /// </summary>
    public class EmailController : ApiController
    {
        /// <summary>
        /// Set the region that sold the business
        /// </summary>
        /// <param name="subjectLine">The subject line of the email</param>
        // GET: api/SetWhoSoldTheBusiness?subjectLine=""
        [HttpGet]
        public void SetWhoSoldTheBusiness(string subjectLine)
        {
            Email.SetWhoSoldTheBusiness(subjectLine);
        }

        /// <summary>
        /// Get the region that sold the business
        /// </summary>
        /// <returns>The email subject line at the head of the queue</returns>
        // GET: api/GetWhoSoldTheBusiness
        [HttpGet]
        public string GetWhoSoldTheBusiness()
        {
            return Email.GetWhoSoldTheBusiness();
        }
    }
}


Whereas our WebAPI endpoint code above acts as a façade layer for calls being made from external callers, the concrete class below is tasked with actually accomplishing the real work, like storing and retrieving the email subject lines.  It’s the job of the BlueBird.Repository.Email class to enqueue and dequeue email subject lines whenever it’s called on to do so by the SetWhoSoldTheBusiness and GetWhoSoldTheBusiness WebAPI endpoints in the abovementioned code.



namespace BlueBird.Repository
{
    /// <summary>
    /// The email repository
    /// </summary>
    public static class Email //: IEmail
    {
        /// <summary>
        /// The queue of regions that sold the business
        /// </summary>
        public static Queue<string> whoSoldTheBusiness = new Queue<string>();

        /// <summary>
        /// Determine who sold the business via the email subject line and drop it in the queue
        /// </summary>
        /// <param name="subjectLine">The email subject line</param>
        public static void SetWhoSoldTheBusiness(string subjectLine)
        {
            if (subjectLine.Contains("KC"))
            {
                whoSoldTheBusiness.Enqueue("KC");
            }
            else if (subjectLine.Contains("CHI"))
            {
                whoSoldTheBusiness.Enqueue("CHI");
            }
            else if (subjectLine.Contains("TAL"))
            {
                whoSoldTheBusiness.Enqueue("TAL");
            }
        }

        /// <summary>
        /// Return the region that sold the business and drop it from the queue
        /// </summary>
        /// <returns>The region that sold the business, or an empty string</returns>
        public static string GetWhoSoldTheBusiness()
        {
            string retVal = string.Empty;

            if (whoSoldTheBusiness != null && whoSoldTheBusiness.Count > 0)
            {
                retVal = whoSoldTheBusiness.Dequeue();
            }

            return retVal;
        }
    }
}


Well, this represents all the work we’ll need to do in the WebAPI project, aside from deploying it to the Azure Cloud.


UNIVERSAL WINDOWS APPLICATION (e.g. UWA)

So, now let’s create a blank Universal Windows Application (herein referred to simply as UWA) in Visual Studio 2015, which will act as a second caller to the WebAPI endpoints we created above.  As a quick recap, our Microsoft Flow component calls a method in our cloud-hosted WebAPI to enqueue email subject lines anytime its conditions are met. 

Thus, it’s only fitting that our UWA, which will be hosted on the Pi, will have the ability to retrieve the data that’s enqueued in our WebAPI so that it can do something creative with that data.  As a result, it will be the responsibility of the UWA living in the Pi to ping our Azure WebAPI GetWhoSoldTheBusiness method every 10 seconds to find out if any enqueued email subject lines exist.  If so, then it will retrieve them. 

What's more, upon retrieving an email subject line, it will interrogate it for the word "KC" (for Kansas City) or "CHI" (for Chicago).  If it finds "KC", then we'll have it play one song on the Pi, and if it finds "CHI", then we'll play a different song.  So, let's start creating our UWA IoT application. We'll use the Visual Studio 2015 (Universal Windows) template to get started.  Let's name the new project PSCBlueBirdIoT, just like what's shown in the screen below:

IoTWizard9.png

After creating the UWA project, we'll want to right-click on the project and enter our Pi's local IP address.  We'll also want to target it as a Remote Machine.  Also, let's make sure that we check the "Uninstall and then re-install my package" option so that we're not creating new instances of our application every time we redeploy to the Pi.  One last item of detail: let's make sure that we check the "Allow local network loopback" option under the "Start Action" grouping as shown below.

IoTWizard10.png

Our code’s going to be really simple for the UWA Project.  Let’s create a simple timer inside of it that fires every ten seconds.  Whenever the timer fires, its sole responsibility will be to make an AJAX call to our Azure-hosted (cloud hosted) Web API endpoint, GetWhoSoldTheBusiness. And, it will pull back that value from the WebAPI queue object if an entry exists.  As previously mentioned, if the email subject line contains “KC” (e.g. “PSC Win Wire- KC”), then we’ll play one song; otherwise, we’ll play a different song if the email subject line contains “CHI” (e.g. “PSC Win Wire – CHI”).  Here’s the code for this:



namespace PSCBlueBirdIoT
{
    /// 
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// 
    public sealed partial class MainPage : Page
    {
        #region Private Member Variables

        /// 
        /// Local timer
        /// 
        DispatcherTimer _timer = new DispatcherTimer();
        Queue _queueDealsWon = new Queue();

        #endregion

        #region Events

        /// 
        /// The main page
        /// 
        public MainPage()
        {
            this.InitializeComponent();
            this.DispatchTimerSetup();

        }

        /// <summary>
        /// Fires on timer tick
        /// </summary>
        /// <param name="sender">The timer</param>
        /// <param name="e">Any additional event arguments</param>
        private void _timer_Tick(object sender, object e)
        {
            this.GetWhoSoldTheBusiness();
        }

        #endregion

        #region Private Methods

        /// <summary>
        /// The setup for the dispatch timer
        /// </summary>
        private void DispatchTimerSetup()
        {
            _timer.Tick += _timer_Tick;
            _timer.Interval = TimeSpan.FromSeconds(10); // fire every ten seconds
            _timer.Start();
        }

        /// <summary>
        /// Get who sold the business
        /// </summary>
        private async void GetWhoSoldTheBusiness()
        {
            try
            {
                using (var client = new HttpClient())
                {
                    string retVal = string.Empty;

                    retVal = await client.GetStringAsync(new Uri("https://yourazurewebsite.net/api/Email/GetWhoSoldTheBusiness"));
                    retVal = retVal.Replace("\\", "");

                    // Only act on non-empty payloads (the endpoint returns "" when its queue is empty)
                    if (!string.IsNullOrEmpty(retVal) && retVal != "\"\"")
                    {
                        if (retVal.Contains("CHI"))
                        {
                            _queueDealsWon.Enqueue("CHI.mp3");
                        }
                        else if (retVal.Contains("KC"))
                        {
                            _queueDealsWon.Enqueue("KC.mp3");
                        }

                        // Only attempt playback when a recognized city was queued
                        if (_queueDealsWon.Count > 0)
                        {
                            StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///Music/" + _queueDealsWon.Dequeue()));
                            BackgroundMediaPlayer.Shutdown();
                            MediaPlayer player = BackgroundMediaPlayer.Current;
                            player.AutoPlay = false;
                            player.SetFileSource(file);
                            player.Play();
                        }
                    }
                }
            }
            catch (Exception)
            {
                throw;   // rethrow without resetting the stack trace
            }
        }

        #endregion
    }
}


Now that this is done, let’s build and deploy our UWA application onto the Pi.  The pictorial below shows it doing its magic.  Because we’ve set the Pi up as a trusted remote machine above, we can also do things like debug it using Visual Studio 2015 (in Administrator mode) on our local machine.

IoTWizard11.png


TESTING IT ALL OUT

At this point, we’re done…as in “done, done”.  🙂  So, let’s test it end-to-end by kicking off an email to ourselves that matches the criteria we entered in our Microsoft Flow component.  If all goes as planned, then our Flow component will pick it up, call our Azure-based WebAPI endpoint, and enqueue our email subject line.

Finally, our UWA, which lives on the Pi, will separately call the other Azure-based WebAPI endpoint every 10 seconds, dequeueing and returning any email subject lines that might exist inside our Azure-hosted WebAPI.  Once the UWA application retrieves an email subject line, it will then determine if either “CHI” or “KC” is present within the subject line and play one song or another based on the response.  Pretty cool, huh?!?  Anyway, here’s a quick video of it in action…
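To make that poll cycle concrete, here’s the dequeue-and-route decision distilled into plain JavaScript — a sketch for illustration only, since the production logic lives in the C# UWA code shown earlier, and the sample subject lines are hypothetical:

```javascript
// A toy model of the Pi's poll cycle: pull a subject line off a FIFO queue
// and map it to the song that should be played.
const queue = [];

function enqueueSubject(subject) {
  queue.push(subject);                                // done by the Flow-triggered endpoint
}

function nextSongToPlay() {
  const subject = queue.length ? queue.shift() : "";  // done by the Pi's timer tick
  if (subject.includes("CHI")) return "CHI.mp3";      // same check order as the C# code
  if (subject.includes("KC")) return "KC.mp3";
  return null;                                        // nothing queued, or no city matched
}
```

The queue decouples the two sides: Flow can enqueue a win at any moment, and the Pi drains it on its own schedule.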

Thanks for reading and keep on coding! 🙂

MicrosoftFlow

Author: Cole Francis, Architect

Today I had the pleasure of working with Microsoft Flow, Microsoft’s latest SaaS-based workflow offering. Introduced in April 2016 and still in Preview mode, Flow allows developers and non-developers alike to rapidly create visual workflow sequences using a number of on-prem and cloud-based services.  In fact, anyone who is interested in “low code” or “no code” integration-centric solutions might want to take a closer look at Microsoft Flow.

Given this, I thought my goal for today would be to leverage Microsoft Flow to create a very rudimentary workflow that gets kicked off by an ordinary email, calls a cloud-based MVC WebAPI endpoint via an HTTP GET request, and ultimately cranks out a second email initiated by that WebAPI endpoint.

Obviously, the custom WebAPI endpoint isn’t necessary to generate the second email, as Microsoft Flow can accomplish this on its own without requiring any custom code at all.  So, the reason I’m adding the custom WebAPI endpoint into the mix is simply to prove that Flow has the ability to integrate with a custom RESTful WebAPI endpoint.  After all, if I can successfully accomplish this, then I can foreseeably communicate with any endpoint on any codebase on any platform.  So, here’s my overall architectural design and workflow:
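Before building anything, it can help to see the sequence as code.  Here’s a toy model of the pipeline in JavaScript — purely illustrative, with the HTTP call and outbound mail injected as functions; the endpoint path shown is an assumption modeled on the routes used elsewhere in this walkthrough:

```javascript
// Toy model of the workflow: an inbound email whose subject contains "Win"
// triggers the flow; the flow calls the WebAPI endpoint via HTTP GET; and
// (in the real design) the endpoint is what sends the second email.
function runFlow(email, callEndpoint, sendMail) {
  if (email.subject.indexOf('Win') === -1) {
    return 'skipped';                                   // trigger condition not met
  }
  callEndpoint('GET', '/api/Email/GetLatestEmails');    // what the HTTP widget does
  sendMail({ subject: 'Generated by the WebAPI' });     // what the endpoint does
  return 'ran';
}
```

Injecting the two side effects keeps the sequencing obvious: trigger, then HTTP GET, then the second email.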

Microsoft Flow

To kick things off, let’s create a simple workflow using Microsoft Flow.  We’ll do this by first logging into Microsoft Office 365.  If we look closely, we’ll find the Flow application within the waffle:

Office365Portal

After clicking on the Flow application, I’m taken to the next screen where I can either choose from an impressive number of existing workflow templates, or I can optionally choose to create my own custom workflow:

FlowTemplates.png

I need to call out that I’ve just shown you a very small fraction of pre-defined templates that are actually available in Flow.  As of this writing, there are hundreds of pre-defined templates that can be used to integrate with an impressive number of Microsoft and non-Microsoft platforms.  The real beauty is that they can be used to perform some very impressive tasks without writing a lick of code.  For example, I can incorporate approval workflows, collect data, interact with various email platforms, perform mobile push notifications (incl. iOS), track productivity, interact with various social media channels, synchronize data, etc…

Moreover, Microsoft Flow comes with an impressive number of triggers, which interact with a generous number of platforms, such as Box, DropBox, Dynamics CRM, Facebook, GitHub, Google Calendar, Instagram, MailChimp, Office365, OneDrive, OneDrive for Business, Project Online, RSS, Salesforce, SharePoint, SparkPost, Trello, Twitter, Visual Studio Team Services, Wunderlist, Yammer, YouTube, PowerApps, and more.

So, let’s continue building our very own Microsoft Flow workflow object.  I’ll do this by clicking on the “My Flows” option at the top of the web page.  This navigates me to a page that displays my saved workflows.  In my case, I don’t currently have any saved workflows, so I’ll click the “Create new flow” button that’s available to me (see the image below).

MyFlows

Next, I’ll search for the word “Mail”, which presents me with the following options:

Office365Email.png

Since the company I work for uses Microsoft Office 365 Outlook, I’ll select that option.  After doing this, I’m presented with the following “Action widget”.

Office365Inbox.png

I will then click on the “Show advanced options” link, which provides me with some additional options.  I’ll fill in the information using something that meets my specific needs.  In my particular case, I want to be able to kick-off my workflow from any email that contains “Win” in the Subject line.

Office365InboxOptions

Next, I’ll click on the (+ New step) link at the bottom of my widget, and I’m presented with some additional options.  As you can see, I can either “Add another action”, “Add a condition”, or click on the “…More” option to do things like “Add an apply to each” option, “Add a do until” condition, or “Add a scope”.

Office365InboxOptions0.png

As I previously mentioned, I want to be able to call a custom Azure-based RESTful WebAPI endpoint from my custom Flow object.  So, I’ll click on “Add another action”, and then I’ll select the “HTTP” widget from the list of actions that are available.

RESTfulWebAPIoption.png

After clicking on the “HTTP” widget, I’m now presented with the “HTTP” widget options.  At a minimum, the “HTTP” object will allow me to specify a URI for my WebAPI endpoint (e.g. http://www.microsoftazure.net/XXXEndpoint), as well as an Http Verb (e.g. GET, POST, DELETE, etc…).  You’ll need to fill in your RESTful WebAPI endpoint data according to your own needs, but mine looks like this:

HTTPOption.png

After I’m done, I can save my custom Flow by clicking the “Create Flow” button at the top of the page and providing my Flow with a meaningful name.  Providing your Flow with a meaningful name is very important, because you could eventually have a hundred of these things, so being able to distinguish one from another will be key.  For example, I named my custom Flow “PSC Win Wire”.  After saving my Flow, I can now do things like create additional Flows, edit existing Flows, activate or deactivate Flows, delete Flows, and review the viability and performance of my existing Flows by clicking on the “List Runs” icon that’s available to me.

SaveFlow.png

In any event, now that I’ve completed my custom Flow object, all I’ll need to do now is quickly spin up a .NET MVC WebAPI2 solution that contains my custom WebAPI endpoint, and then push my bits to the Cloud in order to publicly expose my endpoint.  I need to point out that my solution doesn’t necessarily need to be hosted in the Cloud, as a publicly exposed on-prem endpoint should work just fine.  However, I don’t have a quick way of publicly exposing my WebAPI endpoint on-prem, so resorting to the Cloud is the best approach for me.

I also need to point out again that creating a custom .NET MVC WebAPI isn’t necessary to run Microsoft Flows.  There are plenty of OOB templates that don’t require you to write any custom code at all.  This type of versatility is what makes Microsoft Flow so alluring.

In any case, the end result of my .NET MVC WebAPI2 project is shown below.  As you can see, the core WebAPI code generates an email (my real code will have values where you only see XXXX’s in the pic below…sorry!   🙂 ).

MVCWebAPI.png

The GetLatestEmails() method will get called from a publicly exposed endpoint in the EmailController() class.  For simplicity’s sake, my EmailController class only contains one endpoint, and it’s named GetLatestEmails():

The Controller.png
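Since the controller only appears above as a screenshot, here’s a rough sketch of the shape of that endpoint’s logic — written in JavaScript purely for illustration (the real implementation is C# MVC WebAPI2), with the mail transport injected so the logic can run without an SMTP server, and with the redacted values left as XXXX, just as in the original:

```javascript
// Hypothetical shape of GetLatestEmails(): build a message, hand it to a
// mail transport, and report success back to the caller (e.g. Microsoft Flow).
function getLatestEmails(sendMail) {
  var message = {
    to: 'XXXX',        // redacted in the original post
    subject: 'XXXX',
    body: 'XXXX'
  };
  sendMail(message);
  return { status: 'sent' };
}
```

Injecting `sendMail` is just a convenience for the sketch; the real controller wires up its own mail client.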

So, now that I’m done setting everything up, it’s time for me to publish my code to the Azure Cloud.  I’ll start this off by cleaning and building my solution.  Afterwards, I’ll right-click on my project in the Solution Explorer pane, and then I’ll click on the Publish option that appears below.

Publish1.png

Now that this is out of the way, I’ll begin entering in my Azure Publish Web profile options.  Since I’m deploying an MVC application that contains a WebAPI2 endpoint, I’ve selected the “Microsoft Azure Web Apps” option from the Profile category.

Publish2.png

Next, I’ll enter the “Connection” options and fill that information in.   Afterwards, I should now have enough information to publish my solution to the Azure Cloud.  Of course, if you’re trying this on your own, this example assumes that you already have a Microsoft Azure Account.  If you don’t have a Microsoft Azure account, then you can find out more about it by clicking here.

Publish3.png

Regardless, I’ll click the “Publish” button now, which will automatically compile my code. If the build is successful then it will publish my bits to Microsoft’s Azure Cloud.  Now comes the fun part…testing it out!

First, I’ll create an email that matches the same conditions that were specified by me in the “Office 365 Outlook – When an email arrives” Flow widget I previously created.  If you recall, that workflow widget is being triggered by the word “Win” in the Subject line of any email that gets sent to me, so I’ll make sure that my test email meets that condition.

PSCWinWireEmail

After I send an email that meets my Flow’s conditions, then my custom Flow object should get kicked-off and call my endpoint, which means that if all goes well, then I should receive another email from my WebAPI endpoint.  Hey, look!  I successfully received an email from the WebAPI endpoint, just as I expected.  That was really quick!  🙂

EmailResults.png

Now that we know that our custom Flow object works from A to Z, I want to tell you about another really cool Microsoft Flow feature: the ability to monitor the progress of my custom Flow objects.  I can accomplish this by clicking on the “List Runs” icon in the “My Flows” section of the Microsoft Flow main page (see below).

ListRun1.png

Doing this will conjure up the following page.  From here, I can gain more insight and visibility into the viability and efficiency of my custom Flows by simply clicking on the arrow to the right of each of the rows below.

ListRun2.png

Once I do that, I’m presented with the following page.  At this point, I can drill down into the objects by clicking on them, which will display all of the metadata associated with the selected widget.  Pretty cool, huh!

ListRun3.png

Well, that’s it for this example.  I hope you’ve enjoyed my walkthrough.  I personally find Microsoft Flow to be a very promising SaaS-based workflow offering.

Thanks for reading and keep on coding! 🙂

AngularJS.png

Author: Cole Francis, Architect

BACKGROUND

While you may not be able to tell it by my verbose articles, I am a devout source code minimalist by nature.  Although I’m not entirely certain how I ended up like this, I do have a few loose theories.

  1. I’m probably lazy.  I state this because I’m constantly looking for ways to do more work in fewer lines of code.  This is probably why I’m so partial to software design patterns.  I feel like once I know them, being able to recapitulate them on command allows me to manufacture software at a much quicker pace.  If you’ve spent any time at all playing in the software integration space, then you can appreciate how imperative it is to be quick and nimble.
  2. I’m kind of old.  I cut my teeth in a period when machine resources weren’t exactly plentiful, so it was extremely important that your code didn’t consume too much memory, throttle down the CPU (singular), or take up an extraordinary amount of space on the hard drive or network share.  If it did, people had no problem crawling out of the woodwork to scold you.
  3. I have a guilty conscience.  As much as I would like to code with reckless abandon, I simply cannot bring myself to do it.  I’m sure I would lose sleep at night if I did.  In my opinion, concerns need to be separated, coding conventions need to be followed, yada, yada, yada…  However, there are situations that sometimes cause me to overlook certain coding standards in favor of a lazier approach, and that’s when simplicity trumps rigidity!

So, without further delay, here’s a perfect example of my laziness persevering.  Let’s say that an AngularJS code base exists that properly separates its concerns by implementing a number of client-side controllers that perform their own generic activities. At this point, you’re now ready to lay down the client-side service layer functions to communicate with a number of remote Web-based REST API endpoints.  So, you start to write a bunch of service functions that use the AngularJS $http service and its implied promise pattern, and then suddenly you have an epiphany!  Why not write one generic AngularJS service function that is capable of calling most RESTful Web API endpoints?  So, you think about it for a second, and then you lay down this little eclectic dynamo instead:



var contenttype = 'application/json';

/* A generic async service function that calls a RESTful Web API endpoint
   and hands the implied $http promise straight back to the caller.
   Note: $http takes a headers map rather than jQuery-style contentType
   and dataType options. */
this.serviceAction = function (httpVerb, baseUrl, endpoint, qs) {
  return $http({
    method: httpVerb,
    url: baseUrl + endpoint + qs,
    headers: { 'Content-Type': contenttype },
    responseType: 'json'
  });
};

 
That’s literally all there is to it! So, to wrap things up, on the AngularJS client-side controller you would call the service by implementing a fleshed-out version of the code snippet below. Provided you aren’t passing in lists of data, and as long as the content types and data types follow the same pattern, you should be able to write an endless number of AngularJS controller functions that all call into the same service function, much like the one I’ve provided above. See, I told you I’m lazy. 🙂



/* Async call to the AngularJS service (shown above)
*/
$scope.doStuff = function (passedInId) {

  // Call the AngularJS service layer, which calls the remote endpoint
  httpservice.serviceAction('GET', $scope.baseURL(), '/some/endpoint', '?id=' + passedInId).then(function (response) {
    if (response != null && response.data.length > 0) {
      // Apply the response data to the two-way bound array here!
    }
  });
};
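One natural extension of this pattern — purely a sketch on my part, not part of the original service — is to pull the `$http` config assembly into a framework-free helper, so the verb/URL/body wiring can be unit-tested without Angular and so POSTs with a payload ride the same generic path:

```javascript
// Builds the config object the generic service would hand to $http.
// An optional body parameter lets one helper cover GET and POST calls.
function buildHttpConfig(httpVerb, baseUrl, endpoint, qs, body) {
  var config = {
    method: httpVerb,
    url: baseUrl + endpoint + (qs || ''),
    headers: { 'Content-Type': 'application/json' }
  };
  if (body !== undefined) {
    config.data = body;   // $http serializes this as the JSON request payload
  }
  return config;
}
```

Inside the service, `serviceAction` would then reduce to a one-liner: `return $http(buildHttpConfig(httpVerb, baseUrl, endpoint, qs));`.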

 
Thanks for reading and keep on coding! 🙂