Archive for the ‘.NET Architecture’ Category


Author:  Cole Francis, Senior Solution Architect @ The PSC Group, LLC

Download the Source Code for This Example Here!

BACKGROUND

Traditionally speaking, creating custom Microsoft Windows Services can be a real pain.  The endless and mind-numbing repetitions of using the InstallUtil command-line utility and Ctrl+Alt+P process attachments to debug the code from the Microsoft Visual Studio IDE are more than enough to discourage the average Software Developer.

While many companies are now shying away from writing Windows Services in an attempt to get better optics around job failures, custom Windows Services continue to exist in limited enterprise development situations where certain thresholds of caution are exercised.

But, if you’re ever blessed with the dubious honor of having to write a custom Windows Service, take note of the fact that there are much easier ways of approaching this task than there used to be, and in my opinion one of the easiest ways is to use a NuGet package called TopShelf.

Here are the top three benefits of using TopShelf to create a Windows Service:

  1. The first benefit of using TopShelf is that you get out from underneath the nuances of using the InstallUtil command to install and uninstall your Windows Service.
  2. Secondly, you create your Windows Service using a simple and familiar Console Application template type inside Microsoft Visual Studio.  So, not only is it extraordinarily easy to create, it’s also just as easy to debug and eventually transition into a fully-fledged Windows Service leveraging TopShelf. This involves a small series of steps that I’ll demonstrate for you shortly.
  3. Because you’ve taken the complexity and mystery out of creating, installing, and debugging your Windows Service, you can focus on writing better code.

So, now that I’ve explained some of the benefits of using TopShelf to create a Windows Service, let’s run through a quick step-by-step example of how to get one up and running.  Don’t be alarmed by the number of steps in my example below.  You’ll find that you’ll be able to work through them very quickly.


Step 1

The first step is to create a simple Console Application in Microsoft Visual Studio.  As you can see in the example below, I named mine TopShelfCWS, but you can name yours whatever you want.



Step 2

The second step is to open the NuGet Package Manager from the Microsoft Visual Studio IDE menu and then click on the Manage NuGet Packages for Solution option in the submenu as shown in the example below.



Step 3

After the NuGet Package Manager screen appears, click on the Browse tab at the top of the dialog box, and then search for “TopShelf”.  A number of packages should appear in the list, and you’ll want to select the one shown in the example below.



Step 4

Next, select the version of the TopShelf product that aligns with your project, or simply use the default version that’s automatically selected for you, which is what I did in my working example.

Afterwards, click the Install button.  After the package successfully installs itself, you’ll see a green checkbox by the TopShelf icon, just like you see in the example below.



Step 5

Next, add a new Class to the TopShelfCWS project, and name it something that’s relevant to your solution.  As you can see in the example below, I named my class NameMeAnything.



Step 6

In your new class (e.g. NameMeAnything), add a reference to the TopShelf product, and then inherit from ServiceControl.



Step 7

Afterwards, right-click on the word ServiceControl and implement its interface as shown in the example below.



Step 8

After implementing the interface, you’ll see two new methods show up in your class.  They’re called Start() and Stop(), and they’re the only two members that the TopShelf product relies upon to hook into the Windows Service Start and Stop events.
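
To give you a feel for what that looks like, here’s a minimal sketch of the implemented interface.  The namespace and class names simply follow my TopShelfCWS and NameMeAnything examples, and we’ll put real code inside the two methods in Step 10:

using Topshelf;

namespace TopShelfCWS
{
    public class NameMeAnything : ServiceControl
    {
        // Called by TopShelf when the Windows Service (or console app) starts.
        public bool Start(HostControl hostControl)
        {
            return true;
        }

        // Called by TopShelf when the Windows Service (or console app) stops.
        public bool Stop(HostControl hostControl)
        {
            return true;
        }
    }
}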



Step 9

Next, we’ll head back to the Main method inside the Program class of the Console Application.  Inside the Main method, you’ll set the service properties of your impending Windows Service.  They include properties like:

  • The ServiceName: Indicates the name used by the system to identify this service.
  • The DisplayName: Indicates the friendly name that identifies the service to the user.
  • The Description: Gets or sets the description for the service.

For more information, see the example below.
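
Here’s a rough sketch of how the Main method might wire everything up using TopShelf’s HostFactory.  The service name, display name, and description values are just placeholders based on my TopShelfCWS example, so substitute your own:

using Topshelf;

namespace TopShelfCWS
{
    public class Program
    {
        public static void Main(string[] args)
        {
            HostFactory.Run(x =>
            {
                // Tell TopShelf how to construct, start, and stop the service class.
                x.Service<NameMeAnything>(s =>
                {
                    s.ConstructUsing(name => new NameMeAnything());
                    s.WhenStarted((svc, hostControl) => svc.Start(hostControl));
                    s.WhenStopped((svc, hostControl) => svc.Stop(hostControl));
                });

                x.RunAsLocalSystem();

                // These values show up in the Windows Services dialog box.
                x.SetServiceName("TopShelfCWS");
                x.SetDisplayName("TopShelf Custom Windows Service");
                x.SetDescription("A sample Windows Service hosted by TopShelf.");
            });
        }
    }
}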



Step 10

Let’s go back to your custom class one more time (e.g. NameMeAnything.cs), and add the code in the following example to your class.  You’ll replace this with your own custom code at some point, but following my example will give you a good idea of how things behave.
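
Here’s a minimal sketch of the kind of code I’m describing: a System.Timers.Timer that fires once per second and writes a message to the console.  The one-second interval and the messages themselves are purely illustrative:

using System;
using System.Timers;
using Topshelf;

namespace TopShelfCWS
{
    public class NameMeAnything : ServiceControl
    {
        // Fires once per second; purely for demonstration purposes.
        private readonly Timer _timer = new Timer(1000) { AutoReset = true };

        public NameMeAnything()
        {
            _timer.Elapsed += _timer_Elapsed;
        }

        public bool Start(HostControl hostControl)
        {
            Console.WriteLine("The service has started...");
            _timer.Start();
            return true;
        }

        public bool Stop(HostControl hostControl)
        {
            Console.WriteLine("The service has stopped.");
            _timer.Stop();
            return true;
        }

        private void _timer_Elapsed(object sender, ElapsedEventArgs e)
        {
            Console.WriteLine("The time is {0}, and all is well...", DateTime.Now);
        }
    }
}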



Step 11

Make sure you include some Console writes to account for all the event behaviors that will occur when you run it.



Step 12

As I mentioned earlier, you can run the Console Application simply as that, a Console Application.  You can do this by simply pressing the F5 key.  If you’ve followed my example up to this point, then you should see the output shown in the following example.



Step 13

Now that you’ve run your solution as a simple Console Application, let’s take the next step and install it as a Windows Service.

To do this, open a command prompt and navigate to the bin\Debug folder of your project.   *IMPORTANT:  Make sure you’re running the command prompt in Administrator mode* as shown in the example below.



Step 14

One of the more beautiful aspects of the TopShelf product is how it abstracts you away from all the .NET InstallUtil nonsense.  Installing your Console Application as a Windows Service is as easy as typing the name of your executable, followed by the word “Install”.  See the example below.
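
For example, assuming the executable from my TopShelfCWS project and a command prompt that’s already sitting in the bin\Debug folder, the install command looks something like this:

    TopShelfCWS.exe install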



Step 15

Once it installs, you’ll see the output shown in the example below.



Step 16

What’s more, if you navigate to the Windows Services dialog box, you should now see your Console Application show up as a fully-operable Windows Service, as depicted below.



Step 17

You can now modify the properties of your Windows Service and start it.  Since all I’m doing in my example is executing a simple timer operation and logging console messages, I just kept all the Windows Service property defaults and started my service.  See the example below.



Step 18

If all goes well, you’ll see your Windows Service running in the Windows Services dialog box.



Step 19

So, now that your console application is running as a Windows Service, you no longer have the advantage of seeing your console messages written to the console.  So, how do you debug it?

The answer is that you can use the more traditional means of attaching the Visual Studio debugger to your running Windows Service by pressing Ctrl+Alt+P in the Visual Studio IDE, and then selecting the name of your running Windows Service, as shown in the example below.



Step 20

Next, set a breakpoint on the _timer_Elapsed event handler.  If everything is running and hooked up properly, then your breakpoint should be hit every second, and you can press F10 to step through the event handler that’s responsible for writing the output to the console, as shown in the example below.



Step 21

Once you’re convinced that your Windows Service is behaving properly, you can stop it and test the TopShelf uninstallation process.

Again, TopShelf completely abstracts you away from the nuances of the InstallUtil utility, by allowing you to uninstall your Windows Service just as easily as you initially installed it.
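
Uninstalling follows the same pattern.  Again assuming my TopShelfCWS example, run something like the following from an Administrator command prompt in the bin\Debug folder:

    TopShelfCWS.exe uninstall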



Step 22

Finally, if you go back into the Windows Services dialog box and refresh your running Windows Services, then you should quickly see that your Windows Service has been successfully removed.



SUMMARY

In summary, I walked you through the easy steps of creating a custom Windows Service using the TopShelf NuGet package and a simple C# .NET Console application.

In the end, starting out with the TopShelf NuGet package and a simple Console application allows for a much easier and more intuitive Windows Service development process, because it abstracts away all the complexities normally associated with traditional Windows Service development and debugging, resulting in more time to focus on writing better code.  These are all good things!

Hi, I’m Cole Francis, a Senior Solution Architect for The PSC Group, LLC located in Schaumburg, IL.  We’re a Microsoft Partner that specializes in technology solutions that help our clients achieve their strategic business objectives.  PSC serves clients nationwide from Chicago and Kansas City.

Thanks for reading, and keep on coding!  🙂


CreateImagefromPDF

By: Cole Francis, Senior Architect at The PSC Group, LLC.

Let’s say you’re working on a hypothetical project, and you run across a requirement for creating an image from the first page of a client-provided PDF document.  Let’s say the PDF document is named MyPDF.pdf, and your client wants you to produce a .PNG image output file named MyPDF.png.

Furthermore, the client states that you absolutely cannot read the contents of the PDF file, and you’ll only know if you’re successful if you can read the output that your code generates inside the image file.  So, that’s it, those are the only requirements.   What do you do?

SOLUTION

Thankfully, there are a number of solutions to address this problem, and I’m going to use a lesser known .NET NuGet package to handle this problem.  Why?  Well, for one I want to demonstrate what an easy problem this is to solve.  So, I’ll start off by searching in the .NET NuGet Package Manager Library for something describing what I want to do.  Voila, I run across a lesser known package named “Pdf2Png”.  I install it in less than 5 seconds.


So, is the Pdf2Png package thread-safe and server-side compliant?  I don’t know, but I’m not concerned about it because it wasn’t listed as a functional requirement.  So, this is something that will show up as an assumption in the Statement-of-Work document and will be quickly addressed if my assumption is incorrect.

Next, I create a very simple console application, although this could be just about any .NET file type, as long as it has rights to the file system.  The process to create the console application takes me another 10 seconds.

Next, I drop in the following three lines of code and execute the application, taking another 5 seconds.  This would actually be one line of code if I were passing in the source and target file locations and names.

 string pdf_filename = @"c:\cole\PdfToPng\MyPDF.pdf";
 string png_filename = @"c:\cole\PdfToPng\MyPDF.png";
 List<string> errors = cs_pdf_to_image.Pdf2Image.Convert(pdf_filename, png_filename);

Although my work isn’t overwhelmingly complex, the output is extraordinary for a mere 20 seconds worth of work!  Alas, I have not one, but two files in my source folder.  One’s my source PDF document, and the other one’s the image that was produced from my console application using the Pdf2Png package.


Finally, when I open the .PNG image file, it reveals the mysterious content that was originally inside the source PDF document:

SomeThingsArentHard.png

Before I end, I have to mention that the Pdf2Png component is not only simple, but it’s also somewhat sophisticated.  The library is a subset of Mark Redman’s work on PDFConvert using Ghostscript gsdll32.dll, and it automatically makes the Ghostscript gsdll32 accessible on a client machine that may not have it physically installed.

Thanks for reading, and keep on coding!  🙂

AngularJS SPA

By:  Cole Francis, Senior Solution Architect at The PSC Group, LLC.

PROBLEM

There’s a familiar theme running around on the Internet right now about certain problems associated with generating SEO-friendly Sitemaps for SPA-based AngularJS web applications.  They often have two fundamental issues associated with their poor architectural design:

  1. There’s usually a nasty hashtag (#) or hashbang (#!) buried in the middle of the URL route, which the website ultimately relies upon for parsing purposes in order to construct the real URL route (e.g. https://www.myInheritedWebApp.com/stuff/#/items/2).
  2. Because of the embedded hashtag or hashbang, the URLs are dynamically constructed and don’t actually point to content without parsing the hashtag (or hashbang) operator first.  The underlying problem is that a Sitemap.xml document can’t be auto-generated for SEO indexing.

I realize that some people might be offended by my comment about “poor architectural design”.  I state this loosely, because it’s really just the nature of the beast.  Why?  Because it’s really easy to get started with AngularJS, and many Software Developers simply start laying down code that’s initially decent, but at some point they start implementing hacks because of added complexity to the original functional requirements.  That’s where they begin to get themselves in trouble (or, shall we say, get very creative). 🙂

If you think I’m kidding, then just try Googling the following keywords and you’ll see exactly what I mean:  AngularJS, hash, hashbang, SEO, Sitemap, problem.

SOLUTION

So, the first step is to remove the hashtag (#) or the hashbang (#!).  I know it sucks, and it’s going to require some work, but let me be clear.  Do it!  For one, generating the Sitemap will be much easier, because you won’t need to parse on a hashtag (or hashbang) to get the real URL.  Secondly, all the remediation work you do will be a reminder the next time you think about taking shortcuts.

Regardless, after correcting the hashtag problem, you still have another issue.  Your website is still an AngularJS SPA-based website, which means that all its content is dynamically generated and injected through JavaScript AJAX calls.

Given this, how will you ever be able to generate a Sitemap containing all your content (e.g. products, catalogs, people, etc…)? Even more concerning, how will people find your people or products when searching on Google?

Luckily, the answer is very simple.  Here’s a little gem that I recently ran across while trying to generate a Sitemap.xml document on an AngularJS SPA architected website, and it works like a charm:  http://botmap.io/

I literally copied the script on the BotMap website to the bottom of my shared\_Layout.cshtml file, just above the closing </body> tag.  This gives BotMap permission to crawl your website.  After doing this, push your website to Production, then point the BotMap website to your publicly-facing URL, and finally click the button on their website to initiate the crawl.  One and done!

BotMap begins to crawl and catalog your website as if it were a real person browsing it.  It doesn’t use cURL or raw XHR requests to determine what to catalog.  The BotMap crawler actually executes the JavaScript, which is how it ultimately learns about all of the content on your website that it will use to construct the Sitemap.

This is why it’s so great for websites created using AngularJS or other JavaScript frameworks where content is injected inside the JavaScript code itself.  Congratulations, {{vm.youreDone}}!

Thanks for reading, and keep on coding!  🙂

XFactor

By: Cole Francis, Architect, PSC, LLC


THE PROBLEM

So, what do you do when you’re building a website and you have a long-running client-side call to a Web API layer?  Naturally, you’re going to do what most developers do and call the Web API asynchronously.  This way, your code can continue to cruise along until a result finally returns from the server.

But, what if matters are actually worse than that?  What if your Web API Controller code contacts a Repository POCO that then calls a stored procedure through the Entity Framework?  And, what if the Entity Framework leverages a project-dedicated database, as well as a system-of-record database, and calls to your system-of-record database sporadically fail?

Like most software developers, you would lean toward looking at the log files, hoping to find traceability and logging for your code.  But, what if there wasn’t any logging baked into the code?  Even worse, what if this problem only occurred sporadically?  And, when it occurs, orders don’t make it into the system-of-record database, which means that things like order changes and financial transactions don’t occur.  Have you ever been in a situation like this one?


PART I – HERE COMES ELMAH

From a programmatic perspective, let’s hypothetically assume that the initial code had the controller calling the repository POCO in a simple For/Next loop that iterates a hardcoded 10 times.  So, if just one of the 10 iterative attempts succeeds, then it means that the order was successfully processed.  In this case, the processing thread would break free from the critical section in the For/Next loop and continue down its normal processing path.  This, my fellow readers, is what’s commonly referred to as “Optimistic Programming”.

The term, “Optimistic Programming”, hangs itself on the notion that your code will always be bug-free and operate on a normal execution path.  It’s this type of programming that provides a developer with an artificial comfort level.  After all, at least one of the 10 iterative calls will surely succeed.  Right?  Um…right?  Well, not exactly.

Jack Ganssle, from the Ganssle Group, does an excellent job explaining why this development approach can often lead to catastrophic consequences.  He does this in his 2008 online rant entitled, “Optimistic Programming”.  Sure, his article is practically ten years old at this point, but his message continues to be relevant to this very day.

The bottom line is that without knowing all of the possible failure points, their potential root causes, and all the alternative execution paths a thread can tread down if an exception occurs, you’re probably setting yourself up for failure.  I mean, are 10 attempts really any better than one?  Are 10,000 calls really any better than 10?  Not only are these flimsy hypotheses with little or no real evidence to back them up, but they further convolute and mask the underlying root cause of practically any issue that arises.  The real question is, “Why are 10 attempts necessary when only one should suffice?”

So, what do you do in a situation where you have very little traceability into an ailing application in Production, but you need to know what’s going on with it…like yesterday!  Well, the first thing you do is place a phone call to The PSC Group, headquartered in Schaumburg, IL.  The second thing you do is ask for the help of Blago Stephanov, known internally to our organization as “The X-Factor”, and for a very good reason.  This guy is great at his craft and can accelerate the speed of development and problem solving by at least a factor of 2…that’s no joke.

In this situation, Blago recommends using a platform like Elmah for logging and tracing unhandled errors.  Elmah is a pluggable, drop-in logging framework that dynamically captures all unhandled exceptions.  It also offers color-coded stack traces with line numbers that can help pinpoint exactly where the exception was thrown.  Even more impressive, it’s very quick to implement and requires little personal involvement during integration and setup.  In a nutshell, its implementation is quick, and it makes debugging a breeze.

Additionally, Elmah comes with a web page that allows you to remotely view the unhandled exceptions.  This is a fantastic feature for determining the various paths, both normal and alternate, that lead up to an unhandled error.  Elmah also allows developers to manually record their own information by using the following syntax.

ErrorSignal.FromCurrentContext().Raise(ex);

 

Regardless, Elmah’s capabilities go well beyond just recording exceptions.  For all practical purposes, you can record just about any information you desire.  If you want to know more about Elmah, then you can read up on it by clicking here.  Also, you’ll be happy to know that you can buy it for the low, low price of…free.  It just doesn’t get much better than this.


PART II – ONE REALLY COOL (AND EXTREMELY RELIABLE) RE-TRY PATTERN

So, after implementing Elmah, let’s say that we’re able to track down the offending lines of code, and in this case the code was failing in a critical section that iterates 10 times before succeeding or failing silently.  We would have been very hard-pressed to find it without the assistance of Elmah.

Let’s also assume that the underlying cause is that the code was experiencing deadlocks in the Entity Framework’s generated classes whenever order updates to the system-of-record database occur.  So, thanks to Elmah, at this point we finally have some decent information to build upon.  Elmah provides us with the stack trace information where the error occurred, which means that we would be able to trace the exception back to the offending line(s) of code.

After we do this, Blago recommends that we craft a better approach in the critical section of the code.  This approach provides more granular control over any programmatic retries if a deadlock occurs.  So, how is this better, you might ask?  Well, keep in mind from your earlier reading that the code was simply looping 10 times in a For/Next loop.  So, by implementing his recommended approach, we’ll have the ability to not only control the number of iterative reattempts, but we can also control the wait times in between reattempted calls, as well as the ability to log any meaningful exceptions if they occur.

 

       /// <summary>
       /// Places an order in the system-of-record DB
       /// </summary>
       /// <param name="orderId">The id of the order being placed</param>
       /// <returns>An http response object</returns>
       [HttpGet]
       public IHttpActionResult PlaceOrder(int orderId)
       {
           using (var or = new OrderRepository())
           {
               Retry.DoVoid(() => or.PlaceTheOrder(orderId));
               return Ok();
           }
       }

 

The above Retry.DoVoid() method calls into the following generic logic, which performs its job flawlessly.  What’s more, you can see in the example below where Elmah is being leveraged to log any exceptions that we might encounter.

 

using Elmah;
using System;
using System.Collections.Generic;
using System.Threading;

namespace PSC.utility
{
    /// <summary>
    /// Provides reliable and traceable retry logic
    /// </summary>
    public static class Retry
    {
        /// <summary>
        /// Retries an action that doesn't return a value
        /// </summary>
        /// <returns>Fire and forget</returns>
        public static void DoVoid(Action action, int retryIntervalInMS = 300, int retryCount = 5)
        {
            Do<object>(() =>
            {
                action();
                return null;
            }, retryIntervalInMS, retryCount);
        }

        /// <summary>
        /// Retries an action that returns a value, pausing between attempts
        /// and logging every failure (and any eventual success) to Elmah
        /// </summary>
        /// <returns>The result of the action, if any attempt succeeds</returns>
        public static T Do<T>(Func<T> action, int retryIntervalInMS = 300, int retryCount = 5)
        {
            var exceptions = new List<Exception>();
            TimeSpan retryInterval = TimeSpan.FromMilliseconds(retryIntervalInMS);

            for (int retry = 0; retry < retryCount; retry++)
            {
                bool success = true;

                try
                {
                    // Wait before every attempt after the first one.
                    if (retry > 0)
                    {
                        Thread.Sleep(retryInterval);
                    }

                    return action();
                }
                catch (Exception ex)
                {
                    success = false;
                    exceptions.Add(ex);
                    ErrorSignal.FromCurrentContext().Raise(ex);
                }
                finally
                {
                    // Record the fact that one or more retries were ultimately necessary.
                    if (retry > 0 && success)
                    {
                        ErrorSignal.FromCurrentContext().Raise(new Exception(string.Format("The call was attempted {0} times. It finally succeeded.", retry)));
                    }
                }
            }

            // Every attempt failed, so surface all of the captured exceptions.
            throw new AggregateException(exceptions);
        }
    }
}
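
As a side note, the generic Do<T> overload can be called the same way whenever the underlying repository method returns a value.  The repository, method, and type names below are purely hypothetical placeholders:

// Hypothetical usage: retry a read that returns a value, waiting 500ms
// between attempts and trying at most 3 times before giving up.
using (var or = new OrderRepository())
{
    Order order = Retry.Do(() => or.GetOrder(orderId), 500, 3);
}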

As you can see, the aforementioned Retry() pattern offers a much more methodical and reliable approach to invoke retry actions in situations where our code might be failing a few times before actually succeeding.  But, even if the logic succeeds, we still have to ask ourselves questions like, “Why isn’t one call enough?” and “Why are we still dealing with the odds of success?”

After all, we have absolutely no verifiable proof that looping and reattempting 10 times achieves the necessary “odds of success”.  So, the real question is why there should be any speculation at all in this matter.  We’re talking about pushing orders into a system-of-record database for revenue purposes, and the ability to process orders shouldn’t boil down to “odds of success”.  It should just work…every time!

Nonetheless, what this approach will buy us is one very valuable thing, and that’s enough time to track down the issue’s root cause.  So, with this approach in place, our number one focus would now be to find and solve the core problem.


PART III – PROBLEM SOLVED

So, at this point we’ve resigned ourselves to the fact that, although the aforementioned retry logic doesn’t hurt a thing, it masks the core problem.

Blago recommends that the next step is to load test the failing method by creating a large pool of concurrent users (e.g. 1,000) all simulating the order update function at the exact same time.  I’ll take it one step further by recommending that we also begin analyzing and profiling the SQL Server stored procedures that are being called by the Entity Framework and rejected.

I recommend that we first review the execution plans of the failing stored procedures, making sure their compiled execution plans aren’t lopsided.  If we happen to notice that too much time is being spent on individual tasks inside a stored procedure’s execution plan, then our goal should be to optimize them.  Ideally, what we want to see is an even distribution of time optimally spread across the various execution paths inside our stored procedures.

In our hypothetical example, we’ll assume there are a couple of SQL Server tables using composite keys to comprise a unique record on the Order table.

Let’s also assume that during the ordering process, there’s a query that leverages the secondary key to retrieve additional data before sending the order along to the system-of-record database.  However, because the composite keys are uniquely clustered, getting the data back out of the table using a single column proves to be too much of a strain on the growing table.  Ultimately, this leads to query timeouts and deadlocks, particularly under load.

To this end, optimizing the offending stored procedures by creating a non-clustered, non-unique index for the key attributes in the offending tables will vastly improve their efficiency.  Once the SQL optimizations are complete, the next step should be to perform more load tests and to leverage the SQL Server Profiling Tool to gauge the impact of our changes.  At this point, the deadlocks should disappear completely.


LET’S SUMMARIZE, SHALL WE?

The moral of this story is really twofold.  (1) Everyone should have an “X-Factor” on their project; (2) You can’t beat great code traceability and logging in a solution. If option (1) isn’t possible, then at a minimum make sure that you implement option (2).

Ultimately, logging and traceability help out immeasurably on a project, particularly where root cause analysis is imperative to track down unhandled exceptions and other issues.  It’s through the introduction of Elmah that we were able to quickly identify and resolve the enigmatic database deadlock problems that plagued our hypothetical solution.

While this particular scenario is completely conjectural, situations like these aren’t all that uncommon to run across in the field.  Regardless, most of this could have been prevented by following Jack Ganssle’s ten-year-old advice, which is to make sure that you check those goesintas and goesoutas!  But, chances are that you probably won’t.

Thanks for reading and keep on coding! 🙂

Join me, Cole Francis, as I speak at the Dev Ops in the Burbs inaugural meeting on Thursday, November 3rd, in Naperville, IL. During my 30-minute presentation, I’ll discuss and demonstrate how to create and deploy a Microsoft .NET Core application to the Cloud using Docker Containers.  It should be a fun and educational evening.  I look forward to seeing you there.

Organized by Craig Jahnke and Tony Hotko.

Docker Containers

By:  Cole Francis, Architect

Before you can try out the .NET Core Docker base images, you need to install Docker.  Let’s begin by installing Docker for Windows.


Once Docker is successfully installed, you’ll see the following dialog box appear.


Next, open a PowerShell command window in Administrator mode, and pull down .NET Core by running the following Docker command.
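
At the time of this writing, Microsoft publishes its .NET Core images to Docker Hub under the microsoft/dotnet repository, so the pull command looks something like this:

    docker pull microsoft/dotnet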


Once the download is complete, you’ll see some pertinent status information about the download attempt.


Finally, run the following commands to create your very first “Hello World” Container.  Admittedly, it’s not very exciting, but it runs the Container along with the toolchain, both of which were pulled down from Microsoft’s Docker Hub in just a matter of seconds.
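
One minimal way to do that is to start an interactive container from the image you just pulled:

    docker run -it microsoft/dotnet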

Then, to prove that .NET Core is successfully installed, compile and run the following code from the command line.
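
Inside the running container, the .NET Core tooling scaffolds, restores, and runs a “Hello World” console app with just a few commands (newer SDK versions expect “dotnet new console” rather than the bare “dotnet new”):

    dotnet new
    dotnet restore
    dotnet run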

Congratulations!  You’ve created your first Docker Container, and it only took a couple of minutes.


Thanks for reading and keep on coding! 🙂

Here’s to a Successful First Year!

Posted: September 24, 2016 in .NET Architecture


To my Loyal Readers: 

I published my very first Möbius Straits article approximately one year ago, with an original goal of courting the technical intellect of just a few savvy thought leaders.  In retrospect, this exercise helped me remember just how difficult it is to stay committed to a long-term writing initiative.  To aggravate things just a bit more, my hobbies are motorcycling, music, food and travel…and none of these things align incredibly well with creative technical writing.

So, in an attempt to evade a potential degradation of creativity and writer’s block, I experimented with a number of things, including co-creating a small writing group encouraging individuals to write articles based upon their own unique interests.  This, in turn, offered me a dedicated block of time to work on my own material, spurring my productivity for a while.  The bottom line is that I forced myself to find time to write about things that I thought the technical community might find both relevant and interesting.  In my humble opinion, there is no greater gift than sharing what you know with someone else who can benefit from your experience and expertise. 

Regardless, I now find myself embarking on my second year of creative technical writing, and as I pore over the first year’s readership analytics, I’m very enthusiastic about what I see.  For example, over the past four months, Möbius Straits has found a steady footing of 600-650 readers per month.  What I’ve also discovered is that many of you are returning readers, representing over 130 countries from around the World.  Also, with five days still left in September, this month’s readership is projected to reach over 700 unique visitors for the first time ever.


Finally, from a holistic standpoint, the data is even more exciting, as the total number of visits to Möbius Straits has grown almost 800% since January 1, 2016, suggesting a very strong and loyal monthly following.  I am without words, maybe for the first time ever, and cannot thank you enough for your ongoing patronage.  As I mentioned in my opening paragraph, it can be difficult to stay committed to a long-term writing initiative; however, your ongoing support is more than enough inspiration to keep me emotionally invested for another year.  I really owe this to you.  Once again, thank you so much.  Love!


 

Thanks for reading and keep on coding! 🙂