
XFactor

By: Cole Francis, Architect, PSC, LLC


THE PROBLEM

So, what do you do when you’re building a website and you have a long-running client-side call to a Web API layer?  Naturally, you’re going to do what most developers do and call the Web API asynchronously.  This way, your code can continue to cruise along until a result finally returns from the server.

But, what if matters are actually worse than that?  What if your Web API Controller code contacts a Repository POCO that then calls a stored procedure through the Entity Framework?  And, what if the Entity Framework leverages both a project-dedicated database and a system-of-record database, and calls to your system-of-record database sporadically fail?

Like most software developers, you would lean toward looking at the log files, hoping they offer some traceability into your code.  But, what if there wasn’t any logging baked into the code?  Even worse, what if this problem only occurred sporadically?  And, when it occurs, orders don’t make it into the system-of-record database, which means that things like order changes and financial transactions don’t happen.  Have you ever been in a situation like this one?


PART I – HERE COMES ELMAH

From a programmatic perspective, let’s hypothetically assume that the initial code had the controller code calling the repository POCO in a simple For/Next loop that iterates a hardcoded 10 times.  So, if just one of the 10 iterating attempts succeeds, then it means that the order was successfully processed.  In this case, the processing thread would break free from the critical section in the For/Next loop and continue down its normal processing path.  This, my fellow readers, is what’s commonly referred to as “Optimistic Programming”.
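
To make that concrete, here’s a hedged sketch of what such an optimistic critical section might have looked like.  The OrderRepository and PlaceTheOrder() names are borrowed from the examples later in this post; the orderId variable and everything else here are purely illustrative.

        // A hypothetical reconstruction of the "optimistic" critical section: it hopes
        // that one of ten identical attempts will succeed and silently swallows every failure.
        using (var or = new OrderRepository())
        {
            for (int attempt = 0; attempt < 10; attempt++)
            {
                try
                {
                    or.PlaceTheOrder(orderId);
                    break;   // One success is enough; rejoin the normal execution path.
                }
                catch
                {
                    // Swallowed.  If all ten attempts fail, the order is quietly lost.
                }
            }
        }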

The term, “Optimistic Programming”, hangs itself on the notion that your code will always be bug-free and operate on a normal execution path.  It’s this type of programming that provides a developer with an artificial comfort level.  After all, at least one of the 10 iterative calls will surely succeed.  Right?  Um…right?  Well, not exactly.

Jack Ganssle, from the Ganssle Group, does an excellent job explaining why this development approach can often lead to catastrophic consequences.  He does this in his 2008 online rant entitled, “Optimistic Programming“.  Sure, his article is practically ten years old at this point, but his message continues to be relevant to this very day.

The bottom line is that without knowing all of the possible failure points, their potential root causes, and all the alternative execution paths a thread can tread down if an exception occurs, you’re probably setting yourself up for failure.  I mean, are 10 attempts really any better than one?  Are 10,000 calls really any better than 10?  Not only are these flimsy hypotheses with little or no real evidence to back them up, but they further convolute and mask the underlying root cause of practically any issue that arises.  The real question is, “Why are 10 attempts necessary when only one should suffice?”

So, what do you do in a situation where you have very little traceability into an ailing application in Production, but you need to know what’s going on with it…like yesterday!  Well, the first thing you do is place a phone call to The PSC Group, headquartered in Schaumburg, IL.  The second thing you do is ask for the help of Blago Stephanov, known internally to our organization as “The X-Factor”, and for a very good reason.  This guy is great at his craft and can accelerate the speed of development and problem solving by at least a factor of 2…and that’s no joke.

In this situation, Blago recommends using a platform like Elmah for logging and tracing unhandled errors.  Elmah is a droppable, pluggable logging framework that dynamically captures all unhandled exceptions.  It also offers color-coded stack traces with line numbers that can help pinpoint exactly where an exception was thrown.  Even more impressive, it’s very quick to implement and requires very little personal involvement during integration and setup.  In a nutshell, its implementation is quick and it makes debugging a breeze.

Additionally, Elmah comes with a web page that allows you to remotely view the unhandled exceptions.  This is a fantastic function for determining the various paths, both normal and alternate, that lead up to an unhandled error. Elmah also allows developers to manually record their own information by using the following syntax.

ErrorSignal.FromCurrentContext().Raise(ex);

 

Regardless, Elmah’s capabilities go well beyond just recording exceptions. For all practical purposes, you can record just about any information you desire. If you want to know more about Elmah, then you can read up on it by clicking here.  Also, you’ll be happy to know that you can buy it for the low, low price of…free.  It just doesn’t get much better than this.
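
For instance, here’s a minimal, hedged fragment showing both uses side-by-side.  The SubmitOrder() call, the orderId variable, and the message text are all hypothetical; the only thing taken from Elmah itself is ErrorSignal.Raise(), which expects an exception, so informational messages simply get wrapped in one.

        try
        {
            SubmitOrder(orderId);   // hypothetical long-running call
        }
        catch (Exception ex)
        {
            // Hand the exception to Elmah, then let it bubble up as it normally would.
            ErrorSignal.FromCurrentContext().Raise(ex);
            throw;
        }

        // Recording arbitrary, purely informational details works the same way.
        ErrorSignal.FromCurrentContext().Raise(
            new ApplicationException(string.Format("Order {0} reached the repository layer.", orderId)));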


PART II – ONE REALLY COOL (AND EXTREMELY RELIABLE) RE-TRY PATTERN

So, after implementing Elmah, let’s say that we’re able to track down the offending lines of code, and in this case the code was failing in a critical section that iterates 10 times before succeeding or failing silently.  We would have been very hard-pressed to find it without the assistance of Elmah.

Let’s also assume that the underlying cause is that the code was experiencing deadlocks in the Entity Framework’s generated classes whenever order updates to the system-of-record database occur.  So, thanks to Elmah, at this point we finally have some decent information to build upon.  Elmah provides us with the stack trace information where the error occurred, which means that we would be able to trace the exception back to the offending line(s) of code.

After we do this, Blago recommends that we craft a better approach in the critical section of the code.  This approach provides more granular control over any programmatic retries whenever a deadlock occurs.  So, how is this better, you might ask?  Well, keep in mind from your earlier reading that the code was simply looping 10 times in a For/Next loop.  By implementing his recommended approach, we’ll have the ability to control not only the number of iterative reattempts, but also the wait time between reattempted calls, as well as the ability to log any meaningful exceptions if they occur.

 

       /// <summary>
       /// Places an order in the system-of-record DB
       /// </summary>
       /// <param name="orderId">The identifier of the order being placed</param>
       /// <returns>An http response object</returns>
       [HttpGet]
       public IHttpActionResult PlaceOrder(int orderId)
       {
           using (var or = new OrderRepository())
           {
               // Retry.DoVoid() absorbs transient failures (e.g., deadlocks) and logs them through Elmah.
               Retry.DoVoid(() => or.PlaceTheOrder(orderId));
               return Ok();
           }
       }

 

The above Retry.DoVoid() method calls into the following generic logic, which performs its job flawlessly.  What’s more, you can see in the example below where Elmah is being leveraged to log any exceptions that we might encounter.

 

using Elmah;
using System;
using System.Collections.Generic;
using System.Threading;

namespace PSC.utility
{
   /// <summary>
   /// Provides reliable and traceable retry logic
   /// </summary>
   public static class Retry
   {
        /// <summary>
        /// Retry logic for void (fire-and-forget) actions
        /// </summary>
        public static void DoVoid(Action action, int retryIntervalInMS = 300, int retryCount = 5)
        {
            Do<object>(() =>
            {
                action();
                return null;
            }, retryIntervalInMS, retryCount);
        }

        /// <summary>
        /// Invokes the supplied delegate, retrying up to retryCount times with a pause of
        /// retryIntervalInMS between attempts. Every exception is logged through Elmah, and a
        /// successful reattempt is also recorded for traceability.
        /// </summary>
        public static T Do<T>(Func<T> action, int retryIntervalInMS = 300, int retryCount = 5)
        {
            var exceptions = new List<Exception>();
            TimeSpan retryInterval = TimeSpan.FromMilliseconds(retryIntervalInMS);

            for (int retry = 0; retry < retryCount; retry++)
            {
                bool success = true;

                try
                {
                    // Pause before every reattempt, but not before the initial attempt.
                    if (retry > 0)
                    {
                        Thread.Sleep(retryInterval);
                    }
                    return action();
                }
                catch (Exception ex)
                {
                    success = false;
                    exceptions.Add(ex);
                    ErrorSignal.FromCurrentContext().Raise(ex);
                }
                finally
                {
                    // If a reattempt finally succeeded, record how many tries it took.
                    if (retry > 0 && success)
                    {
                        ErrorSignal.FromCurrentContext().Raise(new Exception(string.Format("The call was attempted {0} times. It finally succeeded.", retry)));
                    }
                }
            }

            // Every attempt failed, so surface the entire exception history to the caller.
            throw new AggregateException(exceptions);
        }
   }
}
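
The DoVoid() overload covers fire-and-forget calls like PlaceTheOrder(), but the generic Do<T>() overload works the same way when the repository call returns a value.  Here’s a hedged sketch; GetTheOrder() is a hypothetical repository method used purely for illustration.

       /// <summary>
       /// Retrieves an order from the system-of-record DB
       /// </summary>
       [HttpGet]
       public IHttpActionResult GetOrder(int orderId)
       {
           using (var or = new OrderRepository())
           {
               // Retry.Do<T>() returns the repository's result once an attempt succeeds,
               // or throws an AggregateException after the final failed attempt.
               var order = Retry.Do(() => or.GetTheOrder(orderId), retryIntervalInMS: 500, retryCount: 3);
               return Ok(order);
           }
       }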

As you can see, the aforementioned Retry() pattern offers a much more methodical and reliable approach to invoke retry actions in situations where our code might be failing a few times before actually succeeding.  But, even if the logic succeeds, we still have to ask ourselves questions like, “Why isn’t one call enough?” and “Why are we still dealing with the odds of success?”

After all, we have absolutely no verifiable proof that looping and reattempting 10 times achieves the necessary “odds of success”.  So, the real question is, why should there be any speculation in this matter at all?  We’re talking about pushing orders into a system-of-record database for revenue purposes, and the ability to process orders shouldn’t boil down to “odds of success”.  It should just work…every time!

Nonetheless, what this approach will buy us is one very valuable thing, and that’s enough time to track down the issue’s root cause.  So, with this approach in place, our number one focus would now be to find and solve the core problem.


PART III – PROBLEM SOLVED

So, at this point we’ve resigned ourselves to the fact that, although the aforementioned retry logic doesn’t hurt a thing, it merely masks the core problem.

Blago recommends that the next step is to load test the failing method by creating a large pool of concurrent users (e.g. 1,000) all simulating the order update function at the exact same time.  I’ll also take it one step further by recommending that we also need to begin analyzing and profiling the SQL Server stored procedures that are being called by the Entity Framework and rejected.
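
Any commercial load-testing tool will do, but even a quick-and-dirty console harness is enough to reproduce this kind of deadlock.  Here’s a minimal sketch, assuming the PlaceOrder endpoint from Part II is reachable at a hypothetical local URL; the route and user count are illustrative only.

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

public static class OrderLoadTest
{
    public static async Task RunAsync()
    {
        const int concurrentUsers = 1000;
        using (var client = new HttpClient())
        {
            // Kick off every request before awaiting any of them, so they hit the
            // Web API (and ultimately SQL Server) at roughly the same time.
            var requests = Enumerable.Range(0, concurrentUsers)
                .Select(i => client.GetAsync("http://localhost:8080/api/order/placeorder?orderId=" + i));

            HttpResponseMessage[] responses = await Task.WhenAll(requests);

            Console.WriteLine("Succeeded: {0}", responses.Count(r => r.IsSuccessStatusCode));
        }
    }
}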

I recommend that we first review the execution plans of the failing stored procedures, making sure their compiled execution plans aren’t lopsided.  If we happen to notice that too much time is being spent on individual tasks inside a stored procedure’s execution plan, then our goal should be to optimize them.  Ideally, what we want to see is an even distribution of time optimally spread across the various execution paths inside our stored procedures.

In our hypothetical example, we’ll assume there are a couple of SQL Server tables that use composite keys to comprise a unique record on the Order table.

Let’s also assume that during the ordering process, there’s a query that leverages the secondary key to retrieve additional data before sending the order along to the system-of-record database.  However, because the composite keys are uniquely clustered, getting the data back out of the table using a single column proves to be too much of a strain for the growing table.  Ultimately, this leads to query timeouts and deadlocks, particularly under load.

To this end, optimizing the offending stored procedures by creating a non-clustered, non-unique index for the key attributes in the offending tables will vastly improve their efficiency.  Once the SQL optimizations are complete, the next step should be to perform more load tests and to leverage the SQL Server Profiling Tool to gauge the impact of our changes.  At this point, the deadlocks should disappear completely.


LET’S SUMMARIZE, SHALL WE?

The moral of this story is really twofold.  (1) Everyone should have an “X-Factor” on their project; (2) You can’t beat great code traceability and logging in a solution. If option (1) isn’t possible, then at a minimum make sure that you implement option (2).

Ultimately, logging and traceability help out immeasurably on a project, particularly where root cause analysis is imperative to track down unhandled exceptions and other issues.  It’s through the introduction of Elmah that we were able to quickly identify and resolve the enigmatic database deadlock problems that plagued our hypothetical solution.

While this particular scenario is completely conjectural, situations like these aren’t all that uncommon to run across in the field.  Regardless, most of this could have been prevented by following Jack Ganssle’s ten-year-old advice, which is to make sure that you check those goesintas and goesoutas!  But, chances are that you probably won’t.

Thanks for reading and keep on coding! 🙂

productdelivery

By:  Cole Francis, Solution Architect at The PSC Group, LLC, Schaumburg, IL.

Today’s successful IT Delivery Leaders focus predominantly on delivering a “product” and less on managing a “project”.  They despise heavy planning phases that require intense requirements-gathering sessions, they avoid meetings that they know will produce unactionable results, they redirect unnecessary project drama and chaos, they address unmanageable timelines, and they shy away from creating redundant product artifacts that tell a story that’s already been told.

Today’s successful IT Delivery Leaders are all about orchestrating results in rapid successions to demonstrate quick and frequent progress to the Stakeholders, they manage realistic expectations across the entire Delivery Team, they allow a product and its accompanying artifacts to define themselves over a series of iterative sprints, and they work directly with the Stakeholders to help shape the final product.  That’s efficiency!  Hi, I’m Cole Francis, a Solution Architect at The PSC Group in Schaumburg, IL, and I’ve been successfully delivering custom software solutions for an impressive and growing list of well-branded clients for over twenty years.

meetup

Please join me, Cole Francis, when I speak at “Dev Ops in the Burbs” on Thursday, February 2nd, at the NIU Conference Center in Naperville, IL at 6pm sharp.  During my hour-long presentation, I’ll discuss and demonstrate how to navigate and use the comprehensive cloud-based Microsoft Visual Studio Team Services (VSTS) platform.

I’ll also talk about how I use this platform’s built-in tools and capabilities to manage my SCRUM-based Agile projects and teams, such as:  Product Backlog Items and the Kanban Board, capacity planning and management, sprint planning, and setting up a project’s areas and iterations.  I’ll also discuss general team management using the SCRUM-based Agile approach, including how to conduct your team and product stakeholder meetings.

Additionally, I’ll talk about how your team should estimate the level of effort for PBIs, and how those items should be prioritized and monitored during the course of the project.  What’s more, I’ll also help you understand how to forecast when your project will be done based upon your team’s ever-fluctuating velocity and capacity.

Finally, I’ll also cover bug entry and management, PBI prioritization, when you might consider breaking PBI’s into more discrete tasks, when an epic should be used on the project, basic VSTS security, Visual Studio source code integration, how to customize the project home page, how to set up custom queries and alerts, and how to automate the build & deployment processes.

It sounds like a lot of information…and geez…it is.  🙂  I’m pretty sure that I could talk for at least a day on this platform, so I’ll have quite a bit of ground to cover in a very short amount of time, but I think I can do it.  However, just in case I can’t, please bring a sleeping bag, a change of clothes, and a day’s worth of food and water with you. 🙂  In all seriousness though, it should be a very fun and educational evening.  I look forward to seeing everyone there.  Please join me.  Click here for more details.

Organized by Craig Jahnke and Tony Hotko.

meetup

Join me, Cole Francis, as I speak at the Dev Ops in the Burbs inaugural meeting on Thursday, November 3rd, in Naperville, IL. During my 30-minute presentation, I’ll discuss and demonstrate how to create and deploy a Microsoft .NET Core application to the Cloud using Docker Containers.  It should be a fun and educational evening.  I look forward to seeing you there.

Organized by Craig Jahnke and Tony Hotko.

Containers.png

Click Here to Download My Dockerized .NET Core Solution

Author:  Cole Francis, Architect

Preface

Before I embark on my discussion of Docker Containers, it’s important that I tell you that my appreciation for Docker Containers stems from an interesting conversation I had with a very intelligent co-worker of mine at PSC, Norm Murrin.  Norm is traditionally the guy in the office that you go to when you can’t figure something out on your own.  The breadth and depth of his technical capabilities is absolutely amazing.  Anyway, I want to thank him for the time he spent getting me up to speed on Containers, because frankly put, they’re really quite amazing once you understand their purpose and value.  Containerization is definitely a trend you’re going to see used a lot more in the DevOps community, and getting to understand it now will greatly benefit you as its use becomes much more mainstream in the future.  You can navigate to Norm Murrin’s blog site by clicking here.

The Origins of OSVs

“Operating System Virtualization”, also known as OSV, was born predominantly out of a need for infrastructure teams to balance large numbers of users across a restrictive amount of physical hardware.  Virtualizing an operating system entails taking a physical instance of a server and partitioning it into multiple isolated partitions, each of which replicates the original server.

Because the isolated partitions use normal operating system call interfaces, there’s no need for them to be emulated or executed by an intermediate virtual machine.  Therefore, the end result is that running the OSV comes with almost no overhead.  Other immediate benefits include:

  • It streamlines your organization’s machine provisioning processes,
  • It improves your organization’s application availability and scalability,
  • It helps your organization create bullet-proof disaster recovery plans,
  • It helps reduce costly on-prem hardware vendor affinities.

What’s more, the very fact that your company is virtualizing its servers and moving away from bare metal hardware systems probably indicates that it’s not only trying to address some of the bullet-point items I’ve previously mentioned, but it’s also preparing for a future cloud migration.

The Difference Between a VM and an OSV

Virtual machines, or VMs, require that the guest system and host system each have their own operating system, libraries, and a full memory instance in order to run in complete isolation.  In turn, communication between the guest and host systems occurs through an abstraction layer known as the hypervisor.

Granted, the term “hypervisor” sounds pretty darn cool, but it’s not entirely efficient.  For instance, starting and stopping a VM necessitates a full boot process and memory load, which significantly limits the number of software applications that can reside on the host system.  In most cases, a VM supports only one application.
hypervisor

On the contrary, OSVs offer incredibly lightweight virtual environments that incorporate a technique called “namespace isolation”.  In the development community, we commonly refer to namespace isolation as “Containers”, and it’s this container-level isolation that allows hundreds of containers to live and run side-by-side with one another, completely unaware of one another, on a single underlying host system.

NamespaceIsolation.png

The Advantages of Using Containers

One interesting item to note is that because Containers share resources on the same host system they operate on, there is often cooperative governance in place that allows the host system to maximize the efficiency of shared CPU, memory, and common OS libraries as the demands of the Containers continually change.

Cooperative governance accomplishes this by making sure that each container is supplied with an appropriate amount of resources to operate efficiently, while at the same time not encroaching on the availability of resources required by the other running containers.  It’s also important to point out that this dynamic allocation of resources can be manually overridden.

  1. Cooperative Governance – Doesn’t require any sort of finite resource limitations or other impositions by the host.  Instead, the host dynamically orchestrates the reallocation of resources as the ongoing demand changes.
  2. Manual Governance – A Container can be limited so it cannot consume more than a certain percentage of the CPU or memory at any given time.

Other great advantages that Containers have over full virtual machines are:

  1. You don’t have to install an operating system on the Container system.
  2. You also don’t have to get the latest patches for a Container system.
  3. You don’t have to install application frameworks or third-party dependency libraries.
  4. You don’t have to worry about networking issues.
  5. You don’t have to install your application on a Container system.
  6. You don’t have to configure your application so that it works properly in your Container.

All of the abovementioned concerns are handled for you by the sheer nature of the Container.

Are There Any Disadvantages

While the advantages are numerous, there are some disadvantages to be aware of, including:

  1. Containers are immutable
  2. Containers run in a single process
  3. If you’re using .NET Core as your foundation, you’ll only have access to a partial feature set…for now anyway.
  4. There are some security vulnerabilities that you’ll want to be aware of, like large attack surfaces, operating system fragmentation, and virtual machine bloat.
  5. Because this is such a new technical area, not all third-party vendors offer support for Core applications.  For example, at this point in time Oracle doesn’t offer Core capabilities for Entity Framework (EF).  See more about this by clicking here.

Are VMs Dead

The really short answer is, “No.”  Because Containers (OSVs) have so many advantages over VMs, the natural assumption is that VMs are going away, but this simply isn’t true.

In fact, Containers and VMs actually complement one another.  The idea is that you do all of the setup work one time on an image that includes all of your dependencies and the Docker engine, and then you have it host as many Containers as you need.  This way, you don’t have to fire up a separate VM and operating system for each application being hosted on the machine.

Like I mentioned in an earlier section, OSVs offer incredibly lightweight virtual environments that incorporate a technique known as “namespace isolation”, which ultimately allows containers to live and run alongside each other, and yet completely autonomously from one another, on the same host system.

Therefore, in most practical cases it will probably make sense for the underlying host system to be a VM.

Containers as Microservices

Containers can house portions of a solution, for example just the UI layer.  Or, they can store an entire solution, from the UI to the database, and everything in between.  One of the better known uses for Containers is “Micro Services”, where each container represents a separate layer of a subsystem.

What’s more, scaling the number of containers instances to meet the demands of an environment is fairly trivial.  The example below depicts a number of containers being scaled up to meet the demands of a Production environment versus a Test environment.   This can be accomplished in a few flips of a switch when a Container architecture is designed correctly.  There are also a number of tools that you can use to create Containers, or even an environment full of Containers, such as Docker and Docker Cloud.

TEST ENVIRONMENT

ContainersEnviroQA.png

PRODUCTION ENVIRONMENT

ContainersEnviroProd.png

What is Docker?

Docker is an OSV toolset that was initially released to the public on September 16, 2013.  It was created to support Containerized applications and it doesn’t include any bare-metal drivers.

Therefore, Containers are incredibly lightweight and serve as a universal, demand-based environment that both shares and reallocates pools of computing resources (e.g., computer networks, servers, storage, applications and services) as the environmental demand changes.

Finally, because of their raw and minimalistic nature, Containers can be rapidly provisioned and released with very little effort.

Build it and They Will Come

Let’s go ahead and deploy our Containerized .NET Core solution to Docker Cloud.  We’re going to use Docker Cloud as the primary Cloud Hosting Provider and Microsoft Azure as the emergency backup Cloud Hosting Provider.  Not only that, but we’re going to deploy and provision all of our new resources in Docker Cloud and Microsoft Azure in a span of about 15 minutes.

You probably think I’m feeding you a line of B.S.  I’m not offended because if I didn’t know any better I would too.  This is why I’m going to show you, step-by-step, how we’re going to accomplish this together.

Of course, there are just a few assumptions that I’ll make before we get started.  However, even if I assume incorrectly, I’ll still make sure that you can get through the following step-by-step guide:

  1. Assumption number one:  I’m going to assume that you already have a Microsoft Azure account set up.  If you don’t, then it’s no big deal.  You can simply forgo the steps that use Azure as an “Emergency Backup Site”.    You’ll still get the full benefit of deploying to the Docker Cloud, which still covers most deployment scenarios.
  2. Assumption number two:  I’m going to assume that you already have Docker for Windows installed.  If not, then you can get it for free here.
  3. Assumption number three:  I’m going to assume that you already have a Containerized application.  Again, if you don’t, then it’s no big deal.  I’m going to give you a couple of options here.  One option is that you can use my previous post as a way to quickly create a Containerized application.  You can get to my previous post by clicking here.

Another option you can explore is downloading the Dockerized .NET Core solution that I created on my own and made available to you at the top of this page.  Basically, it’s a .NET Core MVC application, which comes with a static Admin.html page and uses AngularJS and Swagger under the hood.  Through a little bit of manipulation, I made it possible for you to visualize certain aspects of the environment that your Containerized application is being hosted in, such as the internal and external IP addresses, the operating system, the number of supporting processors, etc.

Furthermore, it also incorporates a standard Web API layer that I’ve Swashbuckled and Swaggered, so you can actually make external calls to your Containerized application’s Rest API methods while it’s being hosted in the Cloud.
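
For context, the environment details that the Admin.html page displays come from a plain controller.  Here’s a hedged sketch of what such an ASP.NET Core controller might look like; the class name, route, and property names are illustrative and not necessarily the actual code in the download.

// A hypothetical sketch of a Web API controller that surfaces details about the
// environment hosting the Container.  Names and route are illustrative only.
using System;
using System.Runtime.InteropServices;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class EnvironmentInfoController : Controller
{
    [HttpGet]
    public IActionResult Get()
    {
        return Ok(new
        {
            MachineName = Environment.MachineName,                  // the Container's hostname
            OperatingSystem = RuntimeInformation.OSDescription,     // e.g., Linux inside the Container
            ProcessorCount = Environment.ProcessorCount             // processors visible to the Container
        });
    }
}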

Finally, I’ve already included a Dockerfile in the solution, so all of your bases should be covered as I navigate you through the following steps.  I’ll even show it working for me, just like it should work for you.  Let’s get started…

STEP 1 – If you don’t already have a Docker Cloud account, then you can create one for free by clicking here.

step-1

STEP 2 – Setup your image repository.

step-2

STEP 3 – Add yourself as a Contributor to the project, as well as anyone else you want to have access to your repository.

step-2a-be-a-contributor

STEP 4 – Open Windows PowerShell or a command prompt and navigate to the project directory that contains the Dockerfile.  Look for it at the project level.

step-3-build-docker-project

STEP 5 – Build the Container image using Docker.  If you look at the pictorial above, you’ll see that I used the following command to build mine (**NOTE:  You will need to include both the space and period at the end of the command):

docker build -t [Your Container Project Name Here] .

step-4-finished-building-docker-project

STEP 6 – If everything built fine, then you’ll be able to see the image you just created by running the following command:

docker images

step-5-showing-newly-built-image

STEP 7 – Unless you’re already running a Container, the list of running Containers should be empty.  You can verify this by running the following command:

docker ps

step-6-dockercontainers-before

STEP 8 – Run the new Docker image that you just created.  You’ll do this by running the following Docker command:

docker run -d -p 8080:80 [Your Container Project Name Here]

step-7-creating-a-docker-container

STEP 9 – Review the Container you’re now running by using the following command.  If everything went well, then you should see your new running container:

docker ps

step-8-displaying-the-new-docker-container

STEP 10a – Open a browser and test your running Docker Containerized application.  **Note that neither IIS nor self-hosting is used, and the application is not running in the Visual Studio IDE.  Also note that the supporting OS is Linux and not Windows.

step-9-running-the-new-docker-container

STEP 10b – Now run it in the Visual Studio IDE and note the differences (e.g., the server name, the external listening port, the number of processors, and the hosting operating system).

step-10-running-the-app-out-of-visual-studio-2015

STEP 11 – Log into the Docker Cloud from PowerShell or a command prompt using the following command:

docker login

step-11-docker-login-script

STEP 12 – Tag your container repository:

docker tag webapi supercole/dockerrepository:webapi

step-12-tag-your-docker-image

STEP 13 – Push your local container into the Docker Repository using the following command:

docker push supercole/dockerrepository:webapi

step-13-upload-your-image-using-tag

STEP 14 – Review the progress of your pushed container to the Docker repository.

step-14-upload-to-docker-success

STEP 15 – Review your Docker repository and the tag you previously created for it in Docker Cloud.

step-15-container-and-tag-in-docker

STEP 16 – Create the Docker Cloud service from your pushed container image.

step-16-start-service-in-docker-hub

STEP 17 – Review the defined environment variables for your service.

step-17-adding-options-to-start-docker-hub-service

STEP 18 – Add a volume (optional).

step-18-add-a-volume

STEP 19 – Select the number of Containers you want to host.

step-19-creating-the-service

STEP 20 – Specify a Cloud Hosting Provider.  I chose Microsoft Azure, because I already have an Azure account.  Anyway, it will ask you to enter your credentials, and it will spit out a signed certificate that you’ll use to create a one-way trust between Docker Cloud and Azure.

step-20-creating-a-docker-cert-and-affinitizing-to-azure

STEP 21 – In Microsoft Azure, I uploaded the Docker Cloud certificate in order to create the trust.

step-20a-management-settings-create-docker-to-azure-certificate-trust-account

STEP 22 – Go back to the Docker Cloud and launch your first node.

step-21-launching-my-first-node-to-azure

STEP 23 – This step can take a while, because it goes through the process of provisioning, uploading, and activating your Docker Cloud container in Microsoft Azure.

step-22-create-a-node-cluster

STEP 24 – After the provisioning and deployment process completes, review your Azure account for the new Docker resources that were created.

Step 23 - Starting to Deploy to Azure from Docker.png

STEP 25 – You can also review the Docker Cloud Node timeline for all the activities that are occurring (e.g., Provisioning, setting up the network, deploying, et al).

step-24-monitoring-the-azure-provision-process

STEP 26 – Finishing up the Docker Cloud to Azure deployment.

step-25-azure-finishing-up-the-docker-provisioning-process

STEP 27 – The deployment completed successfully!

step-26-volume-complete

STEP 28 – Launch your new service from your Docker Cloud Container repository.

step-27-launch-the-repository

STEP 29 – Wait for it…

step-28-launching-the-service

STEP 30a – Try out your hosted Docker Container in Docker Cloud.

dockersuccess

STEP 30b – Try out your hosted Docker Container in Microsoft Azure.

AzureSuccess.png

Thanks for reading and keep on coding! 🙂

 

DockerContainerTitle.png

By:  Cole Francis, Architect

Before you can try out the .NET Core Docker base images, you need to install Docker.  Let’s begin by installing Docker for Windows.

Docker1.png

Once Docker is successfully installed, you’ll see the following dialog box appear.

Docker2.png

Next, run a PowerShell command window in Administrator mode, and install .NET Core by running the following Docker command.

Docker7.png

Once the download is complete, you’ll see some pertinent status information about the download attempt.

Docker8.png

Finally, run the following commands to create your very first “Hello World” Container.  Admittedly, it’s not very exciting, but it is running the Container along with the toolchain, both of which were pulled down from Microsoft’s Docker Hub in just a matter of seconds.

To prove that .NET Core is successfully installed, compile and run the following code from the command line.
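
Since the code itself only appears in the screenshot, here’s a minimal Program.cs along the lines of what the stock dotnet new console template produces:

using System;

namespace HelloWorld
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // The canonical first program, proving the .NET Core toolchain works inside the Container.
            Console.WriteLine("Hello World!");
        }
    }
}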

Congratulations!  You’ve created your first Docker Container, and it only took a couple of minutes.

docker9

Thanks for reading and keep on coding! 🙂

Here’s to a Successful First Year!

Posted: September 24, 2016 in .NET Architecture

ThankYou.png

To my Loyal Readers: 

I published my very first Möbius Straits article approximately one year ago, with an original goal of courting the technical intellect of just a few savvy thought leaders.  In retrospect, this exercise helped me remember just how difficult it is to stay committed to a long-term writing initiative.  To aggravate things just a bit more, my hobbies are motorcycling, music, food, and travel…and none of these things align incredibly well with creative technical writing.

So, in an attempt to evade a potential degradation of creativity and writer’s block, I experimented with a number of things, including co-creating a small writing group encouraging individuals to write articles based upon their own unique interests.  This, in turn, offered me a dedicated block of time to work on my own material, spurring my productivity for a while.  The bottom line is that I forced myself to find time to write about things that I thought the technical community might find both relevant and interesting.  In my humble opinion, there is no greater gift than sharing what you know with someone else who can benefit from your experience and expertise. 

Regardless, I now find myself embarking on my second year of creative technical writing, and as I pore through the first year’s readership analytics, I’m very enthusiastic about what I see.  For example, over the past four months, Möbius Straits has found a steady footing of 600-650 readers per month.  What I’ve also discovered is that many of you are returning readers, representing over 130 countries from around the World.  Also, with five days still left in September, this month’s readership is projected to reach over 700 unique visitors for the first time ever.

bestmonthonrecord

Finally, from a holistic standpoint, the data is even more exciting, as the total number of non-unique Möbius Straits’ visits has grown almost 800% since January 1, 2016 (see the chart below), suggesting a very strong and loyal monthly following.  I am without words, maybe for the first time ever, and cannot thank you enough for your ongoing patronage.  As I mentioned in my original paragraph, it can be difficult to stay committed to a long-term writing initiative; however, your ongoing support is more than enough inspiration to keep me emotionally invested for another year.  I really owe this to you.  Once again, thank you so much.  Love!

2015to2016

 

Thanks for reading and keep on coding! 🙂