If you have been looking around in the last year, you’ll probably have noticed that Containers are trendy, especially in technologies not directly related to .NET. I’ve been playing with them, and I have some questions and thoughts I wanted to share: What are containers? Why Containers? What do they bring compared to VMs? What price will I pay for them? How can they help me solve my everyday problems and make my life easier, from a DevOps point of view?

Especially if you are thinking about splitting a monolithic system into smaller subsystems, alongside a DevOps strategy with Continuous Delivery, this may be useful to you. It won’t be short, but I’ll try to make it as interesting and easy as possible.

What are Containers?

The first time I heard about Containers was about a year and a half ago, at the everis Digital Architecture inner circles. I decided to invest some time in them, and started with Docker’s free Fundamentals course. Docker is the king of Container technology at the moment.

Keeping it very, very simple, a Container is a running instance of a “snapshot” (Image) of a system at a given time. This Image, theoretically, runs a single application (the service or application you want to host) alongside some optional support processes that keep the main process alive (systemd, for instance). If the main process exits, the container stops. The creation process of this Image is reproducible (with Docker you use the so-called “Dockerfile”), and Images may inherit from other Images to reach their final state. Ideally, you promote the same container image through all environments (Development, Testing, Staging, Production…), keeping the environment configuration out of it.
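As a sketch of what that reproducible creation process looks like, here is a minimal, hypothetical Dockerfile for a .NET Core service (the base image tag and the DLL name are placeholders, not taken from a real project):

```dockerfile
# Inherit from an existing image: the official .NET Core runtime
FROM microsoft/dotnet:runtime

# Copy the published application into the image
WORKDIR /app
COPY ./publish .

# The container's single main process; if it exits, the container stops
ENTRYPOINT ["dotnet", "MyService.dll"]
```

Building this file with docker build produces the same Image every time, which is exactly what lets you promote one artifact through all environments.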

At the end of the day, we are talking about an alternative to Virtual Machines (VMs). In a VM you need extra layers (a hypervisor, a guest OS) to accomplish the same task. With Docker, for instance, you only have one extra layer over the host OS (the Docker Engine). All applications run very close to the host. The fewer the layers, the better the performance.

Maybe these last paragraphs are really difficult to digest if, like me, you have been in the Microsoft .NET world for long: you may be used to having an IIS on a physical machine or a VM. You install IIS, deploy the application to it, and leave IIS the responsibility of keeping the w3wp.exe processes up and running. In Containers you do not have an IIS (unless you create a Windows Server 2016 image and install IIS through PowerShell), but an executable that takes over IIS’s responsibilities. You can even automate the deployment to IIS using WebDeploy.
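For illustration only, the Windows Server 2016 route mentioned above could look roughly like this Dockerfile sketch (assuming the microsoft/windowsservercore base image; the site path is a placeholder):

```dockerfile
# Windows Server Core 2016 base image
FROM microsoft/windowsservercore

# Install the IIS role through PowerShell
RUN powershell -Command Add-WindowsFeature Web-Server

# Copy the web application into the default IIS site
COPY ./site/ /inetpub/wwwroot
```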

Why containers?

With IIS, however, you are really tied to its capabilities. You can only host some specific components (ASP.NET websites, static content, WCF services, even PHP sites), and you need to share the whole Operating System with the rest of the applications:

  • You change an SSL certificate?
  • You upgrade the .NET version?

Beware of impacting the whole set of IIS applications, or even other components. In the end, traditional VM hosting is as simple as it is dangerous for your business continuity.

On the other hand, in a Container you can run whatever service you need, in a replicable, isolated way. I say replicable because container creation is an automated process. You can replicate the exact same container image as many times as you want, with all the OS and application components you need (using docker build, for instance). Need to change the SSL certificate or the .NET version? Change the Dockerfile, build, and perform a rolling update. Something went wrong? Stop and revert the rolling update to the previous container version. You’re safe.
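With Docker in swarm mode, for instance, that change-build-roll cycle could be sketched like this (the image and service names are hypothetical):

```shell
# Rebuild the image after editing the Dockerfile (new certificate, new .NET version…)
docker build -t mycompany/myservice:2.0 .

# Roll the new version out across the running replicas
docker service update --image mycompany/myservice:2.0 myservice

# Something went wrong? Revert to the previous version
docker service update --rollback myservice
```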

You can also run as many copies of one image as you want, which is very important for elasticity. More incoming traffic? Create more instances of the container image to support that punctual need, then shut them down. It is that easy. Docker’s Fundamentals course also covers a nice Docker feature, about which you are probably asking yourself right now: does every container I deploy use disk space? The short answer is no. A Docker Image is made of read-only layers, and if two images share the same layer, it is stored only once. This saves a LOT of space. In other words, if you define a base image for all your company’s .NET Container images, it only occupies disk space once, whether you run 1 or 100 instances of it.
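Again taking swarm mode as an example, scaling those instances up and down is a one-liner (the service name is a placeholder):

```shell
# Traffic spike: run 10 replicas of the same image
docker service scale myservice=10

# Spike over: shrink back; the shared read-only layers sit on disk only once
docker service scale myservice=2
```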

You can find the long explanation in Docker’s “Understand images, containers, and storage drivers” document.

Obviously, containers come at a price.

The price of containers

While surfing, maybe you’ve found a comic strip (sorry, I couldn’t find it) about two guys talking about software development. One was trying to learn a new programming language, while the other was telling him that Containers are the new trend and that what he was doing really sucked.

The comic was a joke about how containers make things harder than they are. And it’s true, if you are a developer. Containers are a DevOps technology that is, in my opinion, 80% Ops and 20% Dev. This means it will be very difficult for you to fully understand their power if you only focus on your 20%. Moreover, the 80% Ops side will have no real effect without your 20%.

In other words, Container technologies are hard to learn. They involve a lot of things you may never care about (if you are a Software Engineer). But you need to understand what they are and how they work in order to design software that runs efficiently in a Container (and benefits from it). You might find them not really useful at the very beginning, but you’ll realise their true power as you keep learning.

Putting an existing application into a Container and making it run is possible. But it is like putting a VM in Azure: that was not Cloud Computing, and this is not Containers at all. The first consequence of this mistake is the high price you will pay cloud providers to host such big containerised monsters.

Containers are not expensive. Bad design and implementation are expensive.

All this is especially true if you come (again, like me) from the Windows desktop world. You’ll find yourself lost in a black universe of terminals, SSH and command lines. But the effort is worth it.

How Containers can help you in your digital transformation

Lately, I’ve visited a bunch of clients. They are all looking for the same things: Continuous Deployment pipelines. DevOps. Deployment automation. Increasing the number of deployments. Decreasing the time to market. In the end, they are all trying to deliver the agility the Business is asking for to compete in a fierce market. They also shared the same lack: Containers were not on their roadmap.

If this is your case, keep in mind that you’ll probably need to put Containers into the equation, even if you are using Microsoft technologies. Microsoft officially supports Docker on Windows Server 2016 and Azure, and Kubernetes is even bringing support to Windows and Azure, since Microsoft hired some of the Kubernetes founders from Google.

With Containers, you can create a powerful Continuous Deployment pipeline, with the software pieces split in terms of Images. Images you can push to an Image Registry (on-premises, on-demand, or both) and scale up and down as you need. This will give you much more flexibility and deployment speed than relying 100% on a classic Continuous Deployment pipeline, playing with code branches, compilations…
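A minimal sketch of that pipeline step, assuming a private registry reachable at registry.example.com (names and tags are placeholders):

```shell
# Build the image once, from the exact sources the pipeline checked out
docker build -t registry.example.com/myservice:1.0.42 .

# Publish it to the company's Image Registry (on-premises or hosted)
docker push registry.example.com/myservice:1.0.42

# Any environment can now pull and run that exact, immutable image
docker pull registry.example.com/myservice:1.0.42
```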

The path to Containers with Microsoft Technologies

If you are using Microsoft technologies and you are thinking about using containers, there are some paths you might consider:

  1. .NET Core + Kestrel + Linux + Kubernetes (or Azure Container Service + Kubernetes in Azure): my favourite combo at the moment.
  2. .NET Core + WebListener + Windows Server 2016 + Docker Swarm (or Azure Container Service + Docker Swarm in Azure)
  3. .NET Core + WebListener + Windows Server 2016 + Service Fabric (or Azure Container Service + Service Fabric in Azure)
  4. .NET Full + IIS + Windows Server 2016 + Docker Swarm (or Azure Container Service + Docker Swarm in Azure)
  5. .NET Full + IIS + Windows Server 2016 + Service Fabric (or Azure Container Service + Service Fabric in Azure)

Option number 1 is for those who don’t mind leaving Windows behind and want a very lightweight, fast way to get into Containers. At the moment, however, it might be a problem if you have legacy software that uses Windows-based components. In the near future you might be able to create a mixed Windows+Linux Kubernetes cluster, and this won’t be a problem at all.

Options number 2 and 3 are more conservative solutions. You still need to rely on .NET Core, but you host the application on Windows Server 2016 using Docker Swarm or Microsoft Service Fabric.

Finally, options 4 and 5 rely on the full, traditional Framework and IIS to create a Docker image (with only one app hosted in IIS), also running on Windows Server 2016 with Swarm or Service Fabric.

Which path should you choose? You’ll need a reflection exercise to understand where you are, where you want to go, and which Framework version fits you best.