Further understanding of Server Containers

  • General discussion

  • Hi,

    I am trying to understand Windows Server 2016 containers more, and although I have read this article https://azure.microsoft.com/en-gb/blog/containers-docker-windows-and-trends/ and have a greater understanding, I still have a few conceptual/logical things I am interested to know.

    I have seen an example in a virtual lab (which didn’t actually work for me at the time) hosting a web application in a container. One of my questions is: can we package/install a normal GUI application inside one of these containers, or do we need to deploy specially built applications written to run inside a container? For example, we have a payroll application which we host on a RemoteApp server (or it can be installed locally on a client); can such a normal application be installed in a container? And when that traditional application needs to talk to the HR application, can that be installed inside a different container, with the two containers talking to each other?
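    (For the container-to-container part of the question, here is a rough sketch of how Docker handles it today; the network name, container names, image names, and port are all hypothetical, and it assumes a Windows Server 2016 host running the Docker engine.)

    ```shell
    # Create a user-defined NAT network (the name "appnet" is made up here).
    docker network create -d nat appnet

    # Start the two containers on that network; the image names are placeholders.
    docker run -d --network appnet --name hr-app  hr-app-image
    docker run -d --network appnet --name payroll payroll-app-image

    # From inside the payroll container, the HR service is reachable by its
    # container name, e.g. http://hr-app:8080/ (the port depends on what the
    # HR application actually exposes).
    ```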

    I also understand that there are two main types of containers: Hyper-V containers and Windows Server containers. In a private cloud, where all the infrastructure belongs to you and all applications in containers are trusted, I get the impression that Windows Server containers could be used on bare-metal machines. If you are a hosting provider, you could instead use Hyper-V containers, which accommodate multi-tenant environments. Is my understanding correct here?

    In theory we could also have a virtual machine belonging to tenant A with several containers inside it (which I assume would be Windows Server containers because they are inside the guest; they wouldn’t be referred to as Hyper-V containers, right?). If this is the case, then I assume it could lead to much greater application density in a multi-tenant environment, because you could potentially need only one VM per tenant, and each VM could run hundreds or thousands of containers?

    Thinking a little further ahead (unless I don’t quite have the full picture yet): today we can have guest clusters where an application is made highly available; each VM in that guest cluster sits on its own physical server, so when a physical server fails there is no interruption to service. Could the same hold true for containers, made highly available either through guest cluster VMs or through (a term I will coin because I don’t know any better right now) “guest cluster containers”, whereby the container itself is clustered with another running in a different VM? Or is this beyond the current scope of what containers can do right now?

    Appreciate any further information you can share on this; I will watch a Channel 9 video on containers over the weekend to gain more insight.



    Saturday, April 30, 2016 11:37 AM

All replies

  • Hi Steve,

    Sorry for the delay - this isn't a forum we monitor for container questions as actively as the Container Forum (https://social.msdn.microsoft.com/Forums/en-US/home?forum=windowscontainers).

    To your first question: we are really focused on server-based applications for V1, so true UI-based applications are not really in scope. We did have the ability to RDP into Server Core for TP3, and it is going to be added back by RTM, but that scenario is really limited to basic installers and troubleshooting. This is something we are thinking about more for the future, but for the time being, for true UI applications (à la Word/Excel etc.) I would recommend RemoteApp.

    Your understanding of the isolation differences between Windows Server and Hyper-V Containers is accurate.  The other scenario where this isolation is very helpful is when you want to run a container with a different version of Windows Server; this will be most helpful looking forward (i.e. running a Server 2016 container on a Server 201x host).
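    For readers following along, the isolation level is chosen per container at run time via Docker's `--isolation` flag on Windows hosts; a minimal sketch, with the image name being a placeholder:

    ```shell
    # Windows Server container: shares the host kernel
    # (the default on Windows Server hosts).
    docker run --isolation=process my-app-image

    # Hyper-V container: the same image, but run inside a lightweight
    # utility VM for kernel-level isolation, e.g. for multi-tenant hosting.
    docker run --isolation=hyperv my-app-image
    ```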

    You are also correct that you can run containers inside VMs, both Windows Server and Hyper-V Containers; in that case the Hyper-V Containers run nested.

    With orchestration tooling such as Docker Swarm or Apache Mesos, containers are automatically placed across hosts and restarted in the event a host fails; these tools also support things like scaling up and down pretty well.
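    As a rough sketch of what that orchestration looks like with Docker's swarm mode (the service name and image are placeholders, and exact commands vary by Docker version):

    ```shell
    # Initialise a swarm on the first host, then join the other hosts to it.
    docker swarm init

    # Run a service with three replicas; the swarm spreads them across hosts
    # and reschedules replicas if a host fails.
    docker service create --replicas 3 --name web my-web-image

    # Scale up or down on demand.
    docker service scale web=10
    ```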


    Sunday, May 22, 2016 10:03 PM