You may be puzzled by the “VMs . . . on Bare Metal” portion of the title. It’s not a misprint: I am going to cover two major things in this post, and the hot topic I am going to demystify for you is running your VMs as containers in Kubernetes. To that end, I am hoping to make two over-arching points:
- Detail the advantages of running VMs and containers on one unified platform. And yes, that unified platform is VMware Cloud Foundation (VCF), which includes a built-in Kubernetes runtime, vSphere Kubernetes Service (VKS).
- Detail why running VMs and containers on bare metal has real shortcomings for enterprises, so you can judge for yourself. I will be stating facts that you can validate independently of any vendor.
Let’s start with level-setting about what the VMware vSphere Kubernetes Service (from here on out, VKS) is and what’s available with it.
What is VKS?
VKS is Broadcom’s enterprise-grade, CNCF-certified and conformant Kubernetes runtime, which is included (read: you don’t have to pay extra), supported, and available now with VMware Cloud Foundation.
If you are currently running Kubernetes on a hyperscaler, or tried Kubernetes on vSphere previously, VKS is worth considering: it was designed with enterprise functionality in mind, including an easier installation process, self-service capabilities, and the continued inclusion of the components that Kubernetes practitioners have come to expect.
This gives customers an unprecedented ability to run enterprise-grade Kubernetes on-prem in a more consumable and scalable fashion.
Furthermore, for you “dyed-in-the-wool” FOSS fans, some food for thought about VKS:
- Over the last decade, Broadcom has been among the top 3 contributors to the Kubernetes code base (and second highest in the past 5 years!).
- CNCF-conformant Kubernetes means that we follow open standards. This matters because it gives you flexibility: you can make reasonable assumptions about the components included with the platform instead of being locked into vendor-specific anti-patterns.
- VKS has been evolving since VMware’s acquisition of Pivotal back in 2019, so this is not a “Version 1.0” product.
A Note on the vSphere Supervisor
To get started with VKS, you will need to install the vSphere Supervisor. This provides, among other things, the ability to leverage all of our VCF goodness while also offering Kubernetes-compliant components.
One example among many: once the vSphere Supervisor is up and running, you will create a Kubernetes Namespace, which maps to a vSphere resource pool. Inside the Kubernetes cluster, the Namespace functions exactly the way one would expect, but resources for that Namespace can also be managed through the vSphere resource pool UI, as the sketch below illustrates.
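To make that mapping concrete, here is a minimal sketch (with a hypothetical namespace name and hypothetical limit values) of what the Supervisor surfaces on the Kubernetes side when a vSphere administrator caps CPU and memory on a Supervisor namespace in vCenter:

```yaml
# Illustrative only: the namespace "team-apps" and the limit values are
# hypothetical. When CPU/memory limits are set on a Supervisor namespace
# in vCenter, they appear as a standard Kubernetes ResourceQuota, so
# kubectl and the vSphere resource pool UI are two views of one control.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-apps-quota
  namespace: team-apps
spec:
  hard:
    requests.cpu: "16"
    requests.memory: 64Gi
```

A developer describing that ResourceQuota with kubectl and a virtualization admin looking at the resource pool in the vSphere Client are looking at two views of the same limits.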
Another example is that the Kubernetes control-plane and worker nodes literally are VMs. This is important for a point I will make in the next section.
Note also that “traditional” VMs run side-by-side with your Kubernetes clusters. This is the power you have with VKS in VCF; the sketch below shows what requesting such a cluster looks like.
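To illustrate the “nodes are VMs” point, here is a hedged sketch of what requesting a workload cluster from the Supervisor can look like through the Cluster API-style interface VKS exposes. The cluster name, namespace, version string, and CIDR blocks are placeholders, and the exact API version and cluster class vary by release:

```yaml
# A sketch, not a copy-paste recipe: names, version string, and CIDRs are
# placeholders. Each control-plane and worker replica requested below is
# realized as a VM that vSphere HA can restart and DRS can rebalance
# like any other VM.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
  namespace: team-apps
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["10.96.0.0/12"]
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  topology:
    class: tanzukubernetescluster  # built-in cluster class on the Supervisor
    version: v1.29.4+vmware.1      # placeholder; use a version your release ships
    controlPlane:
      replicas: 3
    workers:
      machineDeployments:
        - class: node-pool
          name: pool-1
          replicas: 3
```

Apply it with kubectl, and every replica you asked for materializes as a VM on vSphere, subject to the same HA, DRS, and resource-pool controls as any other VM.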
If you remember anything else in this blog post, please remember this:
Your current VMware engineers need no special Kubernetes knowledge to manage both the VMs and the consumed Kubernetes components or clusters. Your platform engineers or developers can gain access through self-service, and your virtualization engineers can manage the virtual infrastructure as they always have (see the sketch below for what that self-service access looks like on the Kubernetes side).
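For the curious, the Kubernetes side of that self-service access boils down to ordinary RBAC. In practice you grant a user or group permissions on the Supervisor namespace in vCenter and the platform creates the bindings for you; the group and namespace names in this sketch are hypothetical:

```yaml
# Illustrative only: in a real Supervisor, a binding like this is created
# for you when a vSphere admin grants "edit" on the namespace in vCenter.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-edit
  namespace: team-apps
subjects:
  - kind: Group
    name: dev-team             # hypothetical SSO group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                   # built-in Kubernetes "edit" role
  apiGroup: rbac.authorization.k8s.io
```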
Running VMs and Containers on Bare Metal Demystified
Fun fact: when you stand up a Kubernetes cluster on a hyperscaler, that cluster runs on a set of VMs on a hypervisor. This is true for all hyperscalers as of this writing. The resultant containers don’t run directly on bare metal . . . that’s not a thing. I mention this because we’ve seen everyone from C-level folks to developers to platform engineers (who should know better) be astonished by this fact.
Furthermore, I would be remiss if I didn’t go one step further and say that when you create a VM on a hyperscaler, it also runs on a hypervisor. Just because you don’t see the hypervisor doesn’t mean it’s not there.
Even so, I have noticed a lot of organizations entertaining the idea of 1) hosting their containers on bare metal, and/or 2) hosting their traditional VMs as containers on bare metal.
First, hosting containers on bare metal would be a great topic of debate . . . if the year were 2017. The industry already had this argument, and hosting Kubernetes on a virtualized platform won big.
The reason? Among other things: ease of management, security, greater scalability, more flexibility, lower cost, and almost no loss of performance. I am not saying containers on bare metal don’t have their place, but for 98% of organizations, containers on hypervisors won the day.
One example: think about what you get from vSphere HA alone; because your Kubernetes nodes are VMs, your containers inherit that protection automatically, with a failed host’s node VMs restarting on surviving hosts and no extra tooling required.
To put it another way: running containers alongside VMs on VMware is less costly while maintaining the same levels of redundancy, resilience, and high performance you have come to expect from VMware by Broadcom.
What about running VMs as containers? First of all, we have the receipts showing how much better performing and more scalable our platform is. For example, when provisioning a VM with our VM Service, an engineer does not need to understand the nuances and components of Kubernetes. This is usually a better solution because shops don’t have to complete soup-to-nuts retraining or hire more expensive engineers in order to run VMs in a Kubernetes cluster (see the sketch below).
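To ground that claim, here is a minimal sketch of a VM Service request. The VM, class, image, and storage class names are hypothetical, and the exact API version varies by release; the point is that declaring a VM takes about a dozen lines and no deep Kubernetes expertise:

```yaml
# A sketch with placeholder names: className references a VirtualMachineClass
# and imageName a VirtualMachineImage that the vSphere admin has published
# to the namespace. Applying this asks the Supervisor to build and power on
# an ordinary vSphere VM.
apiVersion: vmoperator.vmware.com/v1alpha1   # API version varies by release
kind: VirtualMachine
metadata:
  name: demo-vm
  namespace: team-apps
spec:
  className: best-effort-small
  imageName: ubuntu-22.04-cloudimg
  storageClass: vsan-default-policy
  powerState: poweredOn
```

A kubectl apply of that file yields a vSphere VM that your virtualization team can see and manage in the vSphere Client like any other.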
Once you start seeing how easy it is to get the lower TCO of easy-to-implement, industry-grade virtualization (I will yet again mention vSphere HA, an enterprise-grade High Availability technology that is configurable with about two mouse clicks), running VKS with VCF is a no-brainer.