VMs were created to run software on top of physical servers by emulating a particular hardware system. A hypervisor, or virtual machine monitor, is the software, firmware, or hardware that creates and runs VMs. It sits between the hardware and the virtual machines and is what makes it possible to virtualize the server.
Each virtual machine runs its own guest operating system, and VMs with different operating systems can run on the same physical server: a UNIX VM can sit alongside a Linux VM, and so on. Every VM has its own binaries, libraries, and the applications it serves, and may be many gigabytes in size.
One of the key benefits of VMs is the ability to consolidate multiple applications onto a single physical system.
However, each VM includes a separate operating system image, which adds overhead in memory and storage footprint. This overhead adds complexity to every stage of the software development lifecycle, from development and test to production and disaster recovery. It also severely limits the portability of applications between public clouds, private clouds, and traditional data centers.
Containers provide a way to run isolated applications on a single server and host OS. Containers sit on top of the physical server and its host OS, and all running containers share the host OS kernel and, usually, the binaries and libraries as well.
The shared components are read-only, which is why containers are considered so "light": they are only megabytes in size and take just seconds to start, versus gigabytes and minutes for a VM. That makes containers a natural fit for use cases such as serverless computing, where a container is deployed just to run a single, specific function.
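The shared-kernel point is easy to see in practice. The sketch below is an illustrative example, not from the original text; it assumes a Linux host with a Go toolchain. It simply prints the running kernel from /proc/version: run on the host and inside any container on that host, it reports the same kernel, whereas inside a VM it reports the guest's own kernel.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// /proc/version exposes the kernel the process is actually running on.
	// A container shares this kernel with its host, so the output matches
	// the host's; a VM boots its own guest kernel and reports that instead.
	v, err := os.ReadFile("/proc/version")
	if err != nil {
		fmt.Fprintln(os.Stderr, "could not read /proc/version:", err)
		os.Exit(1)
	}
	fmt.Print(string(v))
}
```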
The main difference is that containers virtualize the operating system, so that multiple workloads can run on a single OS instance. With VMs, it is the hardware that is virtualized, so that each VM runs its own complete OS instance.
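Under the hood, OS-level virtualization is built on kernel facilities such as Linux namespaces and cgroups. The following sketch is an illustrative example, not from the original text; it assumes a Linux host, a Go toolchain, and root privileges. It re-executes itself inside new UTS and PID namespaces, the basic isolation trick container runtimes build on: the child process gets its own hostname and sees itself as PID 1, yet it still runs on the host's kernel.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "child" {
		child()
		return
	}
	parent()
}

// parent re-executes this same binary inside new UTS and PID namespaces.
func parent() {
	cmd := exec.Command("/proc/self/exe", "child")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// New hostname (UTS) and process ID namespaces for the child;
		// creating them requires root (or CAP_SYS_ADMIN) on most systems.
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID,
	}
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
}

// child runs isolated: it can change its hostname without affecting the
// host, and it sees itself as PID 1, yet it still shares the host kernel.
func child() {
	if err := syscall.Sethostname([]byte("isolated-workload")); err != nil {
		fmt.Fprintln(os.Stderr, "sethostname:", err)
	}
	host, _ := os.Hostname()
	fmt.Println("pid inside namespaces:", os.Getpid())
	fmt.Println("hostname inside namespaces:", host)
}
```

Container engines such as Docker layer more of the same machinery on top of this: mount, network, and IPC namespaces for fuller isolation, cgroups for resource limits, and a copy-on-write filesystem for the shared, read-only image layers.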