VMware vs. Hyper-V: Architectural Differences

Most of us sysadmins work with hypervisors every day, but I am often surprised at how little some admins know about the base-level layout of a hypervisor or how we’ve gotten where we are today with Hyper-V 2012 R2 and ESXi 5.5. In order to adequately understand how some functions of the hypervisor work, it is important to know how it is architected at its most basic level.

A little history lesson is in order.

Historically there have been two types of hypervisors: Type 1 and Type 2. Let’s take a look at the major differences here.

[Figure: Type 2 (left) vs. Type 1 (right) hypervisor architecture]

Type 2

On the left we have the Type 2 hypervisor. The main difference from the Type 1 hypervisor is immediately apparent: the hypervisor layer sits on top of an OS instead of directly on top of the hardware layer. In this case the host (or parent) OS is usually Windows or Linux/Unix, and that installed OS handles how resources are allocated to the hypervisor after its own needs are met. Because of this, Type 2 hypervisors are typically not seen in production environments unless a very specific need is being met.

This type of setup is not without its uses and is most often seen in dev/test scenarios or situations where an end user has some legacy application that requires an old version of Windows. Maybe they need to launch an old Windows VM to gain access to the application and have it run properly, or maybe an infrastructure admin needs to test VMs on his/her laptop prior to putting them in place on a Type 1 hypervisor platform. Whatever the need, it is a very limited use case.

As people got further into this thing called virtualization, it quickly became apparent that the best way to get fully optimized performance in the guest VMs was to provide as much direct access to the host hardware as possible, while still maintaining separation between the parent and the guest VMs, and between the guest VMs themselves.

Thus we moved onto the Type 1 hypervisor.

Type 1

The Type 1 hypervisor (shown on the right, above) was designed with the idea that guest VMs could be given more direct access to hardware. That being the case, the VMs running on the host perform much closer to native, as if they were installed on a physical box.

In this case, the hypervisor is installed directly on top of the host hardware and has enough brains built in to allow an administrator to connect to it and create and manage VMs. There is also the added benefit of not requiring system resources to run a host OS for the hypervisor to reside on, meaning more resources for your mission-critical VMs.

Most hypervisors you run into in the field today will be Type 1, as many administrators have begun to see the benefits of virtualization without the added overhead of a host OS.

Now onto a more in depth talk about the two giants in the room.


Microkernelized vs. Monolithic

How a Type 1 hypervisor allocates available resources, and how it handles device drivers, depends on whether it is a microkernelized or a monolithic hypervisor (shown below).

[Figure: Monolithic vs. microkernelized hypervisor architecture]

Monolithic

For modern-day computing to occur, you need access to four resources: CPU, memory, storage, and networking. In a monolithic hypervisor, allocation of all of these resources is handled by the hypervisor itself, as is control of all hardware access and knowledge of all the device drivers that talk to the physical hardware.

This methodology is very efficient with very little overhead, but there are some drawbacks.

Because all device drivers reside within the hypervisor, VMware has to be very picky about which systems will support its hypervisor and which ones will not. Ever had to look at the VMware hardware compatibility matrix? This is why. VMware ESXi will only run on a select number of systems; you don’t have the vast hardware support that Microsoft has with Hyper-V.

Another downside to this configuration is that a malfunction or security issue in a driver at the hypervisor layer could potentially affect every VM on that host. This is a downside that VMware has been working diligently to shore up.
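To make the driver placement concrete, here is a minimal, purely illustrative Python sketch. The class, method, and driver names are hypothetical (not anything from ESXi); it simply models the idea that in a monolithic design every VM depends on one shared driver stack inside the hypervisor, so a single driver fault ripples across the whole host:

```python
# Hypothetical model of a monolithic hypervisor -- illustration only, not real ESXi code.
# All device drivers live inside the hypervisor itself and are shared by every guest VM.

class MonolithicHypervisor:
    def __init__(self, drivers):
        # One driver stack (storage, network, etc.) for the whole host.
        self.drivers = dict(drivers)
        self.vms = []

    def add_vm(self, name):
        self.vms.append(name)

    def driver_fault(self, driver_name):
        # A fault in a shared driver is a host-wide event:
        # every VM depends on the same driver instance.
        affected = list(self.vms)
        print(f"Fault in shared '{driver_name}' driver affects: {affected}")
        return affected


host = MonolithicHypervisor({"storage": "storage_driver", "network": "network_driver"})
for vm in ("web01", "db01", "app01"):
    host.add_vm(vm)

# A single misbehaving network driver puts every guest at risk.
host.driver_fault("network")   # -> affects ['web01', 'db01', 'app01']
```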

Microkernelized

When Microsoft started looking at virtualization, they realized that they already had a portion of the puzzle solved: a kernel with wide-reaching and reliable hardware support. There was no need to reinvent the wheel. It made sense for them to leverage their kernel and incorporate it into the Hyper-V product.

In the microkernelized model, how resources are managed and assigned is a little different.

When the Hyper-V role is installed on a host system, the hypervisor layer is actually placed directly on top of the host hardware, and what WAS the host OS is P2Ved (in a sense) into what Microsoft calls the parent partition (or parent VM). This parent VM handles all VM access to storage and networking, while the hypervisor layer continues to handle access to CPU and memory.

Because the parent VM is essentially a Windows Server 2012 (R2) or Windows Server 2008 (R2) VM, it has the added benefit of access to the Windows kernel for its hardware support. This makes a lot of sense if you think about it, as storage and networking are historically where device drivers are needed, and it allows the parent VM more options when providing the storage and networking infrastructure on which to host your VMs.

Now, the drivers previously mentioned are used strictly to provide storage and networking access to the VMs running on the Hyper-V host; the VMs themselves still need drivers to be able to talk to the hardware they are presented with. Unlike in the monolithic model, each Hyper-V VM holds and maintains its own device drivers. This makes the microkernelized model more secure and stable, as one driver getting compromised or crashing outright will only affect a single VM instead of everything on the host. It also means that, as far as hypervisors are concerned, the attack surface for Hyper-V is smaller than that of ESXi. Most folks are shocked when they hear that.
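Continuing the same purely illustrative sketch (again, hypothetical names, not Hyper-V’s actual code or driver names), the microkernelized layout keeps the hypervisor thin for CPU and memory, puts the physical storage/network driver stack in the parent partition, and gives each guest its own drivers, so a guest driver fault stays contained to that one VM:

```python
# Hypothetical model of a microkernelized design -- illustration only, not real Hyper-V code.
# The hypervisor layer stays thin (CPU/memory), the parent partition owns the physical
# storage/network drivers, and each guest VM carries its own drivers.

class MicrokernelizedHost:
    def __init__(self, parent_drivers):
        self.hypervisor = {"cpu", "memory"}           # handled by the thin hypervisor layer
        self.parent_partition = dict(parent_drivers)  # physical storage/network drivers
        self.guest_drivers = {}                       # per-VM driver sets

    def add_vm(self, name, drivers):
        self.guest_drivers[name] = set(drivers)

    def guest_driver_fault(self, vm_name, driver_name):
        # A fault in a guest's own driver is contained to that guest.
        if driver_name in self.guest_drivers.get(vm_name, set()):
            print(f"Fault in '{driver_name}' inside {vm_name} affects only: [{vm_name!r}]")
            return [vm_name]
        return []

    def parent_fault(self):
        # The trade-off: if the parent partition itself goes down,
        # every guest loses its storage/network path.
        affected = list(self.guest_drivers)
        print(f"Parent partition failure affects storage/network for: {affected}")
        return affected


host = MicrokernelizedHost({"storage": "parent_storage_driver", "network": "parent_network_driver"})
host.add_vm("web01", {"guest_net_driver", "guest_storage_driver"})
host.add_vm("db01", {"guest_net_driver", "guest_storage_driver"})

host.guest_driver_fault("web01", "guest_net_driver")  # contained to web01
host.parent_fault()                                   # the downside discussed below
```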

There is one downside to the microkernelized methodology. Because the parent VM provides access to storage and networking, were it ever to have issues (crash, hang, etc.), your VMs could be affected as well. This is why it is generally best practice to run the Windows Server OS in the parent partition in Core mode, so as to reduce the amount of bloat and unneeded software. This simplifies the OS and makes it much less likely that a malfunction will occur.

Summary

Hopefully this has been helpful. If you keep these underlying concepts in mind when going about your daily administration tasks, it can make troubleshooting easier, and it will make more sense why things are set up the way they are.

Thanks for reading.
