[Repost] KVM VirtIO paravirtualized drivers: why they matter

http://www.ilsistemista.net/index.php/virtualization/42-kvm-virtio-paravirtualized-drivers-why-they-matter.html?limitstart=0

As you probably already know, there are basically two different schools in the virtualization camp:

  • the para-virtualization one, where a modified guest OS uses specific host-side syscalls (hypercalls) to do its “dirty work” with physical devices
  • the full hardware virtualization one (HVM), where the guest OS runs unmodified and the host system “traps” when the guest tries to access a physical device

The two approaches are vastly different: the former requires extensive kernel modifications on both the guest and host OSes, but gives you maximum performance, as both kernels are virtualization-aware and so are optimized for the typical workloads they experience. The latter approach is totally transparent to the guest OS and often does not require many kernel-level changes on the host side, but, as the guest OS is not virtualization-aware, it generally delivers lower performance.
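As a quick aside (not from the original article): on a Linux host, whether HVM guests can run at all comes down to the CPU's virtualization extensions and the kvm kernel modules, which is easy to check:

    # Host-side sanity check: HVM needs Intel VT-x or AMD-V plus the kvm modules
    egrep -c 'vmx|svm' /proc/cpuinfo    # a count > 0 means the extensions are present
    lsmod | grep kvm                    # expect kvm plus kvm_intel (or kvm_amd)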

So it appears that you have to make a conscious choice between performance and guest OS compatibility: the paravirtualized approach prioritizes performance, while the HVM one prioritizes compatibility. However, in this case it is possible to have the best of both worlds: by using paravirtualized guest device drivers in an otherwise HVM environment, you can have both compatibility and performance.

In short, a paravirtualized device driver is a limited, targeted form of paravirtualization, useful when running specific guest OSes for which paravirtualized drivers are available. While being largely transparent to the guest OS (you simply need to install a driver), it relieves the virtualizer from emulating a real physical device (a complex operation, as it must emulate registers, ports, memory, etc.), replacing the emulation with host-side hypercalls. The KVM-based framework for writing paravirtualized drivers is called VirtIO.
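To make the difference concrete, here is a hypothetical QEMU/KVM invocation contrasting the two device models; the disk image path, memory size and network backend are placeholders, not the setup used in this article:

    # Fully emulated devices: IDE disk + Intel e1000 NIC (maximum compatibility)
    qemu-system-x86_64 -enable-kvm -m 1024 \
        -drive file=guest.img,if=ide \
        -netdev user,id=net0 -device e1000,netdev=net0

    # Paravirtualized devices: VirtIO disk + VirtIO NIC (guest drivers required)
    qemu-system-x86_64 -enable-kvm -m 1024 \
        -drive file=guest.img,if=virtio \
        -netdev user,id=net0 -device virtio-net-pci,netdev=net0

Only the virtual hardware changes: the same image boots either way, provided the guest has the matching driver installed.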

Things are much more complex than this, of course. Anyway, in this article I am not going to explain in detail how a paravirtualized driver works, but to measure the performance implications of using one. Being a targeted form of paravirtualization requiring guest-specific drivers, VirtIO is obviously restricted to the areas where it matters most, so the disk and network subsystems are prime candidates for these paravirtualized drivers. Let's see if, and how, both Linux (CentOS 6 x86-64) and Windows (Win2012R2 x64) benefit from that paravirtualized goodness.

Testbed and methods

All tests ran on a Dell D620 laptop. The complete system specifications are:

  • Core2 T7200 CPU @ 2.0 GHz
  • 4 GB of DDR2-667 RAM
  • Quadro NVS110 video card (used in text-only mode)
  • a Seagate ST980825AS 7200 RPM 80 GB SATA hard disk drive (in IDE compatibility mode, as the D620's BIOS does not support AHCI operation)
  • CentOS 6.5 host-side OS with kernel version 2.6.32-431.1.2.0.1.el6.x86_64
  • a 512 MB ramdisk used for disk speed measurements (one way to set this up is sketched just below)
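The article does not say exactly how the ramdisk was created; a minimal host-side sketch, assuming the brd kernel module, could look like this:

    # Create a single 512 MB ramdisk (rd_size is in KiB); it appears as /dev/ram0
    modprobe brd rd_nr=1 rd_size=524288
    ls -l /dev/ram0    # block device that can back the guest's test disk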

On the guest side, we have:

  • a first CentOS 6.5 guest (kernel version 2.6.32-431.1.2.0.1.el6.x86_64)
  • a second Windows 2012 R2 x64 virtual machine

The VirtIO paravirtualized drivers are already included in the standard Linux kernel, so no special action or installation was needed for the CentOS guest. On the Windows guest, I installed the VirtIO disk and network drivers from the virtio-0.1-74.iso package.
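Inside the CentOS guest, it is easy to confirm that the paravirtualized drivers are actually in use; the device names below are the usual defaults, not something the article specifies:

    lsmod | grep virtio    # expect virtio_blk, virtio_net, virtio_pci, ...
    ls /dev/vda            # VirtIO disks show up as /dev/vdX rather than /dev/sdX
    ethtool -i eth0        # reports "driver: virtio_net" for a paravirtualized NIC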

For quick disk benchmarks, I used dd on the Linux side and ATTO on the Windows one. To put additional strain on the guest disk subsystem and the host virtualizer, I ran all disk tests against a ramdisk: this way I was sure that any differences were not masked by the slow mechanical disk. Networking speed was measured with the same tool on both VMs: iperf, version 2.0.5.

Host CPU load was measured using mpstat.
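The article does not list the exact invocations, so the following is a hedged reconstruction; block size, duration, target device and server address are my guesses:

    # Disk: sequential write against the ramdisk-backed test disk, with
    # O_DIRECT to bypass the guest page cache and exercise the virtual I/O path
    dd if=/dev/zero of=/dev/vdb bs=1M count=512 oflag=direct

    # Network: iperf 2.0.5 server on one endpoint...
    iperf -s
    # ...and client on the other; -r tests each direction separately
    iperf -c 192.168.122.1 -t 30 -r

    # Host CPU load, sampled every second while a test runs
    mpstat 1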

Ok, let's see the numbers...

CentOS 6 x86-64 guest

The first graph shows CentOS 6 guest disk speed with and without the paravirtualized driver:

[Chart: CentOS 6 guest disk speed, native vs. VirtIO vs. emulated IDE]

Native performance is included for reference only. We can see that the paravirtualized disk driver provides a good speedup over the standard virtualized IDE controller. Still, both approaches are far behind the native scores.

Net speed now:

[Chart: CentOS 6 guest network speed, native vs. VirtIO vs. E1000 vs. RTL8139]

In this case the paravirtualized network driver makes a huge difference: while it can't touch native speed, it is way ahead of the virtualized E1000 NIC. The RTL8139 was benchmarked out of pure curiosity, and it shows a strange behavior: while output speed is in line with the emulated NIC's nominal speed (100 Mb/s), input speed is much higher (~400 Mb/s). Strange, but true.

While host CPU load is lower with the fully virtualized NICs, that is only because they deliver much lower performance. In other words, the Mb/s per CPU load ratio is much higher with the paravirtualized network driver.

Windows 2012 R2 x64 guest

Let's see if the Windows guest has some surprises for us. Disk benchmark first:

[Chart: Windows 2012 R2 guest disk speed, VirtIO vs. emulated IDE]

This time, the fully virtualized IDE driver lags far behind the paravirtualized driver. In other words: always install the paravirtualized driver when dealing with Windows guests.
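One practical caveat, which is my addition rather than something covered by the article: Windows cannot boot from a VirtIO disk before the driver is installed, so the driver is usually bootstrapped through a temporary second disk, roughly like this (image paths are placeholders):

    # Boot from the IDE disk, expose a small dummy VirtIO disk so Windows
    # detects new hardware and the driver can be installed from the ISO;
    # afterwards, switch the boot disk itself to if=virtio
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=win2012r2.img,if=ide \
        -drive file=dummy.img,if=virtio \
        -drive file=virtio-win.iso,media=cdrom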

Network, please:

[Chart: Windows 2012 R2 guest network speed, VirtIO vs. fully virtualized NICs]

The paravirtualized driver continues to be much better than the fully virtualized NICs.

Conclusions

It is obvious that the paravirtualized drivers are an important piece of the KVM ecosystem. While the fully virtualized drivers are quite efficient and the only way to support a large variety of guest OSes, you should really use a paravirtualized driver whenever one is available for your guest virtual machine.

Obviously performance is only part of the equation, stability being even more important. Anyway, I found the current VirtIO driver release very stable, at least with the tested guests.

In short: when possible, use the VirtIO paravirtualized drivers!
