I/O devices are generally the most difficult hardware to virtualize in a
high-performance fashion. Traditional approaches use device emulation, whereby
the hypervisor exposes a set of hardware interfaces – device memory regions
and I/O ports, mainly – that mimic the appearance of real hardware devices.
This allows the guest to load its unmodified drivers to manage them as if they
were real hardware. While this makes it possible to run unmodified guest
operating systems, this approach is problematic because it requires a lot of
hypervisor code to emulate devices, buses, chipsets, etc., and also because
this emulation leads to a large number of traps in response to operations on
the emulated hardware (such as reads/writes on emulated I/O ports). To address
these issues, many modern hypervisors implement paravirtual devices, including
virtio devices in KVM/Linux systems. Though paravirtual devices require
changes to guest device drivers, they lead to simpler device interfaces and
higher performance in virtual machines.
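To make this concrete, here is one way to see paravirtual devices from inside a running guest. This is an optional sketch that assumes a Linux guest with virtio devices attached (as in the VM you will boot below):
ls /sys/bus/virtio/devices
cat /sys/bus/virtio/devices/*/device   # virtio device IDs (e.g., 1 = network, 2 = block)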
In this studio, you will measure disk and network I/O performance in a virtual machine that uses virtio paravirtual devices, and compare it against the host and against an emulated network device.
Please complete the required exercises below. We encourage you to work in groups of 2 or 3 people on each studio (groups may change from studio to studio), though you may complete any studio by yourself if you prefer.
As you work through these exercises, please record your answers, and when finished upload them along with the relevant source code to the appropriate spot on Canvas.
Make sure that the name of each person who worked on these exercises is listed in the first answer, and make sure you number each of your responses so it is easy to match your responses with each exercise.
As the answer to the first exercise, list the names of the people who worked together on this studio.
As you did for the Virtual Machine Performance with QEMU and KVM studio, boot an Ubuntu VM using QEMU/KVM on your Raspberry Pi. We are going to need more memory in the VM, so run with 512MB instead of 256MB as follows:
sudo qemu-system-aarch64 -M virt -cpu host -m 512M -smp 2 -nographic -bios QEMU_EFI.fd -cdrom seed-kvm-bionic-01.iso -drive if=none,file=bionic-image-01.img,id=hd0 -device virtio-blk-device,drive=hd0 -device virtio-net-device,netdev=vmnic -netdev user,id=vmnic -accel kvm
Note the two "-device
" options on the QEMU command line. These
tell QEMU to expose two virtio devices to the guest: a block device
(virtio-blk-device
) and a network device
(virtio-net-device
).
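If you are curious which device models your QEMU build supports, you can ask QEMU to list them. This optional check assumes the same qemu-system-aarch64 binary as above:
qemu-system-aarch64 -device help | grep -E 'virtio-(blk|net)-device'   # confirm both models exist
qemu-system-aarch64 -device virtio-net-device,help                     # list the model's properties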
Once the guest has booted, log into it (recall from the previous studio that
the username is ubuntu and the password is cse522studio unless you set a
different password).
Then, from the terminal of the virtual machine, run the following command:
lsmod | grep virtio
This will determine which virtio device drivers have been loaded into the
guest kernel. As the answer to this exercise, show the output from this
command.
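Note that lsmod only lists drivers loaded as modules; if some virtio drivers are built directly into your guest kernel, they won't appear there. As an optional cross-check (a sketch; dmesg may require sudo on some systems), the guest's boot log records which virtio devices were probed:
dmesg | grep -i virtio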
Next, you'll need to disable the automatic update feature in Ubuntu Server, to keep it from preventing you from installing new software packages. This can be done easily with the following commands:
sudo apt-get update
sudo systemctl mask apt-daily.service apt-daily-upgrade.service
sudo systemctl disable apt-daily.service apt-daily.timer apt-daily-upgrade.service apt-daily-upgrade.timer
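If you want to verify that the timers are actually disabled before continuing, the following optional check should show them as masked or inactive (unit names assumed to match those above):
systemctl list-timers --all | grep apt
systemctl status apt-daily.timer apt-daily-upgrade.timer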
Download two new applications in the VM that we will use to evaluate
performance: fio, which we'll use for disk I/O performance analysis, and
iperf, which we'll use for network performance analysis:
sudo apt install fio iperf
(If you receive an error message indicating that apt can't proceed because a
file is locked, you may need to reboot the VM to apply the changes you made to
the automatic update settings in the guest OS.)
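A quick way to confirm both tools installed correctly is to print their versions (optional; flag spellings assumed for the packaged fio and iperf 2 binaries):
fio --version
iperf --version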
We'll first analyze guest file I/O performance with fio. For more information,
see the online man page or issue the command:
man 1 fio
As is standard for I/O performance analysis, we'll evaluate two different parameters: workloads that are read vs write heavy, and workloads that access data sequentially vs randomly. First, we'll evaluate sequential read performance with the following command:
sudo echo 3 | sudo tee /proc/sys/vm/drop_caches && fio --thread --name=read --rw=read --size=256M --filename=file.dat && rm file.dat
Note that we are first clearing the guest kernel's page cache (along with
cached dentries and inodes) by writing 3 to the /proc/sys/vm/drop_caches file,
and then running the fio program. When fio completes, it will write a line to
standard output that starts with: READ: bw=..... Record the bandwidth
measurement, and then also run the sequential write, and then random
read/write benchmarks:
sudo echo 3 | sudo tee /proc/sys/vm/drop_caches && fio --thread --name=write --rw=write --size=256M --filename=file.dat && rm file.dat
sudo echo 3 | sudo tee /proc/sys/vm/drop_caches && fio --thread --name=randread --rw=randread --size=256M --filename=file.dat && rm file.dat
sudo echo 3 | sudo tee /proc/sys/vm/drop_caches && fio --thread --name=randwrite --rw=randwrite --size=256M --filename=file.dat && rm file.dat
As the answer to this exercise, show the bandwidth measurements for these 4 workloads, and explain how and why they are similar or different.
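If you prefer to collect the bandwidth numbers programmatically rather than reading them off the terminal, fio can also emit machine-readable output. The sketch below assumes a recent fio version and that jq has been installed separately; it prints the read bandwidth in KiB/s:
sudo echo 3 | sudo tee /proc/sys/vm/drop_caches && fio --thread --name=read --rw=read --size=256M --filename=file.dat --output-format=json | jq '.jobs[0].read.bw' && rm file.dat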
Once you are done, shut down the virtual machine:
sudo shutdown -h now
Now, we will download and run the same workloads on the host to determine how much performance overhead exists.
Repeat the previous exercise, but from a terminal on the host system, rather than within the virtual machine. You should not have to disable the automatic updates in the host, as you likely already did this in the previous studio.
As the answer to this exercise, show the bandwidth data for all 4 workloads, and determine how much performance overhead exists for each in comparison to the guest environment. If some workloads exhibit significantly more virtualized overhead than others, provide a possible explanation for why that might happen, using your knowledge of device I/O behavior and how the I/O is virtualized.
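When comparing, it may help to express the overhead of each workload as a percentage. A minimal sketch follows; the numbers here are placeholders, not expected results:
host_bw=220    # hypothetical host bandwidth, MB/s
guest_bw=180   # hypothetical guest bandwidth, MB/s
echo "scale=1; 100 * ($host_bw - $guest_bw) / $host_bw" | bc   # percent overhead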
We will now investigate virtualized networking performance by
transferring data between the host and the guest over a virtualized network.
First, determine the public IP address of the host by using the ifconfig
command and looking for the IPv4 address associated with either your Ethernet
or WiFi device, depending on which network you are using for Internet access.
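If ifconfig is not available on your host (it is provided by the net-tools package), either of the following commands reports the same addresses:
ip -4 addr show
hostname -I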
You will use iperf to perform network throughput tests between the host and
the guest. For more information, see the online man page or issue the command:
man 1 iperf
Now, start up an iperf server on the host with the following command:
iperf -s -p 8080
In a separate terminal, again boot the VM using the command from above. In the
VM, run an iperf client with:
iperf -c $SERVER_IP -p 8080
replacing $SERVER_IP with the host's IP address you found above. This command
will run a TCP networking test for a few seconds, and then report the
networking bandwidth it sustained. Run this test 3 times, and as the answer to
this exercise record the median bandwidth from the 3 experiments.
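If you'd like to run the three trials back to back, a simple loop works (replace $SERVER_IP as above):
for i in 1 2 3; do iperf -c $SERVER_IP -p 8080; done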
Tear down the VM (with sudo shutdown -h now). We will now boot the VM with an
emulated networking device, which will appear to the guest as an e1000 Intel
Gigabit Ethernet device rather than a virtio device.
To do this, modify your QEMU command line, removing:
-device virtio-net-device,netdev=vmnic
and replacing it with
-device e1000,netdev=vmnic
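For reference, the full modified command line should look like the following (same image and firmware file names as before):
sudo qemu-system-aarch64 -M virt -cpu host -m 512M -smp 2 -nographic -bios QEMU_EFI.fd -cdrom seed-kvm-bionic-01.iso -drive if=none,file=bionic-image-01.img,id=hd0 -device virtio-blk-device,drive=hd0 -device e1000,netdev=vmnic -netdev user,id=vmnic -accel kvm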
Boot and log back into the guest, and again run the following command to
determine which virtio drivers are now loaded into the guest kernel:
lsmod | grep virtio
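You may also want to confirm that the guest is now using the emulated NIC. An optional check (the e1000 driver may instead be built into the guest kernel, in which case only the dmesg line will show it):
lsmod | grep e1000
dmesg | grep -i e1000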
Then, run the same iperf experiment 3 times. As the answer to this exercise,
report the output of the lsmod command, again report the median of the 3 iperf
experiments, and determine the performance loss compared with the virtio
network device. Based on your knowledge of virtio, explain (briefly) why you
think the performance is different in the 2 environments.