(Since writing this, much of the syntax in Xen 4.4 is different. Debian Wheezy still ships with the older versions, but Ubuntu users will likely need to refactor some of this).
Since first running Connectix Virtual PC on my childhood blueberry iMac DV, I've been unhealthily obsessed with virtualisation. Long time readers of Rubénerd would have seen me post about the first beta releases of Parallels Desktop in 2006, VMware's consumer and enterprise fare, the versatile QEMU and the nostalgic DOSBox and ScummVM.
Now that you've cleared the pointless introduction, this post will be exploring Xen, the system that powers much of the world's cloud infrastructure, and the more discerning Linux and NetBSD users' personal virtual machine collections.
Xen is a bare-metal hypervisor that runs virtual machines. Once you've provided it with a configuration file, the system spins up your virtual machine, which (by default) you can access over a serial console, VNC or a few other remote access protocols. Of course, once you've enabled an SSH daemon and networking in your guests, you can access them that way too.
Broadly speaking, your Xen host is referred to as dom0, for domain 0. Guests are referred to as domU, and can be started either fully virtualised with HVM, or paravirtualised (PV) if the guest OS supports it. What's the friggen difference? More on that shortly.
Once you've decided to try Xen, the next step is finding a hypervisor-compatible OS. Those who know me wouldn't be surprised to know I first tried Xen on NetBSD; of all the (albeit limited) non-Linux options, NetBSD's dom0 Xen support is superb. For most of you though, I'm assuming you'll want to use Linux. Debian is what we use at the office and where I have most Xen experience, so that's what we'll be looking at here.
At a bare minimum, you'll also need a system with Intel VT-x or AMD-V support. Most decent "modern" systems have these, but this website is a great resource for checking what your CPU supports. For full hardware-assisted virtualisation (HVM), you can check for Intel VMX or AMD SVM support by searching for the flags:
egrep '(vmx|svm)' /proc/cpuinfo
Once you have your OS of choice installed, these are roughly the steps to get started quickly:
- Install Xen
- Define a network bridge
- Partition a drive, ideally with GPT and logical volumes. Otherwise, create a raw disk image
- Define your new VM
- Start your VM
- Access your VM
Installation and configuration
As a tinkerer myself, I appreciate the urge to try something quickly. This is arguably the bare minimum you'll need to do to get started; you'll want to tune your system afterwards for an optimal setup.
To install Xen on Debian, grab the following:
# apt-get install xen-linux-system
Next, define a network bridge in
/etc/network/interfaces for your domUs to access. In this case, I'm defining an unremarkable
xenbr0 on my eth0 interface:

# Ruben's Xen bridge
auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth0
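With the stanza in place, you can bring the bridge up and check it picked up your physical interface without rebooting. A quick sketch, assuming the bridge-utils package is installed:

```shell
# Bring the new bridge up using the stanza above
ifup xenbr0

# Confirm xenbr0 exists and eth0 is attached to it
brctl show xenbr0
```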
As with most hypervisors, you have the choice to use a partition or disk image for domU storage. Using GPT and logical volumes is beyond the scope of this post (aka, stay tuned), but seems to be the accepted standard.
For now, we can create a disk image for our domUs with the following.
$ qemu-img create guest.img 5G
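It's worth sanity-checking the result before pointing a domU at it. A sketch, reusing the example name and size from above:

```shell
# Create a 5 GB raw image explicitly, then inspect it;
# "info" reports the format and virtual size
qemu-img create -f raw guest.img 5G
qemu-img info guest.img
```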
How to run this sucker
Here's where we have to make a decision about how to run our VM. In Xen, we can run using paravirtualisation (PV) or HVM. Briefly:
PV uses some of the dom0's resources directly, including drivers and drive volumes. The benefit is far greater performance under some circumstances, though the domU needs kernel support. xen-tools can automate the installation of some PV domUs, but for others it can be quite a bit of work.
HVM virtualises the entire hardware stack, meaning most OSs can run in it without modification. This is more like what you may be used to in other contemporary hypervisors on the desktop and otherwise. Recent OSs (such as FreeBSD 10) include so-called PVHVM drivers, which provide PV-style drivers for use in HVM, giving you the best of both worlds.
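As an aside, the xen-tools package mentioned above can bootstrap a Debian PV domU in one command. A hedged sketch — the hostname, memory, size and distribution here are placeholder values, and your defaults live in /etc/xen-tools/xen-tools.conf:

```shell
# Bootstrap a Debian Wheezy PV guest on our bridge;
# this also writes a domU config under /etc/xen/
xen-create-image \
    --hostname=testpv \
    --memory=256M \
    --size=5G \
    --dist=wheezy \
    --bridge=xenbr0 \
    --dhcp
```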
With that in mind, let’s make an HVM. In this basic example, I'm creating a FreeBSD 10 domU config. You may need to adjust the Xen paths for your system.
## Ruben's freebsd.cfg file for FreeBSD HVM
kernel = "/usr/lib/xen-4.1/boot/hvmloader"
device_model = "/usr/lib/xen-4.1/bin/qemu-dm"
builder = "hvm"
memory = "256"
name = "freebsd"

## Enable VNC access
vnc = 1
vnclisten = "0.0.0.0"

## Virtual file devices
## Attempt to boot from "c" (hard drive) first,
## then boot "d" (cdrom). Same flags as QEMU
boot = 'cd'
disk = [
    'file:/var/vm/freebsd.img,hda,w',
    'file:/var/vm/freebsd-10.iso,hdc:cdrom,r'
]

## Virtual network interface
vif = [ 'bridge=xenbr0' ]
Now we can launch our new VM! Depending on your local install, you'll either want to use
xl or the older xm:

# xl create freebsd.cfg
# xm create freebsd.cfg
You can confirm the machine is running with:

# xm list
Name          ID   Mem VCPUs [..]
Domain-0       0  8096     2 [..]
freebsd        1   256     1 [..]
If you have an X server locally running, you can now preview with:
$ vncviewer :0
To access VNC from another machine, one option is to use an SSH tunnel:
$ ssh -X <your Xen machine IP>
$ vncviewer :0
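If X forwarding is sluggish or unavailable, another option is a plain SSH port forward. This sketch assumes the domU's VNC display is :0, which listens on TCP port 5900 on the Xen host:

```shell
# Forward local port 5900 to the VNC server on the dom0
ssh -L 5900:localhost:5900 <your Xen machine IP>

# In another terminal on your local machine,
# point the viewer at the forwarded port
vncviewer localhost:0
```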
Done and done
And there you have it, your own Xen machine! Ideally, your next steps will be to install your domU guest as normal, then configure console access and/or SSH so you can access it remotely without VNC. These will be discussed in future posts, and linked back to here.
Initially configuring Xen can be time consuming, but it's a lot of fun and you'll be rewarded with a high performance platform to run your workloads.