Friday, 11 June 2010

VirtualBox as an Enterprise Server Virtualization Solution

Introduction

Some posts ago I quickly argued that, in some scenarios, VirtualBox might be used as a server virtualization platform instead of relying on more complex enterprise solutions such as Oracle VM or VMware. VirtualBox is a great Type 2 hypervisor that has been growing rapidly in the past few years and now supports a wide range of both host and guest operating systems. Although VirtualBox is the heart of Sun/Oracle's desktop virtualization offering, and although Solaris ships with Xen as a Type 1 hypervisor, I argue that VirtualBox is a solution to seriously take into consideration, especially when using Solaris as the host operating system, since VirtualBox can leverage Solaris features such as:
  • ZFS.
  • Crossbow (network virtualization and resource control).
  • RBAC, Projects and Resource control.

Solaris comes with other virtualization technologies such as Zones and Containers. If you need another Solaris instance, the quickest way to virtualize one is to create a zone. If you're using Solaris, then, you might want to consider Zones before reaching for a Type 1 hypervisor. Having said that, VirtualBox might help you when you're running Zones alongside other guest operating systems: instead of dedicating some physical machines to zones and other physical machines to a Type 1 hypervisor such as Oracle VM or VMware, you might want to consider OpenSolaris' Xen or VirtualBox.

OpenSolaris' Xen is a Type 1 hypervisor built on the Solaris kernel: as such, it virtualizes guest OSs alongside Solaris Zones on the same physical machine. VirtualBox, being a Type 2 hypervisor, can be executed on a Solaris host alongside Zones as well.

This post is a quick walkthrough describing how VirtualBox can be used as a server virtualization platform in a Solaris environment.

Installing VirtualBox

Installing VirtualBox on the Solaris Operating System is very easy. Download VirtualBox, gunzip and untar the distribution (please substitute [virtualbox] with the actual file name of the software bundle you downloaded):

$ gunzip [virtualbox].tar.gz
$ tar xf [virtualbox].tar

If you're upgrading VirtualBox you've got to remove the previous version before installing the new one:

# pkgrm SUNWvbox

Install the VirtualBox native packages:

# pkgadd -d ./[virtualbox].pkg

Clone a Virtual Machine with Solaris ZFS

After installing a guest OS instance, ZFS snapshots and clones may help you save both time and storage space. ZFS allows you to instantly snapshot a file system and, optionally, to clone that snapshot, promoting it to a writable ZFS file system of its own. This way, for example, you could:
  • Install a guest instance (such as a Debian GNU/Linux guest).
  • Take a snapshot of the virtual machine.
  • Clone it as many times as you need.

Not only will you save precious storage space: you'll be running a set of identical virtual machines in practically no time. If you need to upgrade your guest OS, you can upgrade the initial image and then snapshot and clone it again. If you carefully plan and analyze your requirements in advance, ZFS snapshots and clones can add real value to your virtual machine deployments.

In an older post I put together a quick walkthrough of ZFS snapshots and clones.
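
As a rough sketch, assuming the guest's disk images live on a dedicated ZFS file system (the dataset and snapshot names below are hypothetical):

$ zfs snapshot tank/vbox/debian-base@golden
$ zfs clone tank/vbox/debian-base@golden tank/vbox/debian-clone1
$ zfs clone tank/vbox/debian-base@golden tank/vbox/debian-clone2

Each clone initially shares its blocks with the snapshot, so it occupies almost no additional space until it starts diverging from the original image.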

Solaris Network Virtualization

One of the showstoppers that, years ago, prevented me from using VirtualBox in a server environment was the lack of a network virtualization layer. Basically, you were left with choices that were unsuitable for configuring your guests' networks in a server environment:
  • NAT. NAT was neither flexible nor easy to administer: since you were NAT-ting many guests onto the same physical cards, you would quickly find yourself in a "port hell."
  • Dedicated adapter. This is obviously the most flexible option, but it has a major problem: physical network adapters come in finite numbers. You encounter the same problem when configuring Solaris Zones.

The solution to all of these problems is called "Crossbow." You can read a previous blog post to discover Solaris network virtualization and get started with it.

VirtualBox introduced a feature, called Bridged Networking, that lets guests use host NICs (both physical and virtual) through a "net filter" driver. When using VirtualBox Bridged Networking with Crossbow, please take into account the following:
  • A Crossbow VNIC cannot be shared between guest instances.
  • A Crossbow VNIC and the guest network interface must have the same MAC address.

Since Crossbow lets you easily create as many virtual NICs as you need, the previous points aren't a real issue anyway.

After creating a VNIC for the exclusive use of a VirtualBox guest, you won't even need to plumb it and bring it up: VirtualBox will do that for you.
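
For example, on a recent OpenSolaris build a VNIC dedicated to a guest can be created over a physical link with dladm (the link and VNIC names below are hypothetical):

$ dladm create-vnic -l e1000g0 vnic1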

Configuring Bridged Networking

To configure bridged networking over a VNIC for a VirtualBox guest you can use the VirtualBox UI or the VirtualBox command line interface utilities, such as VBoxManage:

$ VBoxManage modifyvm <uid|name>
  --nic<1-N> bridged
  --bridgeadapter<1-N> devicename 
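
For instance, assuming a hypothetical guest named debian1 bridged over the vnic1 VNIC created earlier:

$ VBoxManage modifyvm debian1 \
  --nic1 bridged \
  --bridgeadapter1 vnic1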

Configuring SMP

VirtualBox introduced SMP support some versions ago. That was a huge step forward: guests can now be assigned a number of CPUs to execute on. As usual, you can use either the VirtualBox UI or the CLIs to configure your guests. The VBoxManage options related to CPU management are summarized below:

$ VBoxManage modifyvm <uid|name>
  --cpus <number>
  --cpuidset <leaf> <eax> <ebx> <ecx> <edx>
  --cpuidremove <leaf>
  --cpuidremoveall
  --cpuhotplugging <on|off>
  --plugcpu <id>
  --unplugcpu <id>
  
The option names are self-explanatory. Nevertheless, if you need further information, please check the official VirtualBox documentation.
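
For instance, to assign two virtual CPUs to the hypothetical debian1 guest used in the previous examples:

$ VBoxManage modifyvm debian1 --cpus 2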

Controlling VirtualBox Guest Resources with Solaris Resource Control

An interesting feature of Solaris is its resource control facility. You can, in fact, execute VirtualBox guests inside a Solaris Project and apply fine-grained resource control policies to each of your running guests. That means, for example, that a VirtualBox guest with two CPUs can be executed in the context of a Solaris Project whose resource control policy limits its CPU usage (project.cpu-cap) to 150%: although the guest may use its two CPUs concurrently, the total CPU it consumes is capped at 150% (where 100% corresponds to one full CPU).

To apply resource control policies to your guests, one strategy could be the following:
  • Create a user for every set of guests that will be subject to the same resource control policy.
  • Create a default project for each of these users and define the resource control policies you need to apply.
  • Execute the VirtualBox guests as the corresponding users.

This way, Solaris will automatically apply the default resource control policies to every process owned by those users, including the VirtualBox guest instances themselves.
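
A minimal sketch, assuming a dedicated vboxguests user (hypothetical) already exists and should run its guests under a 150% CPU cap:

# projadd -U vboxguests -K "project.cpu-cap=(privileged,150,deny)" user.vboxguests

Since the project follows the user.username naming convention, Solaris will treat it as the vboxguests user's default project, and any guest launched by that user will run under the cap.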

For a walkthrough to get started with Solaris Projects and Resource Controls, you can read a previous blog post.

Controlling a VirtualBox Guest Remotely

To control a VirtualBox guest remotely you can use the VirtualBox command line interfaces, such as VBoxManage. With VBoxManage, for example, you will be able to (see the example below):
  • Create guest instances.
  • Modify guest instances.
  • Start, pause and shut down guest instances.
  • Check the status of your instances.
  • Teleport instances on demand to another machine.
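
For example, to check which guests are currently running and inspect one of them (the guest name is hypothetical):

$ VBoxManage list runningvms
$ VBoxManage showvminfo debian1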

Starting a VirtualBox Guest

To start a VirtualBox guest remotely you can use VBoxManage or the specific backend command. VBoxManage will start an instance with the following syntax:

$ VBoxManage startvm <uid|name>
  [--type gui|sdl|vrdp|headless]

VBoxManage startvm has been deprecated in favor of the specific backend commands. Since in a server environment you will probably launch guests with the headless backend, the suggested command will be:

$ VBoxHeadless --startvm <uid|name>

Please take into account that VBoxHeadless will not return until the guest has terminated its execution. To start a guest with VBoxHeadless over a remote connection to your server, then, you should use nohup so that the guest does not terminate when your shell exits:

$ nohup VBoxHeadless --startvm <uid|name> &

What if my ssh session won't exit?

You might experience a strange issue with VBoxHeadless: after issuing the previous command, the ssh session will seem to hang until the guest's execution terminates. This issue is not caused by VBoxHeadless but by (Open)SSH's behavior. Please read this post for an explanation. Meanwhile, the workarounds I'm aware of are the following: either invoke VBoxHeadless using /dev/null as standard input:

$ nohup VBoxHeadless --startvm <uid|name> < /dev/null &

or manually terminate the ssh session with the ~. escape sequence after issuing the exit command.

Accessing a Remote VirtualBox Guest with RDP

VirtualBox has a built-in RDP facility that lets you access a guest's console remotely using the RDP protocol. If you start a headless guest, the VirtualBox RDP server is enabled by default. To access the instance remotely, then, a suitable client such as rdesktop (for UNIX systems) is sufficient.
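
For example, assuming the guest's VRDP server is listening on the default RDP port (the host name here is hypothetical):

$ rdesktop myserver.example.com:3389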

Stopping a VirtualBox Guest

To stop a VirtualBox guest you could either:
  • Launch the shutdown sequence on the guest itself, which is the procedure I recommend.
  • Use VBoxManage controlvm to send a suitable signal to the guest, such as acpipowerbutton, acpisleepbutton or a hard poweroff:

$ VBoxManage controlvm <uid|name>
  pause|resume|poweroff|savestate|
    acpipowerbutton|acpisleepbutton

As outlined in the syntax of the preceding example, VirtualBox also lets you pause a virtual machine or even save its state to disk for a later quick resume.
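
For example, sending an ACPI power button event to the hypothetical debian1 guest lets its operating system run its own clean shutdown sequence:

$ VBoxManage controlvm debian1 acpipowerbutton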

Teleporting a VirtualBox Guest to Another Server

VirtualBox now supports teleporting of guest instances. Teleporting lets you move a running instance to another server with minimal service disruption. To teleport a (teleport-enabled) VirtualBox guest to another (VirtualBox-enabled) machine, you can just issue the following command:

$ VBoxManage controlvm <uid|name> \
  teleport --host <name> --port <port> \
  --maxdowntime <ms> \
  --password <passwd>

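For the teleport to succeed, the target machine must already be running a compatible, teleport-enabled instance waiting for the incoming request. A rough sketch on the target side (the port number is arbitrary):

$ VBoxManage modifyvm <uid|name> --teleporter on --teleporterport 6000
$ VBoxHeadless --startvm <uid|name>
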
Flush Requests

VirtualBox, by default, might ignore IDE/SATA FLUSH requests issued by its guests. This is an issue if you're using Solaris ZFS, which, by design, assumes that FLUSH requests are never ignored. In that case, just configure your virtual machine not to ignore such requests.

For IDE disks:

$ VBoxManage setextradata "name" \
  "VBoxInternal/Devices/piix3ide/0/LUN#[x]/Config/IgnoreFlush" 0

The x parameter is:

Value   Description
0       Primary master
1       Primary slave
2       Secondary master
3       Secondary slave


For SATA disks:

$ VBoxManage setextradata "name" \
  "VBoxInternal/Devices/ahci/0/LUN#[x]/Config/IgnoreFlush" 0

In this case the x parameter is just the disk number.

Next Step

As you can see, VirtualBox is a sophisticated piece of software which is now ready for basic enterprise server virtualization. This post just shows you the beginning, though: VirtualBox offers many other features I haven't covered (yet). The Solaris operating system, when used as a host, offers rock-solid enterprise services that will enhance your overall VirtualBox experience.

If you're planning to virtualize guest operating systems in your environment and your requirements fit the picture, I suggest you strongly consider using VirtualBox on a Solaris host.

If you already use Solaris, VirtualBox will live alongside other Solaris virtualization facilities such as Solaris Zones.








