
Sunday, 13 November 2011

Network Configuration in Solaris 11 with NWAM Profiles

Oracle has just released the Solaris 11 operating system, the new production-ready major Solaris release. It's an "interesting" release since it's the first major Solaris release under the Oracle hegemony, and it inherits all of the technologies many of us have been using, in the last few years, in the Solaris Express and OpenSolaris releases that Sun Microsystems used to provide.

This blog post is the first part of a series of quick wrap-ups to help the impatient quickly start and configure their new Solaris 11 servers. My advice is always the same: read the manual.

Network Configuration Changes in Solaris 11

Network configuration in Solaris 11 is quite different from what it was in earlier Solaris releases (including Solaris Express), and many administrators may be taken by surprise. Some of these changes were introduced in the corresponding OpenSolaris projects, such as Crossbow, and may already be familiar to many of us. To sum things up, the major differences are the following:
  • Network configuration is now managed by a profile.
  • The dladm command now centralizes the configuration of layer 2 datalinks: many tasks performed by the ifconfig command on previous Solaris releases must now be performed using the dladm command.
  • Datalink names are no longer bound to their hardware driver names.
  • IP interfaces at layer 3 are configured using the ipadm command: the venerable ifconfig command has been deprecated in the Solaris 11 release.
  • IP network multipathing (IPMP) groups are now implemented as IP interfaces and, as such, are configured with the ipadm command.
  • The new ipmpstat command has been introduced to gather statistics about IPMP groups.
  • Network virtualization has been implemented at the network device level.

The Solaris 11 Network Stack

The new Solaris 11 network stack is similar to Solaris 10's. Yet, some improvements have been introduced that administrators are simply going to love.
In the new network stack, the software layer has been decoupled from the hardware layer, which means that:
  • The network configuration of a system (or a zone) is insulated from the hardware it's running upon. 
  • As already stated, datalink names can be customized.
  • Many network abstractions (such as VNICs) are managed in the datalink layer: this means that all of the datalink configurations can be centrally managed with one administrative interface.
On Solaris 11, then, datalinks aren't named after the underlying physical devices; by default, they follow the netN scheme, where N is a 0-based integer index. This apparently minor modification has a very important consequence: if you modify the underlying hardware, a network configuration may still be valid as long as the datalink name is left unchanged. This is really handy, for example:
  • If the underlying hardware of a box changes.
  • If you migrate zones across systems.
  • If you write generic configurations for a wide set of boxes.
The mapping between a datalink and the underlying physical device can be inspected with the dladm command:

$ dladm show-phys
LINK  MEDIA     STATE  SPEED  DUPLEX  DEVICE
net0  Ethernet  up     1000   full    e1000g0
net1  Ethernet  up     1000   full    e1000g1
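
Since names are just labels on top of the physical devices, a datalink can also be given a more meaningful name with dladm. The following is just a sketch: web0 is a hypothetical name, and a link should not be in active use while it's being renamed.

# dladm rename-link net1 web0
# dladm show-phys web0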

Network Auto-Magic (NWAM)

Long-time users of older Solaris Express releases will remember the introduction of the Network Auto-Magic (NWAM) feature into the operating system. NWAM automates the basic network configuration of a Solaris box. NWAM in Solaris 11 has been greatly enhanced and now supports the following concepts:
  • NCPs (Network Configuration Profiles).
  • Locations.
An NCP is an administrative unit that specifies the configuration of the components of the network setup, such as physical links and IP interfaces. An NCP is itself made up of NCUs (Network Configuration Units), each representing the configuration of a physical link or interface.

A Location profile is another administrative unit that lets the administrator specify:
  • The conditions under which a profile should be activated.
  • The naming service configuration.
  • The domain name.
  • The IP filter rules.
  • The IPSec policy.
At any given time, only one NCP and one Location profile can be active on a Solaris system.

NWAM is handy when a system's network configuration changes often: in those cases, an administrator can encapsulate the different required configurations in profiles (NCPs and Location profiles) and activate them when needed.
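
To check which profiles exist on a system and which ones are currently active, you can use the netadm command: netadm list shows every NCP, NCU and Location known to NWAM along with its state, while the -p option restricts the output to a given profile type (loc, in the example below):

$ netadm list
$ netadm list -p loc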

If you're using the Solaris 11 desktop, you can use the Network Preferences application (which can be found in the System/Administration menu) to quickly build NCPs and Location profiles.

Network Preferences

In the following sections we will use some NWAM administrative commands, but we won't dig any deeper into the subject: NWAM administration will be the topic of another post.

Configuring the Network

Depending on how a new Solaris 11 installation has been performed, your initial network configuration may differ. If you've installed from the Live CD, the Automatic NCP and the Automatic Location profile are active. These profiles are pretty simple: they configure every IP interface and the name service using DHCP, leaving any other configuration option (IP filters, IPSec, etc.) disabled.

If you're using Solaris on your PC this configuration may be good for you but chances are you might be installing some server that requires a less trivial network configuration.

Creating an NCP profile
The first thing to do is to create a new NCP:

$ netcfg create ncp datacenter

The datacenter NCP will be the container of our configuration, and we will add to it an NCU for every link and IP interface we're going to configure.

# netcfg
netcfg> select ncp datacenter
netcfg:ncp:datacenter> create ncu phys net1
Created ncu 'net1'.  Walking properties ...
activation-mode (manual) [manual|prioritized]> 
link-mac-addr> 
link-autopush> 
link-mtu> 
netcfg:ncp:datacenter:ncu:net1> end
Committed changes
netcfg:ncp:datacenter> create ncu ip net1
Created ncu 'net1'.  Walking properties ...
ip-version (ipv4,ipv6) [ipv4|ipv6]> ipv4
ipv4-addrsrc (dhcp) [dhcp|static]> static
ipv4-addr> 192.168.1.53
ipv4-default-route> 192.168.1.1
netcfg:ncp:datacenter:ncu:net1> end
Committed changes
netcfg:ncp:datacenter> exit

With the netcfg command we created an NCP with the following characteristics:

  • It has an NCU for a physical link (net1). This NCU has been configured with default values for all of its properties (such as MAC address and MTU).
  • It has an NCU for an IP interface (net1). This NCU has been configured with a static IPv4 address and a default route.
If you activate this profile, your system will reconfigure the network according to the settings of this NCP:

# netadm enable -p ncp datacenter
Enabling ncp 'datacenter'

If we now check the IP interfaces, we can see that they've been configured according to the above-mentioned NCUs: the net1 IP interface is up, while the net0 interface has disappeared.

# ipadm show-if
IFNAME  CLASS     STATE  ACTIVE OVER
lo0     loopback  ok     yes    --
net1    ip        ok     yes    --

If we check the IP addresses currently in use, the ipadm command confirms that only net1 has been assigned an address, which is the static address we configured in the NCU. Again, net0 has disappeared.

# ipadm show-addr
ADDROBJ  TYPE    STATE  ADDR
lo0/v4   static  ok     127.0.0.1/8
net1/_a  static  ok     192.168.1.53/24
lo0/v6   static  ok     ::1/128

If we now check the state of the datalinks, we can see that net0 is in the unknown state while net1 is up.

# dladm show-phys
LINK  MEDIA     STATE    SPEED  DUPLEX  DEVICE
net0  Ethernet  unknown  1000   full    e1000g0
net1  Ethernet  up       1000   full    e1000g1

If we wanted to add both the net0 datalink and IP interface to the profile, we could simply modify it and create the corresponding NCUs.
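
A sketch of such a session follows; the property walks are abridged, and we assume net0 should simply be configured via DHCP:

# netcfg
netcfg> select ncp datacenter
netcfg:ncp:datacenter> create ncu phys net0
...
netcfg:ncp:datacenter:ncu:net0> end
netcfg:ncp:datacenter> create ncu ip net0
...
ipv4-addrsrc (dhcp) [dhcp|static]> dhcp
netcfg:ncp:datacenter:ncu:net0> end
netcfg:ncp:datacenter> exit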

If we now try to resolve a name, however, we discover that it's not going to work. If you remember, we're still using the Automatic Location profile, which configures the name resolver using DHCP. In this case, however, DHCP isn't being used, so the resolver cannot resolve any name.

What we need now is a corresponding Location profile.

Creating a Location Profile
To configure the resolver settings, we can now create a new location profile, using once more the netcfg command:

netcfg> create loc datacenter
Created loc 'datacenter'.  Walking properties ...
activation-mode (manual) [manual|conditional-any|conditional-all]> 
nameservices (dns) [dns|files|nis|ldap]> 
nameservices-config-file ("/etc/nsswitch.dns")> 
dns-nameservice-configsrc (dhcp) [manual|dhcp]> manual
dns-nameservice-domain> 
dns-nameservice-servers> 192.168.1.1
dns-nameservice-search> yourdomain.com
dns-nameservice-sortlist> 
dns-nameservice-options> 
nfsv4-domain> 
ipfilter-config-file> 
ipfilter-v6-config-file> 
ipnat-config-file> 
ippool-config-file> 
ike-config-file> 
ipsecpolicy-config-file> 
netcfg:loc:datacenter> 
netcfg:loc:datacenter> end
Committed changes
netcfg> end

As soon as we enable the newly created location profile, the resolver is going to use the configured settings and it's just going to work:

$ netadm enable -p loc datacenter
Enabling loc 'datacenter'

$ nslookup www.oracle.com
Server: 192.168.1.1
Address: 192.168.1.1#53

Non-authoritative answer:
www.oracle.com canonical name = www.oracle.com.edgekey.net.
www.oracle.com.edgekey.net canonical name = e4606.b.akamaiedge.net.
Name: e4606.b.akamaiedge.net
Address: 2.20.190.174

Conclusion

As you can see, configuring the basic network settings on a Solaris 11 system is clean and easy. The new administrative interface lets you easily define, store, and activate on demand multiple network configurations for your system, without the need to write and maintain multiple copies of the old-style Solaris network configuration files.

Monday, 15 November 2010

Upgrading OpenSolaris to Oracle Solaris 11 Express

Today Oracle released Solaris 11 Express and, as described in the Getting Started Guide, there are plenty of installation options:
  • An interactive GUI installer in a LiveCD.
  • A text installer.
  • An automated installer.
  • An upgrade path from OpenSolaris.

Yes: if you're running some OpenSolaris machines while waiting for the release that never came, here's the option for you. The upgrade instructions are detailed in the Solaris 11 Express Release Notes:
  • If your preferred publisher is not opensolaris.org (release), set it:

# pkg set-publisher -P -O http://pkg.opensolaris.org/release/ opensolaris.org

  • Perform an image-update:

# pkg image-update

  • Reboot into the new boot environment.
  • Set the new publishers:

# pkg set-publisher --non-sticky opensolaris.org
# pkg set-publisher --non-sticky extra
# pkg set-publisher -P -g http://pkg.oracle.com/solaris/release/ solaris

  • Read the license:

# pkg image-update 2>&1 | less

  • If you accept the license, perform the last image-update:

# pkg image-update --accept

  • Reboot into your brand new Oracle Solaris 11 Express boot environment
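
Before that final reboot, you may want to double check which boot environment will be activated: beadm list shows the available boot environments, and the one flagged "R" is the one that will be active on reboot. This is just a sanity check, not part of the official procedure:

# beadm list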

Oracle Releases Oracle Solaris 11 Express (2010.11)

More than one year after the latest and last OpenSolaris release (2009.06), Oracle has finally released a new snapshot of the next generation Solaris 11 Operating System: Solaris 11 Express (snv_151a).

Solaris 11 Express is now available as:
  • A text installer: this installer does not install any window manager, but a desktop can easily be added later using the Solaris package manager (pkg).
  • A LiveCD with a GUI installer (which installs the GNOME Desktop Environment).
  • An automated installer for network deployments.
  • USB install images.

If you want to discover what's new in this release, have a look at this presentation. The most important features I was waiting for are the following (a quick ZFS example follows the list):
  • Boot Environments.
  • Fast Reboot.
  • IPS packaging system.
  • Boot Environments for Zones.
  • ZFS as the default file system.
  • ZFS deduplication.
  • ZFS diff.
  • ZFS dataset encryption.
  • Network virtualization (Project Crossbow).
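
As a quick taste of the new ZFS features, both deduplication and diff are exposed as ordinary zfs operations; the dataset and snapshot names below are just examples:

# zfs set dedup=on rpool/export/projects
# zfs diff rpool/export/projects@yesterday rpool/export/projects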

Sunday, 14 November 2010

OpenSolaris (and OpenIndiana) Spends 50% of CPU Time in Kernel

A couple of days ago my client decided to prepare some new Java EE development environments and, when asked which OS to choose, I suggested he give Solaris a try: since his production servers run Solaris 10, he would benefit from a more homogeneous set of environments.

We installed a couple of test machines, one with Solaris 10 and another with OpenSolaris 2009.06, and we began installing the development environments and the required runtime components. The installation packages were SVR4: installation was straightforward on Solaris 10, while on OpenSolaris we had to resolve a couple of glitches. After a couple of days, test users were inclined towards OpenSolaris, mostly because of its newer desktop environment, so we installed the remaining machines and started upgrading OpenSolaris to the latest dev release (b134).

Reduced Performance: CPU Time in Kernel When Idle 

The latest OpenSolaris dev release (b134) has some known issues I wasn't concerned about, since I had already fought them in the past and they can easily be resolved.

The surprise was discovering that all of the upgraded machines were affected by another problem: as soon as users rebooted into their b134 boot environment, the performance of the machine was noticeably worse than with the older (b111) boot environment.

prstat was showing no misbehaving process, while vmstat indicated that the system was spending a constant 50% of its time in the kernel. A quick search easily turned up the relevant bug report.


Repeating the steps outlined in the bug discussion confirmed that we were hitting the same bug. We thus disabled cpupm in /etc/power.conf and the problem disappeared.
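
For reference, this is roughly what the fix looks like: edit /etc/power.conf so that the cpupm line reads "cpupm disable", then ask the power management framework to reread its configuration with pmconfig. The grep below just shows what the edited line should look like:

# grep cpupm /etc/power.conf
cpupm disable
# pmconfig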

Upgrading to OpenIndiana

Although the bug is still listed as ACCEPTED, we decided to give OpenIndiana a try and upgraded a machine following the upgrade path from OpenSolaris b134. The upgrade went smoothly and in no time we were rebooting into OpenIndiana b147.

The cpupm bug is still there, though. Nevertheless, it's a great opportunity for my client to test drive OpenIndiana and decide if it fits his needs. Nowadays, users will notice almost no difference between OpenSolaris and OpenIndiana (except for the branding). As time goes by, we'll discover if and when Oracle will put sources back into OpenSolaris, or whether OpenIndiana is destined to diverge from its step-brother.



Sunday, 3 October 2010

Some Reasons Why Solaris Is a Great Java Development Platform

Some days ago I posted "The Death of OpenSolaris: Choosing an OS for a Java Developer", in which I stated that Solaris is a great platform for a Java developer. The point of that post was simply to wonder which Solaris version I'd use after the demise of OpenSolaris. What the post failed to clarify, as Neil's comment made me realize, were the reasons why you should choose Solaris as your development platform. I decided to write this follow-up to quickly summarize my favorite ones, introducing some use cases where such technologies come in handy.

Software Availability

Although Solaris continues to be a niche OS (as many other platforms are, anyway), in the last few years Sun and the community did an excellent job of promoting it as a desktop alternative for developers. There was even a specific distribution for developers, Solaris Express Developer Edition; it was discontinued, and there really is no need for it nowadays anyway. Recent Solaris distributions (such as SXCE, OpenSolaris, and OpenIndiana) include (either bundled or in the official package repository):
  • Databases (MySQL, PostgreSQL).
  • Web Servers (Apache, Java Enterprise System Web Server, etc.).
  • Application servers (Glassfish).
  • The SAMP stack (Solaris + Apache + MySQL + PHP).
  • IDEs (NetBeans, Eclipse).
  • Support for other popular languages (Ruby, Groovy, etc.).
  • Identity management (LDAP, Java Enterprise System Identity Server).

Solaris is also a platform of choice in the enterprise, so common enterprise software packages are supported and you, as a Java developer or Java architect, won't miss the pieces you need to build your development environment. The very basic software packages I often need as a Java developer are:
  • Oracle RDBMS.
  • Oracle WebLogic Application Server.
  • IBM WebSphere Application Server.
  • JBoss Application Server.

Solaris' Technologies

Solaris has some unique technologies that other UNIX (and UNIX-like) systems used as development platforms either lack or have ported from Solaris. What's important here is not the technologies on their own, or technologies that are helpful only in big enterprise environments, but the fact that:
  • They're pretty well integrated in Solaris and are built to take advantage of each other.
  • There are common use cases in which these technologies are really helpful to a developer.

Each of them would deserve several posts of its own; here, however, I'll try to give some concise examples.

Solaris Service Management Facility

Although this technology is probably most useful to a system administrator, as a developer I have often taken advantage of it. SMF is a framework that provides a unified model for services and service management. The basic recipe only needs an XML descriptor for a service. SMF lets you:
  • Define a service: startup script locations, parameters and semantics.
  • Establish dependencies between services:
    • Services and service instances may depend on other service instances.
    • Service startup is performed in parallel, respecting service dependencies.
  • Enhance security with fine-grained, role-based access control:
    • A service can be assigned only the minimum set of privileges it needs to run.
    • Service management can be delegated to non-root users using Solaris RBAC (Role-Based Access Control).
  • Control service health:
    • Services are automatically restarted.
    • Service health is enhanced by cooperation with the Solaris Fault Manager, which prevents service degradation when hardware failures occur.
  • Wrap inetd services: SMF automatically wraps inetd services.

A Typical Use Case

Every software package I use has its own SMF descriptor (either provided with the package or defined by me) and it dramatically reduces the time I need to set up a development machine. In the case of WebSphere Application Server, for example, I have separate service instances for:
  • WebSphere IHS.
  • WebSphere Application Server.
  • WebSphere Application Server DMGR.
  • WebSphere Application Server cluster nodes.

Dependencies are defined between them, and I can start the required WebSphere services with a single command:

svcadm enable [websphere-service-name]

and SMF will take care of everything.

The usage pattern for SMF can be taken further. Let's suppose you're working on one or more projects, each of which requires a distinct set of running services. What usually happens is one of the following:
  • You install them all and let them run.
  • You install them all and start and stop them manually when you switch projects.

Resources are always scarce for developers, and some are paranoid about conserving them. With SMF you can:
  • Define an SMF service for each of your projects.
  • For every project, define dependencies on the services you need.

This way, at a minimum, you can start and shut down, with a single command, every service you need for a specific project (see the sketch after this list). No more:
  • Custom shell scripts for every service.
  • Custom configuration entries for inetd services (such as Subversion, Apache, etc.)
  • Specific OS customization.
  • Running services when you don't need them, wasting resources you could use otherwise.
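
As a sketch, assuming you have defined a hypothetical umbrella service svc:/site/dev/project-foo whose dependencies are the database, the application server and the other services a given project needs, you can inspect and bring up the whole stack like this (svcadm enable -r also enables the dependencies, recursively):

$ svcs -d svc:/site/dev/project-foo:default
# svcadm enable -r svc:/site/dev/project-foo:default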

Examples of SMF service manifest customization can be found in the following posts:

ZFS

The ZFS filesystem is unique as far as flexibility and ease of use are concerned. With an incredibly lean set of commands, you can:
  • Create file systems on the fly.
  • Snapshot file systems on the fly.
  • Clone file systems on the fly with almost no space overhead.

There's a huge literature about ZFS, so I'll limit myself to describing my favorite use cases.

Use case: Multiplexing Your Development Environment.

Software installations are just the beginning of your user experience. Often, we spend time:
  • Configuring our environments.
  • Fine-tuning them.
  • Defining the set of additional libraries we need.
  • Defining the set of server resources (JDBC, JMS, etc.) our applications use.

And so on. The list is endless.

Sometimes it's necessary to prepare different environments for different projects or different development stages of the same application. Instead of losing time and resources building different environments, I usually proceed as follows:
  • Install and configure my environment.
  • Make a ZFS snapshot of it.
  • Make a ZFS clone of it for every additional setup I need.

Oracle JDeveloper is a good example of an application I often clone. JDeveloper is fundamentally a single-user environment, despite adopting the common approach of using a per-user configuration directory in the user's home directory. Instead of fiddling with scripts to set per-user configuration parameters, I just install it once, snapshot its installation directory and make a ZFS clone, one per environment. I use several clones of the JDeveloper environment myself, in my user home directory.
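
A minimal sketch of the snapshot-and-clone cycle follows; the dataset names are just examples and must be adapted to your pool layout:

# zfs snapshot rpool/export/home/dev/jdeveloper@pristine
# zfs clone rpool/export/home/dev/jdeveloper@pristine rpool/export/home/dev/jdev-projectA
# zfs clone rpool/export/home/dev/jdeveloper@pristine rpool/export/home/dev/jdev-projectB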

The power of ZFS clones can also be used by the Zones infrastructure, as we'll see in the following section, thus further enhancing its usefulness. Cloning a ZFS filesystem is also advantageous when dealing with big installations such as the disk images of your favorite virtualization technologies.

Additional posts I wrote about ZFS that could clarify some of its use cases are:

Containers and Other Virtualization Technologies

I consider Solaris a superior desktop virtualization platform. Once again, with a couple of commands you can easily create a virtualized Solaris instance (a Zone). The Zones infrastructure is ZFS-aware and can take advantage of it.

Zones can be configured with a command line interface to their XML configuration files. Creating a zone is straightforward and, since they're a lightweight technology, you can create as many zones as you need. If you're using ZFS, the process of cloning a zone is incredibly simple and fast.
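
A minimal sketch of creating and booting a zone follows; the zone name, the zonepath and the final console login are just examples, and on OpenSolaris-based releases the zonepath should live on ZFS so that cloning comes for free:

# zonecfg -z appsrv01 'create; set zonepath=/zones/appsrv01; set autoboot=true; commit'
# zoneadm -z appsrv01 install
# zoneadm -z appsrv01 boot
# zlogin -C appsrv01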

Use Case: Clustering an Application Server

During the development of your Java EE application you will typically need an instance of one (or more) of the following:
  • An application server.
  • A web server.
  • A database.
  • A user registry.

It's also desirable to have them running in isolated environments so that you can mimic the expected production configuration. With zones it's easy: just create as many zones as you need, and each one of them will behave as a separate Solaris instance. Every zone will have, for example:
  • Its own network card(s) and IP configuration.
  • Its own users, groups, roles and security policies.
  • Its own services.

Instead of installing and configuring an environment multiple times, you will prepare "master" zones with the services you need. I've got a "master" zone for every one of the following:
  • WebSphere Application Server.
  • WebLogic Application Server.
  • Oracle DB.
  • MySQL DB.
  • LDAP directory.

and so forth. With one simple command (zoneadm clone [-m copy] [-s zfs_snapshot] source_zone) you'll end up with a brand new working environment in a matter of minutes.
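
A sketch of the cloning procedure follows, assuming a hypothetical was-master zone; remember that the source zone must be halted and that network settings (the IP address, in particular) must be adapted in the new zone's configuration:

# zonecfg -z was-master export | sed 's/was-master/was-node2/g' | zonecfg -z was-node2
# zoneadm -z was-node2 clone was-master
# zoneadm -z was-node2 boot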

Use Case: VirtualBox and ZFS

Sometimes you'd rather work on a virtualized instance of some other OS, such as GNU/Linux, FreeBSD or Windows. Solaris is a great VirtualBox host, and the power of ZFS will let you:
  • Create "master" images for every OS or every "OS role" you need.
  • Clone them on the fly to create a brand new virtual OS image.

In my case, I've got:
  • A master Windows 7 client with Visual Studio for .NET development.
  • A master Windows Server 2008.
  • A master Windows Server 2008 (a clone of the previous one) with SQL Server 2008.
  • A master Debian GNU/Linux.

Every time I need a new instance I just have to clone the disk image: in a matter of seconds I've got the environment I need. Not only am I saving precious time, I'm also saving a vast amount of disk space. If I had to store all of the images (and zones) I use without ZFS, I'd need at least four times as much disk space as I've got.

Use Case: A Virtualized Network Stack

Solaris provides pretty powerful network virtualization capabilities. You can, for example, create as many virtual NICs as you need and use them independently, either in Solaris Zones or as network cards for other virtualization technologies (such as VirtualBox). Virtual NICs can be interconnected with virtual switches (etherstubs), enabling you to create "networks in a box." Not only can you use virtualized instances to mimic your production environment: you can also create a virtualized network to emulate the complex network policies your environment might need.
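
A sketch of building such a "network in a box" with Crossbow follows; stub0 and the VNIC names are arbitrary, and each VNIC can then be assigned to a zone or to a VirtualBox guest:

# dladm create-etherstub stub0
# dladm create-vnic -l stub0 vnic0
# dladm create-vnic -l stub0 vnic1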

If you need to test an environment whose configuration would be impossible to replicate without additional physical machines, that's where virtualization technologies (such as Zones or VirtualBox) and the virtualized network stack come in handy. My development environment for a project I'm working on is made up of:
  • Two zones with two load balanced IBM IHS instances.
  • A zone with an LDAP directory.
  • Two zones with two clustered instances of IBM WebSphere Application Server.
  • A Zone with an instance of IBM WebSphere DMGR.

With Solaris, I can replicate the production environment on my box and respect each and every network configuration we use. Without these technologies, it would have been much harder to accomplish this goal, or I would have ended up with custom configurations (for example, to avoid port clashes). In any case, I'd lose much more time on the administration and configuration of such environments if zones weren't so easy to use.

DTrace

The power of DTrace is extremely easy to explain to a developer; at the same time, it's difficult to grasp its usefulness without trying it yourself. DTrace on Solaris provides tens of thousands of probes out of the box, and others can be created on the fly. These probes give you an extremely powerful means of troubleshooting problems in both your applications and the underlying operating system. To use the probes you write scripts in the D language. Fortunately, this language is pretty simple by design and you can write powerful D scripts in a few lines of code.
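
As a taste of how compact D can be, the following one-liner (just a sketch: the predicate and the probe names can be adapted at will) prints every file opened by Java processes on the system:

# dtrace -n 'syscall::open*:entry /execname == "java"/ { printf("%s", copyinstr(arg0)); }'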

DTrace is unobtrusive and lets you troubleshoot problems immediately, without modifying your application, even in a production environment. Some IDEs, such as NetBeans, have powerful plugins that let you write D scripts and see the data collected by the probes in beautiful graphs.

As a developer, I have valued DTrace's usefulness more than once. Instead of troubleshooting problems by digging into the source code and introducing additional instrumentation (even in cases where aspects come in handy), I could use a D script to observe the application from the outside and quickly collect data to help me determine where the problem might be.

In some cases, moreover, you may find yourself dealing with situations in which no source code is available. I could quickly troubleshoot a problem I was having with WebSphere Application Server with a D script, instead of relying on WebSphere's tracing facilities and the task of interpreting log files.

Conclusion

So much for an introductory post. The possibility of building a development environment as close as possible to your target environment is a "must" for any development platform. Additionally, I believe that working on an environment as close as possible to the production one not only gives you additional value and insights during an application's development stage, but should also be considered a mandatory requirement for every project we're involved in. Solaris provides all of the tools a developer needs to accomplish this goal.

Solaris is a complex enterprise operating system with many features you'll probably never use. Nevertheless, there's a use case for many others of them, as I tried to point out in this post. Since some of these technologies were released under an open source license, they are also available on other operating systems: ZFS is available on FreeBSD and there is a community effort to port it to OS X; DTrace is available on OS X, Linux and FreeBSD.

The "Solaris advantage" is that all of these technologies are highly integrated and take advantage of each other. The result is worth more than the sum of them. These technologies have got a very polished and easy to use administrative interfaces: when time is important, "How you do it" is fundamental.

I hope that these insights might help you understand if and when the Solaris operating system might be useful to you. Even if you consider that it's not, I suggest you give it a try anyway: it's always good to add new technologies to your tool box.

Wednesday, 29 September 2010

The Death of OpenSolaris: Choosing an OS for a Java Developer

A Bit of History: The Struggles of OpenSolaris

This is no news: you probably know all about it.

As a long time Solaris user, the recent years have been full of good news for me.

I remember starting with GNU/Linux at home to have "something similar" to the Solaris workstations I used at work. It was the time when software would most likely compile on Solaris rather than on Linux.

Years later I bought my first Sun workstation: it was the time when trying to compile, on Solaris, packages that would supposedly build on any POSIX system was a pain. Still, I continued to regard Solaris as a stable platform and kept on using it for my work duties, such as Java programming.

Then came Solaris 10 and all of its wonderful technologies such as ZFS, Zones and DTrace, just to cite a few. With it came the Solaris Express distributions which, at last, filled a long-standing gap between Solaris and other operating systems, providing us with a pretty modern desktop environment.

In late 2008 came the first OpenSolaris distribution. I installed it, played with it, but kept on using SXCE for my workstations. The main reason was compatibility with many Sun packages, such as the Java Enterprise System or the Sun Ray Server Software, that had more than one glitch on OpenSolaris.

When SXCE was discontinued, I waited for the 2010.xx OpenSolaris release to upgrade my systems. Unfortunately, that release will never see the light.

The Oracle Leaked Memo (the dignifying uppercase is a must, given Oracle's prolonged silence on the subject) shed some light on Oracle's plans for Solaris proper and OpenSolaris. Part of the "good news" is that the Solaris Express program has been resurrected and the first binary distribution is expected later this year.

The bad news is that the code, or at least the parts of it that will be released under an open source license, will be published only after the corresponding full Solaris operating system releases. Basically, our privileged observation point over the development of the operating system has been shut down.

Lots of ink has been spilled since the Leaked Memo, and plenty of information, discussions and flame wars are going on in the blogosphere. I'm not an authoritative source on the subject, and it's not even clear to me what I'm going to do now.

Benefits of Solaris for a Java Developer

Solaris has been my operating system of choice since before I started working in the IT industry. As a student, I grew up with Solaris at my university's data center, and the Slackware I used at home seemed like a kid's toy compared to it. After graduating, I started working as a design engineer for a leading microprocessor producer; needless to say, Solaris was the platform we ran our design software on. Then I moved to a consulting firm and started my career as a Java architect.

Solaris was, and is, the platform of choice for most of the clients I've been working for. Even today, the application servers, the cluster software, the databases, and most of the infrastructure used by my clients run on Solaris. It has always seemed a sound choice to me, then, to develop software on the same platform that will run it in production.

IDEs, Tools and Runtimes

As a Java developer, I can run all of the tools I need on a supported platform. My favorite IDEs (NetBeans and JDeveloper), the application servers my clients use (WebLogic and WebSphere, mostly), the databases used by my applications (MySQL, Oracle RDBMS, PostgreSQL): all of them run and are supported on Solaris. Some of them are even bundled with it or readily available from Sun-sponsored package repositories. The Eclipse platform, to cite a widely used Java IDE, is available in the OpenSolaris IPS repository, too.

Solaris Technologies

Solaris 10 also integrates DTrace, a powerful, unobtrusive framework that allows you to observe and troubleshoot application and OS problems in real time, even on production systems, with almost no overhead. DTrace has allowed us to diagnose strange production quirks with no downtime: once you've tried DTrace and the D language, there's no going back to "just" a debugger, even in the development stages of your project.

Other kinds of problems do not show up in your debugger, or are extremely hard to catch; it might be the case of network or file system problems. That's where DTrace comes in handy: it lets you observe in incredibly high detail what's going on in your application and in the kernel of the operating system, if it's necessary to dig so deep.

Solaris Virtualization Technologies

Solaris is also an ideal virtualization host. Solaris can "virtualize itself" with Containers, Zones and Logical Domains: you can start a Zone in no time (and with almost no space overhead), assign a resource cap to it, and also build a virtualized network in a box to simulate a complex network environment.

One of the problems I have encountered during the development of big enterprise systems is that the development environment, and sometimes even the integration environment, is very different from the production one. It surely is a methodology problem; nevertheless, developers have few weapons to counteract it. For example, applications that appear to run fine on a single node may not run correctly on a server cluster, or may scale badly.

The longer you wait to catch a bug, the greater the impact of its fix. That's why in my development team, for example, we use Solaris Zones to simulate a network cluster of IBM WebSphere Application Servers and a DB cluster. All of them run in completely isolated Zones on one physical machine and communicate over a virtual network with virtual etherstubs (a sort of network switch), VLANs and routing rules. This environment lets us simulate exactly how the software will behave in the production system. Without a flexible and lightweight virtualization technology it would have been much more difficult and expensive to prepare a similar environment.

And if you (still) need to run other platforms, you can use Xen or VirtualBox to run, for example, your favorite Linux distro, Windows, or *BSD.

Summarizing

Enumerating the advantages of Solaris is difficult in such a reduced space; however, I'll try:
  • It's a commercially supported operating system: support is optional, since Solaris is free for development purposes; nonetheless, it's an important point to take into account.
  • It's (very) well documented: there's plenty of official and unofficial documentation.
  • It's easy to administer: Solaris is pretty easy to administer, even if you're not a seasoned system administrator.
  • It's a UNIX system, with all of its bells and whistles.
  • It's a great virtualization platform.
  • It has some unique technologies that add much value to its offering, such as ZFS and DTrace.


If you're a Java developer and haven't given Solaris a try, I strongly suggest you do. Maybe you'll start to benefit from other Solaris 10 technologies such as Zones and ZFS, even for running your home file or media server.

Complaints

I often hear complaints about Solaris coming from different sources and with the most imaginative arguments: proprietary, closed, old, difficult to use. I usually answer by inviting users to try it and see for themselves before judging it (if that's the case). Most of the time I'm not surprised to discover that the complaining party has had minimal or no exposure to Solaris.

Also, I'd like to point out that no OS I've tried is a Swiss Army knife. Solaris is a server-oriented OS with a good desktop, but it's not comparable with other operating systems for such a use. So: no comparison with Linux, please. It would be as unjust as comparing Linux and Mac OS X for the average home user. ;)

Alternatives

Since Java "runs anywhere", there's plenty of choice for a Java developer.

Since I own a laptop with Mac OS X, I've built a small development environment on it with all of the tools I need. Mac OS X is a great operating system that comes with many fancy features out of the box and, although it has some idiosyncrasies with Java (read: you have to use the JVM shipped by Apple), it's a good OS for a Java developer. Since the Mac OS X hype began, there's been plenty of packages for it and a big ecosystem which is still growing. Still, many software packages used in the enterprise aren't supported on Mac OS X. Since I prefer to have an environment as close as possible to the production one, I think that OS X is not the best choice for the average Java EE architect.

I've also been a hardcore Slackware and Debian user for a long time. An enterprise Java developer would miss nothing in a modern GNU/Linux distribution nowadays, and most of the software packages you'll find in the enterprise will run on your GNU/Linux distribution.

No need to talk about Windows, either.

So, why Solaris? Every OS has its own advantages and disadvantages; the point is simply to recognize them. Mac OS X, in my opinion, is the best OS for a home user, and I wouldn't trade it for Windows or Linux. But as far as my developer duties are concerned, every other OS just lacks the features and the stability that make Solaris great. ZFS, DTrace and Zones, for my use cases, are killer features.

What's Next?

You've decided to give Solaris a try, so: which distribution should you choose? I don't know.

Solaris Express/Oracle Solaris

I strongly suspect that my wait will be prolonged and that I will finally upgrade my machines as soon as Solaris Express is released. Upgrading to Solaris 10 09/10 is not possible for me, since I'm using some ZFS pools whose version is not yet supported by Solaris proper, but it is a sound choice for a starter.

The advantage I see in using one of these versions is the availability of optional support and the good level of integration with the most commonly used software packages that Oracle is likely to guarantee.

OpenIndiana

You should also know that the OpenSolaris sources have been (sort of) forked and two new projects have been born: Illumos and OpenIndiana. The projects were started by Nexenta employees and volunteers from the OpenSolaris community. The first project aims at maintaining the OpenSolaris code, including replacing the closed parts and any code that upstream might choose not to maintain. The OpenIndiana project aims at producing a binary distribution of the operating system built upon the Illumos source code. OpenIndiana will provide a really open source, binary-compatible alternative to Solaris and Solaris Express.

Sounds good, and I'll willingly support it. In the meantime I've installed OpenIndiana in a couple of virtual machines and the first impressions are very good. I suppose not enough time has passed yet for diverging changes to emerge.

If you prefer a more modern desktop with a recent GNOME interface, drop Solaris 10 and go for OpenIndiana, if you don't feel like waiting for Solaris Express. In any case, switching between the two shouldn't pose any problems. What's clear to me is that I won't use both operating systems: I'll have to make a choice.

Support Availability

As an enterprise user and a Java developer, I've always been more concerned about OS support and support for the packages I use than about eye candy, even at the cost of running a proprietary platform.

In conclusion: I'll wait for Solaris Express to be released, and only then will I decide which one to use between Oracle Solaris Express and OpenIndiana. My heart is betting on OpenIndiana; my brain is betting on Oracle Solaris Express and Solaris proper. Only time will tell which one is right (for me).

Follow-Up

A follow-up to this blog post is available at this address. In it I try to summarize some use cases in which the technologies introduced in this post are effective and add real value to your development duties.

I hope you enjoy it.



Tuesday, 28 September 2010

A Shell Script to Find and Remove the BOM Marker

Introduction

Have you ever seen characters like these while dumping the contents of some of your text files?

ï»¿

If you have, you've found a BOM! The BOM (Byte Order Mark) is a Unicode character with code point U+FEFF that specifies the endianness of a Unicode text stream.

Since Unicode characters can be encoded as multibyte sequences with a specific endianness, and since different architectures may adopt distinct endianness types, it's fundamental to signal the receiver about the endianness of the data stream being sent. Dealing with the BOM, then, is part of the game.

If you want to know more about when to use the BOM you can start by reading this official Unicode FAQ.

UTF-8

UTF-8 is one of the most widely used Unicode character encodings in software and protocols that have to deal with textual data streams. UTF-8 represents each Unicode character with a sequence of 1 to 4 octets. Each octet contains control bits that are used to identify the beginning and the length of an octet sequence; the Unicode code point is simply the concatenation of the non-control bits in the sequence. One of the advantages of UTF-8 is that it retains backwards compatibility with ASCII in the [0-127] range, since such characters are represented with the same octet in both encodings.

If you feel curious about how the UTF-8 encoding works, I've written an introductory post about it.

Common Problems

Because of its design, the UTF-8 encoding is not endianness-sensitive, and using the BOM with this encoding is discouraged by the Unicode standard. Unfortunately some common utilities, notably Microsoft Notepad, keep on adding a BOM to your UTF-8 files, thus breaking those applications that aren't prepared to deal with it.

Some programs could, for example, display the following characters at the beginning of your file:

ï»¿

A more serious problem is that a BOM will break a UNIX shell script interfering with the shebang (#!).

A Shell Script to Check for BOMs and Remove Them

The Byte Order Mark (BOM) is a Unicode character with code point U+FEFF. Its UTF-8 representation is the following sequence of 3 octets:

1110 1111 1011 1011 1011 1111
E    F    B    B    B    F

The quickest way I know of to process a text file and perform this operation is sed. The following syntax will instruct sed to remove the BOM from the first line of its input file:

sed '1 s/\xEF\xBB\xBF//' < input > output

A Warning for Solaris Users

I haven't (yet) found a way to correctly use a sed implementation bundled with Solaris 10 to perform this operation, using either /usr/bin/sed or /usr/xpg4/bin/sed. If you're a Solaris user, please consider installing GNU sed to use the following script.

The quickest way to install sed and a lot of other fancy Solaris packages is using Blastwave or OpenCSW. I've also written a post about loopback-mounting the Blastwave/OpenCSW installation directory in Solaris Zones to simplify Blastwave/OpenCSW software administration.

A Suggestion for Windows Users

If you want to execute this script in a Windows environment, you can install Cygwin. The base install, with bash and the core utilities, will be sufficient for this script to work in your Cygwin environment.

Source

This is the source code of a skeleton bash shell script that removes the BOM from its input files. The script supports recursive scanning of directories to "clean" an entire file system tree and a flag (-x) to avoid descending into file systems mounted elsewhere. The script uses temporary files while doing the conversion, and the original file will be overwritten only if the -d option is not specified.

#!/bin/bash

set -o nounset
set -o errexit

DELETE_ORIG=true
DELETE_FLAG=""
RECURSIVE=false
PROCESSING_FILES=false
SED_EXEC=sed
TMP_CMD="mktemp"
TMP_OPTS="--tmpdir="
XDEV=""

if [ $(uname) == "SunOS" ] ; then
  TMP_OPTS="-p "
  
  if [ -x /usr/gnu/bin/sed ] ; then
    echo "Using GNU sed..."
    SED_EXEC=/usr/gnu/bin/sed
  fi
  
fi

function usage() {
  echo "bom-remove [-dr] [-s sed-name] files..."
  echo ""
  echo "  -d    Do not overwrite original files and do not remove temp files."
  echo "  -r    Scan subdirectories."
  echo "  -s    Specify an alternate sed implementation."
  echo "  -x    Don't descend directories in other file systems."
}

function checkExecutable() {
  if ( ! which "$1" > /dev/null 2>&1 ); then
    echo "Cannot find executable:" $1
    exit 4
  fi
}

function parseArgs() {
  while getopts "dfrs:x" flag
  do
    case $flag in
      r) RECURSIVE=true ;;
      f) PROCESSING_FILES=true ;;
      s) SED_EXEC=$OPTARG ;;
      d) DELETE_ORIG=false ; DELETE_FLAG="-d" ;;
      x) XDEV="-xdev" ;;
      *) echo "Unknown parameter." ; usage ; exit 2 ;;
    esac
  done

  shift $(($OPTIND - 1))

  FILES=("$@")
  if [ ${#FILES[@]} -eq 0 ] ; then
    echo "No files specified. Exiting."
    exit 2
  fi

  if [ $RECURSIVE == true ]  && [ $PROCESSING_FILES == true ] ; then
    echo "Cannot use -r and -f at the same time."
    usage
    exit 1
  fi

  checkExecutable $SED_EXEC
  checkExecutable $TMP_CMD
}

function processFile() {
  TEMPFILENAME=$($TMP_CMD $TMP_OPTS$(dirname "$1"))
  echo "Processing $1 using temp file $TEMPFILENAME"

  cat "$1" | $SED_EXEC '1 s/\xEF\xBB\xBF//' > "$TEMPFILENAME"

  if [ $DELETE_ORIG == true ] ; then
    if [ ! -w "$1" ] ; then
      echo "$1 is not writable. Leaving tempfile."
    else
      echo "Removing temp file..."
      mv "$TEMPFILENAME" "$1"
    fi
  fi
}

function doJob() {
  # Check if the script has been called from the outside.
  if [ $PROCESSING_FILES == true ] ; then
    for i in "${FILES[@]}" ; do
      processFile "$i"
    done
  else
    # processing every file
    for i in "${FILES[@]}" ; do
      # checking if file or directory exist
      if [ ! -e "$i" ] ; then echo "File not found: $i. Skipping..." ; continue ; fi
      
      # if a parameter is a directory, process it recursively if RECURSIVE is set
      if [ -d "$i" ] ; then
        if [ $RECURSIVE == true ] ; then
          find "$i" $XDEV -type f -exec "$0" $DELETE_FLAG -f "{}" +
        else
          echo "$i is a directory. Skipping..."
        fi
      else
        processFile "$i"
      fi
    done
  fi
}

parseArgs "$@"
doJob

Examples

Assuming the script is in your $PATH and it's called bom-remove, you can "clean" a bunch of files by invoking it this way:

$ bom-remove file-to-clean ...

If you want to clean the files in an entire directory, you can use the following syntax:

$ bom-remove -r dir-to-clean

If your sed installation is not in your $PATH or you have to use an alternate version, you can invoke the script with the following syntax:

$ bom-remove -s path/to/sed file-to-clean

If you want to clean a directory in which other file systems might be mounted, you can use the -x option so that the script does not descend into them:

$ bom-remove -xr dir-to-clean

Next Steps

The most effective way to fight the BOM is to avoid spreading it. Microsoft Notepad, if there's anybody out there still using it, isn't the best tool to edit your UTF-8 files, so please avoid it.

However, should your file system be affected by the BOM disease, I hope this script will be a good starting point to build a BOM-cleaning solution for your site.

Enjoy!






Thursday, 9 September 2010

No news... good news? Solaris 10 licensing terms have changed

Long time no blog. Partly because I enjoyed a(n almost) relaxing summer. Partly because I was standing by, sad and astonished, at what was happening to Solaris and OpenSolaris in the final stages of the Oracle and Sun Microsystems merger.

There's no need for me to blog about the well-known changes that OpenSolaris and its communities underwent in the last months. I feel a little sadness, but things haven't changed so dramatically: the supported Sun OpenSolaris distribution has been obliterated in favor of... yet another Solaris Express. Back to the old days. Ben Rockwood has written a good piece on his blog in which he analyses the leaked Oracle memo and makes some insightful considerations; I agree with his analysis.

While waiting for Solaris 11 Express to see the light, I was thinking about upgrading my workstations to Solaris 10 09/10, which was released yesterday. As you'll notice when accepting the OTN license, the Solaris 10 licensing terms have changed.

Except for any included software package or file that is licensed to you by Oracle under different license terms, we grant you a perpetual (unless terminated as provided in this agreement), nonexclusive, nontransferable, limited License to use the Programs only for the purpose of developing, testing, prototyping and demonstrating your applications, and not for any other purpose.

After so many fears and speculations, that's good news.



Friday, 11 June 2010

VirtualBox as an Enterprise Server Virtualization Solution

Introduction

Some posts ago I quickly argued that, in some scenarios, VirtualBox might be used as a server virtualization platform instead of relying on more complex enterprise solutions such as Oracle VM or VMware. VirtualBox is a great Type 2 hypervisor that has been growing rapidly in the past few years and now supports a wide range of both host and guest operating systems. Although VirtualBox is the heart of the Sun/Oracle offering for desktop virtualization, and although Solaris comes with Xen as a Type 1 hypervisor, I argue that VirtualBox is a solution to seriously take into consideration, especially when using Solaris as the host operating system, since VirtualBox itself can leverage Solaris features such as:
  • ZFS.
  • Crossbow (network virtualization and resource control).
  • RBAC, Projects and Resource control.

Solaris comes with other virtualization technologies, such as Zones and Containers. If you need a Solaris instance, the quickest way to virtualize one is to create a zone; if you're using Solaris, then, you might want to consider Zones instead of a Type 1 hypervisor. Having said that, VirtualBox might help you in case you're running Zones alongside other guests: instead of dedicating some physical machines to zones and others to a Type 1 hypervisor such as Oracle VM or VMware (both based on Linux), you might want to consider OpenSolaris' Xen or VirtualBox.

OpenSolaris' Xen is a Type 1 hypervisor built on the Solaris kernel: as such, it virtualizes guest OSs alongside Solaris Zones on the same physical machine. VirtualBox, being a Type 2 hypervisor, can be executed on a Solaris host alongside Zones as well.

In this post we'll take a quick walkthrough describing how VirtualBox can be used in a Solaris environment as a server virtualization platform.

Installing VirtualBox

Installing VirtualBox on the Solaris Operating System is very easy. Download VirtualBox, gunzip and untar the distribution (please substitute [virtualbox] with the actual file name of the software bundle you downloaded):

$ gunzip [virtualbox].tar.gz
$ tar xf [virtualbox].tar

If you're upgrading VirtualBox you've got to remove the previous version before installing the new one:

# pkgrm SUNWvbox

Install the VirtualBox native packages:

# pkgadd -d ./[virtualbox].pkg

Clone a Virtual Machine with Solaris ZFS

After installing an OS instance, ZFS can help you save time and space with snapshots and clones. ZFS allows you to instantly snapshot a file system and, optionally, to clone it into a full read-write file system as well. This way, for example, you could:
  • Install a guest instance (such as a Debian Linux.)
  • Take a snapshot of the virtual machine.
  • Clone it as many times as you need it.

Not only will you spare precious storage space: you'll also be able to spin up a set of identical virtual machines in practically no time. If you need to upgrade your guest OS, you upgrade the initial image and then snapshot and clone it again. If you carefully plan and analyze your requirements in advance, ZFS snapshots and clones can be of real value for your virtual machine deployments.

In an older post I made a quick walkthrough of ZFS snapshots and clones.

Solaris Network Virtualization

One of the showstoppers that, years ago, would prevent me from using VirtualBox in a server environment was the lack of a network virtualization layer. Basically, you were left with unsuitable choices for configuring your guests' networks in a server environment:
  • NAT: NAT was neither flexible nor easy to administer. Since you were NAT-ting many guests onto the same physical cards, you would quickly find yourself in a "port hell."
  • Dedicated adapter: this is the most flexible option, obviously, but it has a major problem: network adapters come in finite numbers. You would encounter the same problem when configuring Solaris Zones as well.

The solution to all of these problems is called Crossbow. You can read a previous blog post to discover Solaris network virtualization and get started with it.

VirtualBox introduced a feature, called Bridged Networking, that lets guests use NICs (both physical and virtual) through a "net filter" driver. When using VirtualBox Bridged Networking with Crossbow, please take the following into account:
  • A Crossbow VNIC cannot be shared between guest instances.
  • A Crossbow VNIC and the guest network interface must have the same MAC address.

Since Crossbow lets you easily create as many virtual NICs as you need, the previous points aren't a real issue anyway.

After creating a VNIC for the exclusive use of a VirtualBox guest you won't even need to plumb it and bring it up: VirtualBox will do that for you.
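
As a sketch, creating a dedicated VNIC on top of a physical link and checking its MAC address (which must then be assigned to the guest's network interface, for example with the --macaddressN option of VBoxManage modifyvm) boils down to:

# dladm create-vnic -l e1000g0 vnic1
# dladm show-vnic vnic1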

Configuring Bridged Networking

To configure bridged networking over a VNIC for a VirtualBox guest you can use the VirtualBox UI or the VirtualBox command line utilities, such as VBoxManage:

$ VBoxManage modifyvm <uid|name>
  --nic<1-N> bridged
  --bridgeadapter<1-N> devicename 

Configuring SMP

VirtualBox introduced SMP support a few versions ago. That was a huge step forward: guests can now be assigned a number of CPUs to execute on. As usual, you can use either the VirtualBox UI or the CLIs to configure your guests. The following lines summarize the VBoxManage options related to CPU management:

$ VBoxManage modifyvm <uid|name>
  --cpus <number>
  --cpuidset <leaf> <eax> <ebx> <ecx> <edx>
  --cpuidremove <leaf>
  --cpuidremoveall
  --cpuhotplugging <on|off>
  --plugcpu <id>
  --unplugcpu <id>
  
Option names are self-explanatory. Nevertheless, if you need further information, please check VirtualBox official documentation.

Controlling VirtualBox Guest Resources with Solaris Resource Control

An interesting feature of Solaris is its resource control facility. You can, in fact, execute VirtualBox guests in a Solaris Project and apply fine-grained resource control policies to each of your running guests. That means, for example, that a VirtualBox guest with two CPUs can be executed in the context of a Solaris Project whose resource control policy limits its CPU usage (project.cpu-cap) to 150%: although the guest may use two CPUs concurrently, the total CPU time it may consume is capped at 150%.

To apply resource control policies to your guest one strategy could be the following:
  • Create a user for every set of guests that will be subject to a resource control policy.
  • Create a default project for each of these users and define the resource control policies you need to apply.
  • Execute the VirtualBox guests as the corresponding users.

This way, Solaris will automatically apply the default resource control policies to every process owned by such users, including the VirtualBox guest instances themselves.
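
A minimal sketch of this strategy, assuming a hypothetical vboxrun user whose guests should be capped at 150% CPU (default user projects follow the user.<username> naming convention), could be the following:

# useradd -m -d /export/home/vboxrun vboxrun
# projadd -U vboxrun -K "project.cpu-cap=(privileged,150,deny)" user.vboxrun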

For a walkthrough to get started with Solaris Projects and Resource Controls, you can read a previous blog post.

Controlling a VirtualBox Guest Remotely

To control a VirtualBox guest remotely you can use VirtualBox command line interfaces, such as VBoxManage. With VBoxManage, for example, you will be able to:
  • Create guest instances.
  • Modify guest instances.
  • Start, pause and shutdown guest instances.
  • Control the status of your instances.
  • Teleport instances on demand to another machine.
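
For instance, checking which guests are registered on the host and which ones are currently running is as simple as:

$ VBoxManage list vms
$ VBoxManage list runningvms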

Starting a VirtualBox Guest

To start a VirtualBox guest remotely you can use VBoxManage or the specific backend command. VBoxManage will start an instance with the following syntax:

$ VBoxManage startvm <uid|name>
  [--type gui|sdl|vrdp|headless]

VBoxManage startvm has been deprecated in favor of the specific backend commands. Since in a server environment you will probably launch guests with the headless backend, the suggested command is:

$ VBoxHeadless --startvm <uid|name>

Please take into account that VBoxHeadless will not return until the guest has terminated its execution. To start a guest with VBoxHeadless over a remote connection to your server, then, you should use nohup so that the guest does not terminate when your shell exits:

$ nohup VBoxHeadless --startvm <uid|name> &

What if my ssh session won't exit?

You might experience a strange issue when launching VBoxHeadless this way: the ssh session will seem to hang until the guest execution terminates. This issue is not caused by VBoxHeadless but by (Open)SSH's behavior; please read this post for an explanation. Meanwhile, the workarounds I'm aware of are the following: either invoke VBoxHeadless using /dev/null as standard input:

$ nohup VBoxHeadless --startvm <uid|name> < /dev/null &

or manually terminate the ssh session with the ~. escape sequence after issuing the exit command.

Accessing a Remote VirtualBox Guest with RDP

VirtualBox has a built-in RDP facility that lets you access a guest console remotely using the RDP protocol. If you start a headless guest, the VirtualBox RDP server is enabled by default. To access the instance remotely, then, a suitable client such as rdesktop (for UNIX systems) is sufficient.
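
As an example, assuming a headless guest running on a host named vbox-host (a placeholder) with the default VRDP port (3389), connecting from a UNIX client could be as simple as:

$ rdesktop -a 16 vbox-host:3389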

Stopping a VirtualBox Guest Remotely

To shut down a VirtualBox guest you can either:
  • Launch the shutdown sequence on the guest itself, which is the procedure I recommend.
  • Use VBoxManage controlvm to send a suitable signal to the guest, such as acpipowerbutton, acpisleepbutton or a hard poweroff:

$ VBoxManage controlvm <uid|name>
  pause|resume|poweroff|savestate|
    acpipowerbutton|acpisleepbutton

As outlined in the syntax of the preceding example, VirtualBox also lets you pause a virtual machine or even save its state to disk for a later quick resume.
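
For example, to ask a guest named "solaris-guest" (a placeholder) to shut down cleanly via ACPI, or to save its state for a later resume, you could issue:

$ VBoxManage controlvm "solaris-guest" acpipowerbutton
$ VBoxManage controlvm "solaris-guest" savestate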

Teleporting a VirtualBox Guest to Another Server

VirtualBox now supports teleporting of guest instances. Teleporting lets you move a running instance to another server with minimal service disruption. To teleport a (teleport-enabled) VirtualBox guest to another (VirtualBox-enabled) machine you can just issue the following command:

$ VBoxManage controlvm <uid|name> \
  teleport --host <name> --port <port> \
  --maxdowntime <ms> \
  --password <passwd>
Flush Requests

VirtualBox, by default, might ignore IDE/SATA FLUSH requests issued by its guests. This is an issue if you're using Solaris ZFS, which, by design, assumes that FLUSH requests are never ignored. In that case, just configure your virtual machine not to ignore such requests.

For IDE disks:

$ VBoxManage setextradata "name" \
  "VBoxInternal/Devices/piix3ide/0/LUN#[x]/Config/IgnoreFlush" 0

The x parameter is:

Value  Description
0      Primary master
1      Primary slave
2      Secondary master
3      Secondary slave


For SATA disks:

$ VBoxManage setextradata "name" \
  "VBoxInternal/Devices/ahci/0/LUN#[x]/Config/IgnoreFlush" 0

In this case the x parameter is just the disk number.

Next Step

As you can see, VirtualBox is a sophisticated piece of software that is now ready for basic enterprise server virtualization. This post just shows you the beginning, though: VirtualBox offers many other features I haven't covered (yet). The Solaris operating system, when used as a host, offers rock-solid enterprise services that will enhance your overall VirtualBox experience.

If you're planning to virtualize guest operating systems in your environment and your requirements fit this picture, I suggest you strongly consider using VirtualBox on a Solaris host.

If you already use Solaris, VirtualBox will live alongside other Solaris virtualization facilities such as Solaris Zones.









Getting Started with Solaris Network Virtualization ("Crossbow")

Solaris Network Virtualization

The aim of the OpenSolaris Crossbow project is to bring a flexible network virtualization and resource control layer to Solaris. A Crossbow-enabled version of Solaris lets the administrator create virtual NICs (and virtual switches) which, from the standpoint of a guest operating system or a Zone, are indistinguishable from physical NICs. You will be able to create as many NICs as your guests need and configure them independently. More information on Crossbow and the official documentation can be found on the project's homepage.

This post is just a quick walkthrough to get started with Solaris Network Virtualization capabilities.

Creating a VNIC

To create a VNIC on a Solaris host you can use the procedure described below. First, show the physical links and decide which one you'll use:

$ dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
e1000g0     phys      1500   up       --         --
vboxnet0    phys      1500   unknown  --         --

On this machine I only have one physical link, e1000g0. Create a VNIC on top of the physical NIC you chose:

# dladm create-vnic -l e1000g0 vnic1

Your VNIC is now created and you can use it with Solaris network monitoring and management tools:

$ dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
e1000g0     phys      1500   up       --         --
vboxnet0    phys      1500   unknown  --         --
vnic1       vnic      1500   up       --         e1000g0

Note that a random MAC address has been chosen for your VNIC:

$ dladm show-vnic
LINK         OVER         SPEED  MACADDRESS        MACADDRTYPE         VID
vnic1        e1000g0      100    2:8:20:a8:af:ce   random              0

You can now use your VNIC as if it were a "classical" physical link: you can plumb it and bring it up with the usual Solaris procedures, such as ifconfig and the Solaris network configuration files.
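
As a quick, hedged example (the IP address is a placeholder), bringing vnic1 up with a static address on an OpenSolaris or Solaris Express host could look like this:

# ifconfig vnic1 plumb
# ifconfig vnic1 192.168.1.10 netmask 255.255.255.0 up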

Resource Control

Solaris network virtualization is tightly integrated with Solaris Resource Control. After a VNIC is created you can attach resource control parameters to it, such as a cap on its maximum bandwidth consumption or on its CPU usage.

Bandwidth Management

Just as for a physical link, you can use the dladm command to establish a maximum bandwidth limit on a whole VNIC:

# dladm set-linkprop -p maxbw=300 vnic4
# dladm show-linkprop vnic4
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
vnic4        autopush        --   --             --             -- 
vnic4        zone            rw   --             --             -- 
vnic4        state           r-   unknown        up             up,down 
vnic4        mtu             r-   1500           1500           1500 
vnic4        maxbw           rw     300          --             -- 
vnic4        cpus            rw   --             --             -- 
vnic4        priority        rw   high           high           low,medium,high 
vnic4        tagmode         rw   vlanonly       vlanonly       normal,vlanonly 
vnic4        protection      rw   --             --             mac-nospoof,
                                                                ip-nospoof,
                                                                restricted 
vnic4        allowed-ips     rw   --             --             -- 

vnic4's maximum bandwidth limit is now set to 300 Mbps (megabits per second is the default unit for the maxbw property).

If you want to read an introduction to Solaris Projects and Resource Control you can read this blog post.

Using VNICs

VNICs are useful on a variety of use cases. VNICs are one of the building blocks of a full fledged network virtualization layer offered by Solaris. The possibility of creating VNICs on the fly will open the door to complex network setups and resource control policies.

VNICs are especially useful when used in conjunction with other virtualization technologies such as:
  • Solaris Zones.
  • Oracle VM.
  • Oracle VM VirtualBox.

Using VNICs with Solaris Zones

Solaris Zones can use a shared or an exclusive IP stack. A zone with an exclusive IP stack has its own instance of the variables used by the TCP/IP stack, which are not shared with the global zone. This basically means that a Solaris Zone with an exclusive IP stack can have:
  • Its own routing table.
  • Its own ARP table.

and whatever other parameters Solaris lets you set on an IP stack.

Before Crossbow, the number of physical links on a server was a serious constraint when you needed to set up a large number of Solaris Zones with an exclusive IP stack. Crossbow removes that limit, and having a large number of exclusive IP stack non-global Zones is not an issue any longer.
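
A minimal sketch of assigning a dedicated VNIC to an exclusive IP stack zone (the zone and VNIC names are placeholders) could be the following:

# zonecfg -z myzone
zonecfg:myzone> set ip-type=exclusive
zonecfg:myzone> add net
zonecfg:myzone:net> set physical=vnic1
zonecfg:myzone:net> end
zonecfg:myzone> commit
zonecfg:myzone> exit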

Other Virtualization Software

The same reasoning applies to other virtualization software such as Oracle VM or Oracle VM VirtualBox: for every guest instance, you can create the VNICs it needs for its exclusive use.

In another post I'll focus on VirtualBox and describe how VNICs can be used with its guests.

Next Steps

There's more to Solaris Network Virtualization: these are just the basics. For instance, you will be able to fully virtualize a network topology by using:
  • VNICs.
  • Virtual Switches.
  • Etherstubs.
  • VLANs.

As far as resource control is concerned, bandwidth limits are just the beginning. As sketched after the following list, Solaris Network Virtualization lets you finely control your VNIC usage on a:
  • Per-transport basis.
  • Per-protocol basis.
  • CPU consumption per VNIC basis.
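
For instance, flows can be defined on top of a VNIC with the flowadm command and then given their own properties. A hedged sketch that caps TCP traffic on vnic1 at 100 Mbps (the flow name is a placeholder) could be:

# flowadm add-flow -l vnic1 -a transport=tcp tcpflow
# flowadm set-flowprop -p maxbw=100M tcpflow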

To discover what else Solaris Network Virtualization can do for you, keep on reading this blog and check out the official project documentation. You could also install an OpenSolaris guest with VirtualBox and experiment for yourself: there's nothing like a hands-on session.