Tuesday, 27 January 2009

Free Ubuntu Pocket Guide and Reference

Ubuntu neophytes and gurus alike have cause to rejoice. Here is a gem of a deal - a free ebook on Ubuntu authored by none other than Keir Thomas. The book is divided into seven chapters spanning 170 pages, and it contains a wealth of information: an introduction to Ubuntu, installing and configuring it on your machine, coming to grips with the desktop, a lucid explanation of the various system administration tasks you should carry out to maintain your system, and of course the security aspects of Ubuntu, which are dealt with in the seventh chapter of the book.

This is a wonderful book which is sure to be a ready pocket reference for all Linux users running Ubuntu on their machines. And true to Keir's class, the language is lucid and to the point. The sapient advice he offers for many of the common problems faced by Ubuntu users puts this book on a level of its own.

While the author has been gracious in providing the ebook as a free download, the real value lies in buying a print edition. Owing to its pocket size (8 x 5.2 x 0.4 inches), it is ideal to carry wherever you go without much hassle. You can order a print copy from Amazon for a nominal price of $9.94.

Monday, 26 January 2009

Linus Torvalds ditches KDE 4 for GNOME

Linus Torvalds says he has ditched KDE for good and is now sleeping with its arch rival GNOME. Oh well, rhetoric apart, he says the move to GNOME happened because, in KDE 4, he found it quite bothersome that he couldn't get his right mouse button to bring up the menu he wanted. In short, he ran into usability issues while using KDE 4.0.

In an interview given to Rodney Gedda of "Computer World" Australia, he had this to say, and I quote:
I used to be a KDE user. I thought KDE 4.0 was such a disaster I switched to GNOME. I hate the fact that my right button doesn't do what I want it to do. But the whole "break everything" model is painful for users and they can choose to use something else.

I realise the reason for the 4.0 release, but I think they did it badly. They did so many changes it was a half-baked release. It may turn out to be the right decision in the end and I will re-try KDE, but I suspect I'm not the only person they lost.


I am sure the GNOME camp must be rejoicing at having won over a high profile Linux user to their side. This, even though a few years back Linus Torvalds had gone on record severely criticizing GNOME for over-simplifying the user interface.

Linus Torvalds was in Australia to attend the annual linux.conf.au organised by Linux Australia. While he was rather critical of KDE 4 in its current form, he did say it was a good thing for Nokia to release Qt under the LGPL. Among other things, he also gave his views on Microsoft Windows 7, advising Microsoft to release sooner and decouple the operating system from the applications. A really interesting interview.

Thursday, 22 January 2009

New features in OpenOffice.org 3.1

OpenOffice.org rocks! Big time. The OpenOffice.org suite bundled with Ubuntu 8.10 Intrepid Ibex, which I am running on my machine, is version 2.4, even though version 3.0 has been out for some time now. But supposedly big things are happening in the yet to be officially released OpenOffice.org 3.1.

Some of the enhancements are antialiasing of images to make them smoother, improvements in charts, grammar checking and hyperlink management, just to name a few. Will OpenOffice.org 3.1 be a Microsoft Office 2007 killer? Maybe not. But it is closing the gap by leaps and bounds with each (minor) release. Moreover, it can be obtained by one and all at an unbeatable price - free.

Learn more about the new features being introduced in OpenOffice.org 3.1 which is due to be officially released 63 days hence.

Sun Microsystems releases Sun xVM VirtualBox v. 2.1.2


On January 21, 2009, Sun Microsystems announced the release of Sun xVM VirtualBox v. 2.1.2.

This is a minor release with stability improvements and bug fixes. The list of changes from the official changelog is:
  • USB: improved support for recent Linux hosts
  • VMM: fixed guru meditation for PAE guests on non-PAE hosts (AMD-V)
  • VMM: fixed guru meditation on Mac OS X hosts when using VT-x
  • VMM: allow running up to 1023 VMs on 64-bit hosts (used to be 127)
  • VMM: several FreeBSD guest related fixes (bugs #2342, #2341, #2761)
  • VMM: fixed guru meditation when installing Suse Enterprise Server 10U2 (VT-x only; bug #3039)
  • VMM: fixed guru meditation when booting Novell Netware 4.11 (VT-x only; bug #2898)
  • VMM: fixed VERR_ADDRESS_TOO_BIG error on some Mac OS X systems when starting a VM
  • VMM: clear MSR_K6_EFER_SVME after probing for AMD-V (bug #3058)
  • VMM: fixed guru meditation during Windows 7 boot with more than 2 GB guest RAM (VT-x, nested paging only)
  • VMM: fixed hang during OS/2 MCP2 boot (AMD-V and VT-x only)
  • VMM: fixed loop during OpenBSD 4.0 boot (VT-x only)
  • VMM: fixed random crashes related to FPU/XMM with 64 bits guests on 32 bits hosts
  • VMM: fixed occasional XMM state corruption with 64 bits guests
  • VMM: speed improvements for real mode and protected mode without paging (software virtualization only)
  • GUI: raised the RAM limit for new VMs to 75% of the host memory
  • GUI: added Windows 7 as operating system type
  • VBoxSDL: fixed -fixedmode parameter (bug #3067)
  • Clipboard: stability fixes (Linux and Solaris hosts only, bug #2675 and #3003)
  • 3D support: fixed VM crashes for certain guest applications (bugs #2781, #2797, #2972, #3089)
  • LsiLogic: improved support for Windows guests (still experimental)
  • VGA: fixed a 2.1.0 regression where guest screen resize events were not properly handled (bug #2783)
  • VGA: significant performance improvements when using VT-x/AMD-V on Mac OS X hosts
  • VGA: better handling for VRAM offset changes (fixes GRUB2 and Dos DOOM display issues)
  • VGA: custom VESA modes with invalid widths are now rounded up to correct ones (bug #2895)
  • IDE: fixed ATAPI passthrough support (Linux hosts only; bug #2795)
  • Networking: fixed kernel panics due to NULL pointer dereference in Linux kernels <>
  • Networking: fixed intermittent BSODs when using the new host interface (Windows hosts only; bugs #2832, #2937, #2929)
  • Networking: fixed several issues with displaying hostif NICs in the GUI (Windows hosts only; bugs #2814, #2842)
  • Networking: fixed the issue with displaying hostif NICs without assigned IP addresses (Linux hosts only; bug #2780)
  • Networking: fixed the issue with sent packets coming back to internal network when using hostif (Linux hosts only; bug #3056).
  • NAT: fixed port forwarding (Windows hosts only; bug #2808)
  • NAT: fixed booting from the builtin TFTP server (bug #1959)
  • NAT: fixed occasional crashes (bug #2709)
  • SATA: vendor product data (VPD) is now configurable
  • SATA: raw disk partitions were not recognized (2.1.0 regression, Windows host only, bug #2778)
  • SATA: fixed timeouts in the guest when using raw VMDK files (Linux host only, bug #2796)
  • SATA: huge speed up during certain I/O operations like formatting a drive
  • SATA/IDE: fixed possible crash/errors during VM shutdown
  • VRDP: fixed loading of libpam.so.1 from the host (Solaris hosts only)
  • VRDP: fixed RDP client disconnects
  • VRDP: fixed VRDP server misbehavior after a broken client connection
  • VBoxManage showvminfo: fixed assertion for running VMs (bug #2773)
  • VBoxManage convertfromraw: added parameter checking and made it default to creating VDI files; fixed and documented format parameter (bug #2776)
  • VBoxManage clonehd: fixed garbled output image when creating VDI files (bug #2813)
  • VBoxManage guestproperty: fixed property enumeration (incorrect parameters/exception)
  • VHD: fixed error when attaching certain container files (bug #2768)
  • Solaris hosts: added support for serial ports (bug #1849)
  • Solaris hosts: fix for Japanese keyboards (bug #2847)
  • Solaris hosts: 32-bit and 64-bit versions now available as a single, unified package
  • Linux hosts: don’t depend on libcap1 anymore (bug #2859)
  • Linux hosts: compile fixes for 2.6.29-rc1
  • Linux hosts: don’t drop any capability if the VM was started by root (2.1.0 regression)
  • Mac OS X hosts: save the state of running or paused VMs when the host machine’s battery reaches critical level
  • Mac OS X hosts: improved window resizing of the VM window
  • Mac OS X hosts: added GUI option to disable the dock icon realtime preview in the GUI to decrease the host CPU load when the guest is doing 3D
  • Mac OS X hosts: polished realtime preview dock icon
  • Windows Additions: fixed guest property and logging OS type detection for Windows 2008 and Windows 7 Beta
  • Windows Additions: added support for Windows 7 Beta (bugs #2995, #3015)
  • Windows Additions: fixed Windows 2000 guest freeze when accessing files on shared folders (bug #2764)
  • Windows Additions: fixed Ctrl-Alt-Del handling when using VBoxGINA
  • Windows Additions Installer: Added /extract switch to only extract (not install) the files to a directory (can be specified with /D=path)
  • Linux installer and Additions: added support for the Linux From Scratch distribution (bug #1587) and recent Gentoo versions (bug #2938)
  • Additions: added experimental support for X.Org Server 1.6 RC on Linux guests
  • Linux Additions: fixed bug which prevented to properly set fmode on mapped shared folders (bug #1776)
  • Linux Additions: fixed appending of files on shared folders (bug #1612)
  • Linux Additions: ignore noauto option when mounting a shared folder (bug #2498)
  • Linux Additions: fixed a driver issue preventing X11 from compiling keymaps (bug #2793 and #2905)
  • X11 Additions: workaround in the mouse driver for a server crash when the driver is loaded manually (bug #2397)

Monday, 19 January 2009

Linux Commands - 10 Useful tricks for Admins

I have heard a saying - the one thing which sets a Linux administrator guru apart from a novice is how much more the former is able to accomplish with so few keystrokes. I cannot vouch for the veracity of that saying. But IBM developerWorks has, as usual, put together a collection of tricks which will help raise the efficiency of any system administrator by a notch or two. I am no guru, nor am I a novice; I guess I fall somewhere in the grey area between the two extremes, which makes reading the tricks really informative for me.

Among the tricks described are Linux commands we seldom use, such as fuser, reset and screen, as well as invaluable tips like resetting the root password, SSH back-door entry, using a VNC-over-SSH tunnel to a remote machine, checking your bandwidth, a couple of command line tricks, spying on the console and, finally, random system information collection.

Read the article to learn more.

Saturday, 17 January 2009

Perigean moon above the woods of Castilla

As usual, I was driving my car from Madrid to Segovia county, taking my girlfriend home. Saturday and Sunday night were two fantastic full moon nights and we stopped the car at the Alto del León to observe the beauty of our satellite. Maybe it was the perfectly clear atmosphere, maybe it was the complete absence of other light sources, maybe it was just the magic of the moment, but the moon seemed really different that night. Rarely have I seen such a big and bright moon! When we stopped the car, the trees in the wood were casting shadows as if it were evening. Impressive.

This morning I remembered the phenomenon and had a quick check just to confirm the obvious: it was a perigean moon. As you (should) remember from studying Kepler's laws of planetary motion, the orbit of a planet is an ellipse with the Sun at one focus. In the earth-moon problem, the moon's orbit is an ellipse with the earth at one focus. Now, if you remember what an ellipse is, it's clear that there exist two points in the orbit, called perigee and apogee, at which the earth-moon distance is, respectively, minimum and maximum.



Long story short: what we were observing Saturday night was a full moon at its perigee. Approximately 15% bigger and brighter! I think it was my first perigean moon and, moreover, I enjoyed it in the best company!

Which format do you archive your audio files with?

Good question. It used to be... No doubt that as time passes by, even if storage is orders of magnitude cheaper than it was 10 years ago, the most popular format is mp3. I remember when I started encoding mp3 files: my poor Pentium 133 could encode a standard audio CD in one night. I used the vintage Fraunhofer encoder at 128 kbit/s. And the quality was obviously bad.

I get really angry when I hear or read the words "CD quality" associated with mp3. The possibilities are:
  • people just repeat what they hear without testing
  • people really don't have an idea of what they're speaking about
  • people don't even know what sampling and the Shannon theorem are...
Whichever the reason, it's not that hard to run a quick test and hear the difference after an encoding. Even Apple's iTunes AAC codec is not that good, after all.

If you're interested in the details, you can check this. Or better, rip a dynamic CD such as a classical music CD and test various lossy encoders across their full range of options. You'll easily discover where the loss is.

Don't say no to good music, just use FLAC or another lossless codec of your choice.
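
If you want to give lossless archiving a try, the reference FLAC encoder makes it trivial; a minimal sketch (the file name is hypothetical):

$ flac --best track01.wav -o track01.flac
$ flac -t track01.flac

The -t switch decodes the file in test mode to verify its integrity; --best simply trades encoding time for the smallest file.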

Don't overlook largefile support when using Solaris 10

This is the typical thing you don't think about until something you carefully planned fails unexpectedly. I was answering a post (yet another) in an Internet forum and it seems that people don't know or think about large files and large file support when issuing their commands.

Largefile

As man largefile states:

Standards, Environments, and Macros largefile(5)

NAME
largefile - large file status of utilities

DESCRIPTION
A large file is a regular file whose size is greater than or equal to 2 Gbyte ( 2^31 bytes). A small file is a regular file whose size is less than 2 Gbyte.

You're using Solaris, aren't you? So it's not at all strange that the directory you're backing up (for example) contains some files with these characteristics. When in doubt, and before wasting precious CPU time, check:
  • if you need large file support
  • if the utilities you're going to use support large files
Both tasks are easy, and for the latter the already mentioned largefile man page is all you need.

pax, tar, cpio

As far as it concerns these utilities, their large file status is:
  • pax and cpio are large-file aware, but still have a limit: they "only" support files up to 8 GB - 1 byte
  • tar has full large file support
That's the reason why I always go with tar. And that's the reason I always use the e option (which stops tar in case of error) and usually never use the v option, which clutters the terminal with a flood of messages among which it's sometimes impossible to spot warnings and errors. Remember good old standard output and standard error redirection if you really want to know whether something has gone bad.
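
A minimal sketch of what I mean, with hypothetical paths (the error log stays empty if everything went fine):

$ tar cef /backup/projects.tar /export/projects 2> /var/tmp/tar-projects.err
$ test -s /var/tmp/tar-projects.err && echo "tar reported problems"

The e function letter makes tar exit at the first unexpected error, and the redirection keeps stderr in a file you can inspect afterwards instead of scrolling through a noisy terminal.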

Backing up ZFS file systems

This is one of the good things ZFS has brought us. Backing up a file system is a ubiquitous problem, even in your home PC, if you're wise and care about your data. As many things in ZFS, due to the telescoping nature of this file system (using words of ZFS' father, Jeff Bonwick), backing up is tightly connected to other ZFS' concepts: in this case, snapshots and clones.

Snapshotting

ZFS lets the administrator perform inexpensive snapshots of a mounted file system. Snapshots are just what their name implies: a photo of a ZFS file system at a given point in time. From that moment, the file system from which the snapshot was generated and the snapshot itself begin to branch, and the space required by the snapshot will roughly be the space occupied by the differences between the two. If you delete a 1 GB file from a snapshotted file system, for example, the space accounted for that file will be charged to the snapshot which, obviously, must keep track of it because that file existed when the snapshot was created. So far, so good (and easy). Creating a snapshot is also incredibly easy: provided that you have a role with the required privileges, you just issue the following command:

$ pfexec zfs snapshot zpool-name/filesystem-name@snapshot-name

Now you have a photo of the zpool-name/filesystem-name ZFS file system in a given point in time. You can check about its existence by issuing:

$ zfs list -t snapshot

which in this moment, in my machines, gives me:

$ zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
rpool/export/home/enrico@20081231 71.3M - 14.9G -
[...]

This means that the ZFS file system which hosts my home directory has been snapshotted and there's a snapshot named 20081231.

Cloning

Cloning is pretty much like snapshotting, with the difference that the result of the operation is another ZFS file system, obviously mounted at another mount point, which can be used like any other file system. As with snapshots, the clone and the originating file system will begin to diverge and the differences will begin to occupy space in the clone. The official ZFS administration documentation has detailed and complete information about this topic.
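
A minimal sketch, reusing the placeholder names from the snapshot section (the clone name is made up):

$ pfexec zfs clone zpool-name/filesystem-name@snapshot-name zpool-name/clone-name
$ zfs list -r zpool-name

The first command creates a writable file system backed by the snapshot; the second simply lists the pool recursively so you can see the new dataset and its mount point.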

Backing up

This isn't really what the documentation calls it: it just refers to ZFS send and receive operations. As seen, we've got a means to snapshot a file system: there's no need to unmount the file system or run the risk of getting an inconsistent set of data because a modification occurred during the operation. This alone is worth switching to ZFS for, in my opinion. Now there's more: a snapshot can be dumped (serialized) to a file with a simple command:

$ pfexec zfs send zpool-name/filesystem-name@snapshot-name > dump-file-name

This file contains the entire ZFS file system: files and all the rest of the metadata. Everything. The good thing is that you can receive a ZFS file system just by doing:

$ pfexec zfs receive another-zpool-name/another-filesystem-name < dump-file-name

This operation creates another-filesystem-name on pool another-zpool-name (it can even be the same zpool you generated the dump from) and a snapshot called snapshot-name will also be created. In the case of full dumps, the destination file system must not exist and will be created for you. Easy. A full backup with just two lines, a bit of patience and sufficient disk space.

There are the usual variations on the theme. You don't really need to store the dump in a file: you can just pipe send into receive and do it in one line, with no need of extra storage for the dump file:

# zfs send zpool-name/filesystem-name@snapshot-name | zfs receive another-zpool-name/another-filesystem-name

And if you want to send it to another machine, no problems at all:

# zfs send zpool-name/filesystem-name@snapshot-name | ssh anothermachine zfs receive another-zpool-name/another-filesystem-name

Incredibly simple. ZFS is really revolutionary.

Incremental backups

ZFS, obviously, also supports incremental send and receive: the -i option lets you send only the differences between one snapshot and another. These differences will be loaded and applied at the receiving side; in this case, obviously, the source snapshot must already exist there. You start with a full send and then you go on with increments. It's the way I'm backing up our machines and it's fast, economical and reliable. A great reason to switch to ZFS, let alone Solaris.
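
A sketch of the full cycle with hypothetical snapshot names, using the same placeholder pools and machines as above:

# zfs snapshot zpool-name/filesystem-name@monday
# zfs send zpool-name/filesystem-name@monday | ssh anothermachine zfs receive another-zpool-name/another-filesystem-name
# zfs snapshot zpool-name/filesystem-name@tuesday
# zfs send -i @monday zpool-name/filesystem-name@tuesday | ssh anothermachine zfs receive another-zpool-name/another-filesystem-name

The first send is a full one; the second, thanks to -i, only carries the blocks that changed between monday and tuesday and is applied on top of the monday snapshot already present on the receiving side.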

Setting Up Solaris 10 Projects to Control Resource Usage

Overview

This is one of my favorite features of the Solaris 10 operating system. Nowadays, even relatively small machines have resources, such as CPUs and memory, that allow the execution of a great number of processes in a single box. The control an administrator can enforce on resource utilization is fundamental for the machine's power to be fully utilized without jeopardizing the responsiveness of certain processes. Moreover, the usage profile of a resource may be a complex function which also depends on parameters such as time. Solaris 10, by default, adapts itself dynamically so that all of the running applications have equal access to resources. The default behavior can be customized so that applications can access resources on a preferential basis or even be denied access under certain conditions.

This post will give a quick and basic description of resource management on the Solaris 10 operating system. As usual, the official documentation can be consulted on Sun Microsystems' Documentation Center.

Projects and Tasks

Projects and tasks are the basic entities used to identify workloads in the Solaris 10 operating system. A project is associated with a set of users and a set of groups. Users and groups can run their processes in the context of a project they're a member of. Both users and groups can be members of more than one project, so the relations (project, user) and (project, group) are many-to-many relationships. The project is the basic entity against which the usage of resources can be restricted. The task is the entity to which a process is associated; the project, in turn, is associated with a set of tasks. Tasks will be described later on.

Default Projects

Every user and every group is associated with a default project, which is the project under which their processes run if not otherwise specified. The algorithm that Solaris 10 uses to determine the default project of a user or a group is the following (an example follows the list):
  • it checks whether there's an explicit association in the /etc/user_attr database by means of a project attribute. If there is one, that's the default project.
  • it checks whether a project named user.user-id or group.group-id exists, in this order, in the projects database. If there is one, that's the default project.
  • it checks whether the special default project exists in the projects database. If it does, that's the default project.
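
For example, the first rule gives a quick way to pin a user to a specific default project. A hedged sketch, assuming a project named batch already exists in the projects database and user enrico is in its user list (usermod writes the project= attribute into /etc/user_attr):

# usermod -K project=batch enrico
# su - enrico -c "id -p"

The second command just verifies that a fresh login session for enrico now lands in the batch project.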

The Project Database

The projects database stores all the information related to the existing projects in the operating system. The project database is a plain text file which can be read and modified with a set of commands such as:
  • projadd, to add projects
  • projmod, to modify projects
  • projects, to read the project database
  • projdel, to remove projects
The structure of the file is pretty simple:

project-name:project-id:comment:user-list:group-list:attributes

The fields are the following:
  • project-name: the name of the project, which can contain only alphanumeric characters plus the - and _ characters. The . character can appear only in default user and default group projects.
  • project-id: the numerical id of the project, which can be a number between 0 and UID_MAX.
  • comment: a string which describes the project.
  • user-list: the list of users which can be members of this project, whose syntax is described later.
  • group-list: the list of groups which can be members of this project, whose syntax is described later.
  • attributes: a set of attributes which apply to the project, whose syntax is described later.

Although the file syntax is very simple, this and many other Solaris configuration files should not be edited manually. Use the corresponding commands instead.

User and Group List

The list of users and groups in the projects database is a comma separated list of values, each of which can be one of the following:
  • a user or group name
  • *, to allow all users and groups to join the project
  • !*, to allow nobody to join the project
  • !name, to disallow a specific user or group to join the project

Attributes

The set of attributes for a project is a semicolon (;) separated list of (key, value) pairs with the syntax:

key=value

Both the semicolon (;) and the colon (:) cannot be used in the value definition.
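
Putting the syntax together, a hypothetical entry in the projects database could look like the following, where two users of group staff are members of a project capped at 500 LWPs (names, id and resource control are made up for illustration):

batch:4001:Nightly batch jobs:alice,bob:staff:project.max-lwps=(privileged,500,deny)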

Task

Whenever a user logs in to a project, a new task for that project is created and it contains the login process. The task is given its task id and every process launched in that login session will be associated with the task which owns the login process. The task is the basic workload entity and can also be viewed as a process group; indeed, many operations which are supported on process groups can also be executed on tasks. The commands which create tasks are the following:
  • login
  • su
  • newtask
  • cron
  • setproject

Determine the Current Project and Task

If you're wondering which project you're logged in to and which task you're running your processes under, you can check with the id command:

$ id -p
uid=101(enrico) gid=10(staff) projid=10(custom-project)

I'm currently logged in as user enrico, a member of group staff, and I'm currently a member of project custom-project.

The ps command can also show project and task information:

$ ps -o user,uid,pid,projid
USER UID PID PROJID
enrico 101 15274 10


Other Useful Commands

The Solaris 10 operating system has a number of commands which can be used to view and manage processes using the project or the task they belong to; a couple of examples follow the list.
  • pgrep -T | -J, to look for processes associated with the specified task, using the -T option, or the specified project, using the -J option
  • pkill -T | -J, to send a signal to the processes associated with the specified task, using the -T option, or the specified project, using the -J option
  • prstat -T | -J, to view process statistics with task statistics, using the -T option, or project statistics, using the -J option
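
A quick sketch of how they look in practice, reusing the project id 10 and task id 27 that appear elsewhere in this post:

$ prstat -J
$ pgrep -J 10
$ pkill -TERM -T 27

The first shows a per-project usage summary in addition to the process listing, the second prints the pids of the processes running in project 10, and the last one sends SIGTERM to every process belonging to task 27, so handle it with care.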

Creating an User and a Project

Let's assume we want to cap the CPU utilization of a subset of well-known CPU-intensive processes. We do this because we're not concerned about the time these processes need to finish their job, but we do want at least 50% of the available CPUs on our workstation to remain free for other processes during normal system and user activity. The first thing we can do is create a project for these processes and then run them in a task associated with that project. In our case, since the processes are non-interactive, the easiest path is probably to create a dedicated user for them and create a default user project with the characteristics we need. This way we don't even have to bother creating a specific task manually to launch these processes: they'll run in the default user project. Let's call the user custom-user and, for simplicity's sake, put it in the existing staff group:

# useradd -d /export/home/custom-user -m -k /etc/skel -P a-suitable-profile -G staff -c "custom-user" -s /bin/bash custom-user

Our user would probably need to be given a suitable profile for RBAC access control; this step will depend on your specific needs and environment. If this user needs no special RBAC configuration, we can omit the -P option. Other user-related configuration activities are omitted from this example.

Once the user is created, we can create its default project using the user.user-id syntax:

# projadd -U custom-user -K "project.cpu-cap=(privileged,50,deny)" user.custom-user

Projects and Resource Caps

This command will create a project, named user.custom-user, that will be the default project for the user custom-user. This project has the following attribute:


project.cpu-cap=(privileged,50,deny)

This is one of the many resource controls that Solaris 10 puts at our disposal. For a complete list of resource controls please read the official documentation. This particular control limits the amount of CPU available at the project level. It's a privileged control and a value of 50 corresponds to 50% of one CPU. When the limit is reached, the associated behavior is deny, which actually denies the project more CPU. The machine I'm running this on is a two-processor machine, so all of the processes running under the user.custom-user project will share 50% of one CPU, leaving the remaining 50% of that CPU and the entire other CPU free.
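
To double check the cap on the live system you can query the resource control with prctl; a sketch (the project must have at least one running process for the query to return something meaningful):

$ prctl -n project.cpu-cap -i project user.custom-user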

I use this project to execute long-running tasks which in reality aren't always expensive in terms of CPU, but with such a configuration I can sit at my workstation, launch the processes I need in this project and let them run without worrying that the machine will be left without CPU power to run the desktop and the applications I'm using when logged in.

I have configured many different projects for the different classes of processes I run and their administration is pretty easy, once you're familiar with the resource controls available in Solaris 10. The mechanism of associating a default project with users and groups, moreover, is a quick way for an administrator to cap the resources used by a particular user, based on individual needs, or by a particular group. In this sense the word project really fits this kind of functionality: we have groups of people working on projects with different resource needs, and Solaris 10 puts at our disposal a concept which really reflects the organization of our work.

Creating a Task and Moving Processes to Projects

As described in the previous section, the default project association is a quick means of administering server resources. Our users, indeed, aren't even conscious that they're running their processes in such a resource-capped environment.

Sometimes, however, you'll need to launch processes in a different project or move processes from one project to another. For example, I haven't configured a project for myself, as I'm not assigned to one during my working day. Sometimes, though, I need to execute processes in a particular project, such as the long-running processes I was describing earlier, or move an existing process from one project to another: this is the case if, for example, a process is unexpectedly consuming a larger amount of resources than I wish, and I prefer to move it to a resource-capped project in order to improve the machine's responsiveness.

Creating a task in a specific project, provided that your user can join such a project, can be done with the following command:

$ newtask -v -p my-project
25

This command creates a new task in the my-project project and the -v flag lets you see the id of the newly generated task, which can be useful when used with other commands that accept this parameter. Launching the newtask command in your shell also has the side effect of putting your shell into the newly generated task, thus allowing you to immediately launch new processes to be executed in this task.

Let's now suppose that you've detected a process which is consuming too much of a resource and you prefer to move it inside a properly capped project. The first thing you need is the process id of this process, which can be obtained in a number of ways:

$ pgrep process-to-move
15257

Now that we have the process id of the process to be moved, we can create a new task in the desired project and move the process inside it. This can easily be done with a one-liner:

$ newtask -v -p my-project -c 15257
27

It's not strictly necessary, but if you want to be sure that the process is now executing in the context of the desired project, you can use, for example:

$ pgrep -T 27
15257

which confirms that the process number 15257 is executing inside the task 27.

Associating a SMF Managed Service with a Project

So far, we've seen the basic tools to define and manage Solaris 10 projects. The administrator is now able to:

  • Define projects.
  • Define resource caps for a project.
  • Move processes to tasks and assign tasks to projects.
  • Assign a project to a user and/or a group of users.

So far, so good, but there's still a little gotcha. How do you assign the identity (or the project) under which an SMF-managed service will run? In another installment we'll see how to do it.

Resource Controls for Network Virtualization

Solaris Network Virtualization facility takes advantage of resource controls as well. You can read this blog post for a quick walkthrough.

Taking Advantage of Solaris Projects and Resource Controls for Virtualization Technologies

Solaris Projects and Resource Controls are of great advantage when used in conjunction with virtualization technologies such as VirtualBox. You can read this blog post for a quick introduction and a walkthrough to get you started.

Conclusion

This was a really short introductory walk through Solaris 10's capabilities for configuring and monitoring resource utilization. The official documentation describes the list of resource controls that Solaris 10 puts at the administrator's disposal and the complete set of commands that can be used to monitor and even change these parameters at runtime, when they can be real life-savers. Knowing of their existence, at least, may one day make your life easier.

Magi are coming to town

And for me, it's the first time it happens.

I think everybody knows the story, at least to some degree. The biblical Magi, in the Christian tradition, were three Kings coming from the east to Jerusalem to pay a visit to the newly born Jesus and bring him gifts. When I was a child, the only tradition my family observed was adding the statues of the three Kings after Christmas, and I think we did it on Epiphany.

What I did not know was that in some countries, like Spain, this is the day on which gifts are traditionally given to the kids, who believe that it is the Kings who bring them, instead of Santa Claus. This tradition is indeed very similar to Santa Claus': kids write letters with their wishes and promise to behave well during the new year. Here in Spain, the Kings can be seen in the shops, just as I used to see Santa when I went shopping with my father. Curious, indeed. I discovered that Santa didn't exist when I was pretty young and now, almost 25 years later, I'm celebrating this new tradition for the first time! Tonight, the Kings will bring me my first gift ever!

Another tradition which is very popular here in Spain is the Cabalgata de los Reyes Magos, or the ride of the Kings. On January 5th, the Kings can be seen in the streets of many towns and, as they travel to worship Jesus, they throw candies from their horses.

Another small difference is that, instead of leaving the gifts below the Christmas tree, the Magi leave the gifts on a terrace, on top of a pair of shoes. As usual, kids leave something to eat to pay their respects to the Magi.

Funny. Nowadays, kids in Spain are easily exposed to the traditions of other countries, above all by means of television, and by other means such as the internet. It was natural for me to see so many Santas in front of houses, but here in Spain it's kind of a newly imported tradition. If you were born in these days, you'll probably be eligible for a double gift: one from Santa, and one from the Kings!

Sending batch mail on Solaris 10 (with attachments)

The typical problem. You have to do a repetitive task and you bless UNIX and its shells. But sometimes you wonder how to do it. This time, I had to write a bunch of emails to a set of email addresses. So far, so good. Solaris and many other UNIX flavors have utilities such as mail and mailx with which you can easily do the job. I usually use mailx to send emails from scripts and I'm very happy with it.

But today I had to send emails with attachments, and mailx has no built-in support for them. If you know something about email standards, you can easily figure out how to do it. Google searches about the topic are full of examples of sending properly formatted mail with mailx, where properly means uuencoding the attachments, concatenating them with the mail message and then piping it all to mailx. Some examples you can find are even easily scriptable. Reinventing the wheel, more often than not, is not a means of progress, so I decided to go with mutt, a powerful program I had always neglected in favor of mailx.
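
For the record, the classic mailx recipe I'm referring to looks roughly like this (a sketch; file names and address are made up):

$ ( cat body.txt ; uuencode report.pdf report.pdf ) | mailx -s "weekly report" someone@example.com

It works, but the attachment ends up uuencoded in the body, which some modern mail clients no longer decode gracefully; that's another reason to let mutt build a proper MIME message instead.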

mutt has a similar syntax and built-in support for attachments. Its configuration file is a powerful tool to create different sending profiles in which you can, for example:
  • set the user's mail identity
  • modify mail headers
  • write and attach hooks to mail events
I haven't spent too much time reading the mutt documentation yet, but it really seems worth the time. Just a one-liner (inside an over-simplified loop):

for i in addresses ; do
cat user-message-file | mutt -s subject -a attachment-1 ... attachment-n -- $i
done

Please note that the -- option is necessary only for separating the addresses from a list of multiple attachments. In the case of just one attachment, the line above reduces to:

for i in addresses ; do
cat user-message-file | mutt -s subject -a attachment $i
done

I also had to modify some headers, and both the mutt and muttrc man pages are well written and easy to search. The content of my (first) ~/.muttrc configuration file is:

set realname="myname"
set from=myuser@mydomain.com
set use_from=yes
set use_envelope_from=yes
my_hdr Reply-To: anotheraddress@mydomain.com

This way I told mutt:
  • to set my name
  • to set the From: header
  • to use the From: header
  • to force sendmail to use the same address from the From: header as the envelope address
  • to use a Reply-To: header
Some of these directives have their drawbacks, so always follow the golden rule and don't copy and paste these lines without doing your homework: read the manual and enjoy mutt.

First contact with a villancico

Another thorny subject: villancicos. I could translate this word as Christmas carol, but as far as I can see, the term villancico has a wider meaning.

I remember the first time I heard one: I was taking a walk inside a big mall a few days before Christmas and I noticed that everywhere you could hear the music of some villancico. At the time, I didn't even know their name, nor could I understand their words. Even so, the atmosphere of happiness was clearly transmitted to me and I started remembering the typical Christmas carols I'd been listening to all my life.

The spell was broken when, the following year, I could clearly understand their words. Some of them are so absurd that I started looking for information about this kind of Christmas carol. It turned out that the villancico even had a couple of centuries of glory as a poetic form, and I don't doubt the quality of Renaissance poetry.

As far as their poetic form is concerned, a villancico is formed by stanzas, usually two, followed by a refrain. As far as the content is concerned, they were meant to be didactic texts to introduce people to Christianity and help their conversion. One of the many funny evangelizing means that mankind has invented throughout the centuries.

So far, so good. The problem is: have you ever listened to one? I'm not saying heard. I said: listened. I could give plenty of examples, but the one that struck me most comes from a villancico which (I think) is called Rin rin. I really want to translate it for you because I couldn't find words to express the feeling I have.

Toward Bethlehem goes a donkey
rin, rin
I was mending it,
I mended it,
I did a mend,
I took it off.
Loaded with chocolate
it brings its chocolate machine
rin, rin,
I was mending it,
I mended it,
I did a mend,
I took it off.
its grinder and its stove
Mary, mary
come here soon
that the chocolate
they're eating.

In the hall of Bethlehem
rin, rin
I was mending it,
I mended it,
I did a mend,
I took it off.
some mice have entered
and to good St. Joseph
rin, rin
I was mending it,
I mended it,
I did a mend,
I took it off.
they gnawed the shorts.
Mary, mary
come here soon
that the pants
they're gnawing.

In the hall of Bethlehem
rin, rin
I was mending it,
I mended it,
I did a mend,
I took it off.
some thieves have entered
who, to the poor child in the cradle,
rin, rin
I was mending it,
I mended it,
I did a mend,
I took it off.
the nappies are stealing.
Mary, mary
come here soon
that the nappies
they're stealing.

Now, maybe we can just laugh at this nonsense. It's the only judicious thing left to do. But I was still wondering... Does anybody see any poetry in it? Does anybody see at least some didactic intent to convert somebody? I fail to. Speaking of St. Joseph's pants gnawed by mice seems almost disrespectful, though. And as for the mend... I really don't catch its meaning.

Italian restaurants in Spain: my first impressions

It always feels uncomfortable to me to speak about this kind of stuff. I think it's such a general subject that retrieving significant statistical data is an impossible task for an individual, unless, maybe, you're conducting a specific study. But I will try once again, all the same.

Being a foreigner sometimes means being exposed to discussions about stereotypes related to your homeland: it's a good starting point for a discussion, especially when speaking with somebody you've recently been introduced to. I think, or at least I hope, the same thing happens to all the members of the foreigners category residing in every country in the world.

Well, if you're guessing which is the first topic in this particular hit parade: ladies and gentlemen, it's food, one of the subjects where objectivity is out of the question. First of all, I would like to say that I'm not the typical traveler who fills his luggage with food lest he die of hunger... Wherever I went, I've always been glad to try the local dishes I surely would never try in my homeland. I still remember when I arrived in Poland wondering what local cuisine could be like. No idea. And I was delighted to try the typical Polish pierogi or the fantastic Polish soups that warmed my frozen body during the long winter nights spent in Warsaw. I still dream about the delicious Arabic desserts I had while having a cup of green tea.

More or less the same thing happened when I arrived in Spain. No knowledge about the local gastronomic traditions. I only knew about the existence of something called tortilla and I wasn't even sure it had something to do with eggs... And I can gladly declare that Spanish cuisine has been a pleasant surprise, too.

What's the matter then?

Well... I don't know why, but I suppose it has something to do with the excellent marketing strategies of my fellow countrymen. Wherever I may be, whatever I may be eating, a comparison with Italian dishes is almost assured. The problem with that, leaving aside how boring the subject is, is that true Italian dishes aren't easily found in Spain. The same thing happens in Italy when looking for a true paella valenciana or even something as simple as a sangría. No way. Contamination is everywhere. It's surely due to the lack of the necessary basic ingredients or to the differences they have in the different countries you buy them in. Subtler to grasp are the contaminations due to the necessary adjustments a dish undergoes to accommodate local tastes. In Spain, a shining example is paprika (pimentón in Spanish). We don't use it, so don't put it in. We don't have chorizo, so don't put it in a dish of pasta. Garlic in a pizza with ham and mushrooms? Never heard of that.

When making such considerations it's really hard not to appear conceited, and hurting somebody else's pride is very easy, particularly when nationality is part of the equation. But that isn't the point. I'm not trying to make a blind criticism of the tastes of the country I'm a guest of. I'm just questioning the adjective. I began to like paprika here in Spain, and so I did with chorizo and many more things. This doesn't mean, however, that I expect to see extraneous ingredients in dishes which are supposedly Italian.

This phenomenon doesn't only happen when some particularly kind host invites me to his home and tries to delight me with an Italian dish. I appreciate it, it's very kind of them and I think I'd make the same (mistake) if I were the host. The problem is that this happens in the majority of the Italian restaurants I've eaten in here in Madrid. "Italian restaurant" surely is a good tag to attract customers. I think the more xenophilous the people, the better such tags work. Chinese, Indian, Ethiopian, Italian, Vietnamese. Who cares? Sometimes it's just the fact of eating in an exotic place which, in the end, is often exotic only because of the name.

The magic in traditional dishes is often their simplicity. Centuries of hunger with few ingredients at hand and the wisdom to mix them magically. In Madrid, I'd rather eat a cocido than a pizza. In Warsaw I'd rather eat pierogi than spaghetti. That's the fact. Were I Spanish, I would go to an asador as often as I could. And when I really feel like eating an Italian dish, I'd go to a select restaurant and, as I already stated, there are very few which are worth the bill. In Madrid I have eaten in just two restaurants where I could close my eyes and pretend I was in Italy. Just two, and I'd been more or less obliged to try so many.

A suggestion for my fellow Spaniards and all of the fellow foreigners in the world. It's easy to pronounce this magical statement:

In (insert-your-favorite-country-here) you eat as nowhere else.

De gustibus et coloribus non est disputandum. But if you really love your traditional food, then eat it when you can. Promote it whenever you can. Eat it in the restaurants of your country and not only on Christmas day. Then tourists will see you and the effort will be worth the price. It's quite useless to speak about dishes you wouldn't eat in a restaurant yourself!

Noise and disk latency

I just ran into this interesting video by Sun's Brendan Gregg which shows how sufficiently loud noise produced in front of a disk rack can cause an observable spike in disk latency. It may seem a geeky thing, or maybe it's just the dinosaur in me which sweetly remembers the good ol' days when you even had to park disk heads in the landing zone if you wanted to move a disk safely. Nowadays I confess that I still have a certain respect when working at my workstations and never, ever, produce many vibrations on the table where they're lying.

Interesting. Something to hold in due consideration.

Experimental 3D acceleration in Sun xVM VirtualBox 2.1.0 works really great

With the new major version of Sun xVM VirtualBox, Sun introduced experimental 3D support via OpenGL. You can enable it in the virtual machine General settings panel:


I've been playing with it just to use Google Earth and, despite the experimental tag currently applied to this feature, I'm really pleased that enabling it produces a notable improvement in the performance of Google Earth in OpenGL mode. My workstation is pretty powerful, but working with such software without 3D acceleration was a pain. I suppose that whoever needs OpenGL acceleration in an application running in a VirtualBox guest will be very pleased with this new feature.




According to the user guide (page 67), the current experimental implementation has the following limitations:
  • It is only available in Windows XP and 32-bit Vista guests with the Windows Guest Additions installed.
  • Only OpenGL acceleration is presently available in those guests; Direct3D is not yet supported and will be added in a future release.
  • Because the feature is experimental at this time, it is disabled by default and must be manually enabled in the VM settings.
Great work, VirtualBox guys.

Silence in Valdemoro

I was going to shut my workstation down. I took my headset off and lit a cigarette when I realized how deep the silence is tonight. The windows are open because I usually smoke in this room. Tonight, moreover, the temperature is mild and I'm enjoying a fresh breeze on my face. Dogs are barking, probably at the latecomers arriving at some new year's dinner in a house in the neighborhood. I can also hear the sound of the leaves in the trees behind my house. The few crackling leaves that winter has yet to rip off the branches. I think it's the first time I've heard them since I've lived here in Valdemoro. Sometimes the silence is broken by a sudden explosion. Probably some kid who's already bored of the dinner with his parents. Child, you'll miss it, someday. The quiet sound of my workstation seems unnatural now, and the monitor is gleaming too much tonight.

It's time to shut it down.

Windows interoperability: sharing folders with the SMB protocol using Solaris CIFS server

Interoperability between Solaris and Windows has improved and is improving very much. In the case of file system sharing, the situation is now pretty good. There's no need to install Microsoft Services for UNIX on top of your Windows servers to be able to share folders with Solaris. One of the latest additions to the Solaris operating system is the CIFS Server which, as the official project page @ OpenSolaris.org states:

The OpenSolaris CIFS Server provides support for the CIFS/SMB LM 0.12 protocol and MSRPC services in workgroup and domain mode.

The official project page is the ideal starting point to look for information about installing and using the CIFS Server and Client components in Solaris. In this post I will describe how to quickly configure the CIFS Server to be able to share folders between your Solaris and Windows environments. I will use the new, and very simple, sharing semantics introduced in the latest versions of the ZFS file system.

What's impressive about these tools is the ease of use and administration. Both the ZFS commands and the CIFS Server commands are few, easy and intuitive. Sharing a ZFS file system is a no-brainer and just a few one-time configuration steps are necessary to bring your CIFS Server up and running.

Preparing a ZFS file system

We will share a ZFS file system which we usually create with the following command:

# zfs create file-system-name

Once the file system is created, we configure the SMB sharing:

# zfs set sharesmb=on file-system-name

As described in the official ZFS documentation (for Solaris Express) or in the zfs(1M) man page, the sharesmb property can be set to on, off or [options]. The last syntax is useful to pass parameters to the CIFS server. The most useful is the name parameter, which lets you override the automatic name generation for the SMB share:

# zfs set sharesmb=name=smb-name file-system-name

The automatic name generation works fine but sometimes it must change illegal characters which appear in the dataset name.
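
To check what has actually been configured, a quick query of the property is enough (file-system-name is the same placeholder used above):

# zfs get sharesmb file-system-name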

Setting up CIFS Server in workgroup mode

The CIFS Server can work in both domain and workgroup mode. The domain mode is useful when you connect to a Windows domain and the very flexible configuration is well detailed in the official CIFS service administrator guide. In my case the workgroup mode is fine and that's the configuration I'll detail here.

Starting the service

If it's not started yet, you'll have to start the CIFS server. Please be aware that if you're running Samba in your Solaris box, you'll have to stop it first.

# svcadm enable -r smb/server

Joining a workgroup

To be able to use shares, you have to join a workgroup:

# smbadm join -w workgroup

Configuring password encryption

To be able to authenticate you must configure Solaris to support password encryption for CIFS. To do this, open the /etc/pam.conf file and add the following entry:

other password required pam_smb_passwd.so.1 nowarn

Generating or recreating passwords

Now that CIFS password encryption has been configured, you'll have to regenerate the passwords for the users you want to use with it, because the CIFS service cannot use the Solaris password encryption that was in use before /etc/pam.conf was reconfigured. The passwd command will take care of that:

# passwd user
[...]

Conclusions

With just these few steps you'll have your CIFS server up and running in workgroup mode. Now you can share whichever ZFS file system you want just by setting its sharesmb property.
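
From the Windows side the share can then be mapped as usual; a hedged example, where host, share and user names are placeholders:

C:\> net use Z: \\solaris-host\smb-name /user:solaris-user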

Enjoy!

Modifying Sun Java Enterprise System installation or completely removing it on Solaris 10

I wrote other posts describing why and how I installed components from Sun Microsystems' Sun Java Enterprise System 5 (JES from now on) on Solaris. It may sound somewhat silly, but one of the questions related to the JES installer I have heard most so far is how the software can be uninstalled. That's strange, all the more if you consider the nature of the Solaris 10 package management system. Somehow, having a GUI makes the situation more complicated, because a person unfamiliar with Solaris 10 and with Sun's way of distributing software would expect the same installer to do the job. Whoever knows of its existence may also think that prodreg is sufficient to perform the uninstallation, but it's not, for the reasons I will clarify soon.

The worst thing, in my humble opinion, is that Sun's documentation is usually good and very detailed. For every product I can recall, there's always a detailed installation document. What's not so clear to the newbie, in reality, is that the documentation is there to be read. If you feel like doing it, read this post and then go here, where you can find complete information about JES installations. You should read it carefully while planning your installations, upgrades and uninstallations.

Here's the long story short.

How the JES installer works

The JES installer is a utility that eases the installation and configuration of a set of server-side products which are bundled together. The installer also takes care of the interdependencies between products, which may be complex, with various preinstallation and postinstallation procedures. The JES installer for Solaris uses the usual operating system package management system to deploy packages on a host. For the same reason, the JES installer also provides an uninstallation utility which should be used when removing JES components, instead of removing packages with other means such as pkgrm or prodreg.

JES installation utilities

The JES installer can be found in the directory which corresponds to the platform you're installing on, and it's called installer. This is the program you'll launch when installing JES for the first time. If you invoke it with no arguments, a GUI will be displayed and you will be able to choose the packages you need and perform your installation.

Even if having a GUI gives you the idea that everything will be managed for you, that's not the case. Read the documents before performing the installation of any component.

Patching installer

If JES has already been installed or if you need to patch the installer itself, you will find another (packaged) copy of the installer in the /var/sadm/prod/sun-entsys5u1i/Solaris_{x86,sparc} directory. Once the JES installer has been patched, that's the copy that should be used when installing or modifying the current installation.

JES uninstaller

Once the JES installer has installed some of the products of the JES distribution, you will find the uninstallation utility in /var/sadm/prod/SUNWentsys5u1. The uninstaller, like the installer, can be run in graphical, text or silent mode. Due to the complexity of the relationships between the components, an uninstallation should be carefully planned, too. One more time: read the docs.
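
If I read the JES documentation correctly, the invocation looks roughly like this; treat the binary name and the -nodisplay flag (text mode, omit it for the GUI) as assumptions to be checked against the docs for your JES version:

# cd /var/sadm/prod/SUNWentsys5u1
# ./uninstall -nodisplay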

Uninstaller limitations

The JES uninstaller has some limitations, including the following:
  • it only uninstalls software installed with the installer
  • it does not remove shared components
  • it does not support remote uninstallations
  • some uninstallation behavior depends on the components being removed and is not limited to data or configuration files.
  • it does not unconfigure the Access Manager SDK on the web server.

GPT protective partition after moving a removable disk from a zpool to Windows

In an emergency, I needed a USB disk to back up some data from a Windows XP PC and the only disk I could grab was a LaCie disk used as a cache device in a zpool. No problem removing it from the pool and formatting it again, thought I.

The problem is that when I plugged the disk into the troubled Windows machine and went to the Logical Disk Manager to format it, I discovered that the partition appeared as a GPT protective partition, and you cannot do anything to it.

Cleaning a GUID Partition table

Long story short: having been part of a zpool, that disk was labeled with an EFI label. EFI adds support for GUID partition tables, and that (partially) explained the behavior of the Logical Disk Manager. I didn't even imagine that Windows XP would know about EFI or GUID partition tables, and I understand why it didn't let me touch that disk. What I didn't know was how to clean the disk to be able to format it with NTFS. Fortunately, Google always helps those who know what to search for, and I discovered the existence of a command line utility called diskpart. With diskpart it was easy to clean the disk and format it again. The instructions are straightforward (a sample session follows the list):
  • open a command prompt
  • launch diskpart
  • list disks with the list disk command
  • select the disk you want to clean with the select disk command
  • clean the disk with the clean command
That's it.
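
For the record, the whole session boils down to a handful of commands. Here is a minimal sketch, assuming the LaCie disk shows up as disk 1 (double check the output of list disk before cleaning, because clean wipes the partition table of whatever disk is selected):

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> clean
DISKPART> exit

After that, the disk can be initialized and formatted with NTFS from the Logical Disk Manager as usual.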

Installing Solaris Express Community Edition build 104

Introduction

After a long time with / on the venerable UFS file system, I decided to spare a weekend and dedicate it to backing up my data and reinstalling Solaris from scratch, (finally!) getting a ZFS root file system. My experience with ZFS has been so good that I simply could no longer bear the clumsiness of partitioning and slicing. The available Solaris flavors were the following:
  • Solaris 10 10/08
  • Solaris Express Community Edition
  • OpenSolaris 2008.11
I keep excluding OpenSolaris for the reasons I have detailed many times: I cannot renounce some of Sun's proprietary goodies which are not installed by default on OpenSolaris. I'm speaking about StarOffice and the Sun developer tools bundled with Solaris. Since this machine is a workstation, I decided to go for Solaris Express.

Choosing a ZFS root pool

Both Solaris 10 and Solaris Express let you use ZFS for the root pool, but only if you install with the text-based installer, something that may not be obvious to whoever doesn't read the installation documentation. The installer lets you choose the drives and builds the ZFS pool for you. In this pool, two other ZFS volumes will be created: one for swap and one for dump. Within the root pool, you can also decide whether /var should reside on the / file system or be a separate one.
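
Just to give an idea of what the installer produces, the resulting layout on my machine looks more or less like the sketch below (dataset names, the snv_104 boot environment name and the mountpoints are only indicative and will differ on your system):

# zfs list -o name,mountpoint
NAME                MOUNTPOINT
rpool               /rpool
rpool/ROOT          legacy
rpool/ROOT/snv_104  /
rpool/dump          -
rpool/export        /export
rpool/export/home   /export/home
rpool/swap          -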

Zones on ZFS

This was a pleasant surprise. Before creating the zones I needed, I created a ZFS file system named /zones and was preparing another set of ZFS file systems below /zones, one for each zone I was going to create. When I installed the first zone, because of a typo in the path I had configured for the zone, I realized from the zoneadm console output that it was creating a ZFS file system for me! No further configuration needed. Great.
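
In other words, if the zonepath you configure lives on a ZFS file system, zoneadm takes care of creating a dedicated dataset for the zone. A minimal sketch, assuming a hypothetical zone called webzone and the /zones file system described above:

# zonecfg -z webzone
zonecfg:webzone> create
zonecfg:webzone> set zonepath=/zones/webzone
zonecfg:webzone> commit
zonecfg:webzone> exit
# zoneadm -z webzone install

On my machine, the install step is where the ZFS file system for the zone gets created automatically.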

Homes on ZFS

The installation creates a ZFS file system for locally hosted home directories, which is mounted as usual on /export/home. When I create a user, I create a ZFS file system for it and mount it on /export/home/username. This way I can, for example, control per-user quotas and snapshots.
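
For example, assuming the root pool has the default name rpool and a hypothetical user called john, the whole thing takes two commands (the child file system inherits its mountpoint from rpool/export/home, so it ends up under /export/home/john automatically):

# zfs create rpool/export/home/john
# zfs set quota=10G rpool/export/home/john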

ZFS is a great step forward even on desktop installations

I'm not going to blog here about ZFS advantages: they deserve much more space than this. Nevertheless, it's worth mentioning that even in a simple desktop install, the ZFS benefits are significant:
  • user disk quotas: having a ZFS file system for every user makes quota management very easy.
  • backing up: with separate file systems for /export/home and for every user, snapshotting and backing up individual home directories is straightforward (see the sketch after this list).
  • sharing: sharing a ZFS file system is straightforward too, just set the ZFS property sharenfs to on.
  • time slider: I haven't tried this feature (yet), but it seems great. It's what I had been doing with custom scripts for quite a while, only much cooler (not least because it's SMF-managed and has a configuration GUI).
  • no partitions: a beginner is not going to plan an installation to determine the necessary partitions/slices and their sizes. A zpool on an entire device is a great way to use all of the available space on your disk(s) without worrying about file system layouts.
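
As a concrete flavour of the snapshot and sharing points above, here is a minimal sketch, again assuming the hypothetical rpool/export/home/john file system from the previous section:

# zfs snapshot rpool/export/home/john@before-cleanup
# zfs set sharenfs=on rpool/export/home/john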

Just one installation glitch

The SUNWiwh package does not install properly because something in the package seems to be malformed. I'm installing on a Sun Ultra 24 and Wi-Fi is not an issue for me, but it is something to consider if you need that functionality.

How to determine if your CPU is HVM-capable

Some time ago I installed Solaris Express Community Edition because I wanted to try Sun xVM to run a couple of Windows 2003 Server domains. I knew that the CPU of my Sun Ultra 20 M2, an AMD Opteron 1214, was HVM-capable, but strangely virt-install was reporting it as not. The documentation stated that invoking virt-install without arguments on an HVM-capable machine should ask whether the domain that's going to be created is for a fully virtualized guest.

Googling around for an explanation, I hit the following blog post: Detecting Hardware Virtualization support for xVM. The blogger posts a small C program to check for HVM support. I paste the program below; it should be run when the system is not running the hypervisor.

/*
* CDDL HEADER START
*
* The contents of this file are subject to the terms of the
* Common Development and Distribution License (the "License").
* You may not use this file except in compliance with the License.
*
* You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
* or http://www.opensolaris.org/os/licensing.
* See the License for the specific language governing permissions
* and limitations under the License.
*
* When distributing Covered Code, include this CDDL HEADER in each
* file and include the License file at usr/src/OPENSOLARIS.LICENSE.
* If applicable, add the following below this CDDL HEADER, with the
* fields enclosed by brackets "[]" replaced with your own identifying
* information: Portions Copyright [yyyy] [name of copyright owner]
*
* CDDL HEADER END
*/

/*
* Test to see if Intel VT-x or AMD-v is supported according to cpuid.
*/
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <stdio.h>
#include <ctype.h>


static const char devname[] = "/dev/cpu/self/cpuid";

#define EAX 0
#define EBX 1
#define ECX 2
#define EDX 3

int
main(int argc, char **argv)
{
    int device;
    uint32_t func;
    uint32_t regs[4];
    uint32_t v;
    int r;
    int bit;
    int nbits;

    /*
     * open cpuid device
     */
    device = open(devname, O_RDONLY);
    if (device == -1)
        goto fail;

    func = 0x0;
    if (pread(device, regs, sizeof (regs), func) != sizeof (regs))
        goto fail;

    if (regs[EBX] == 0x68747541 &&
        regs[ECX] == 0x444d4163 &&
        regs[EDX] == 0x69746e65) { /* AuthenticAMD */

        func = 0x80000001;
        r = ECX;
        bit = 2;
        nbits = 1;

    } else if (regs[EBX] == 0x756e6547 &&
        regs[ECX] == 0x6c65746e &&
        regs[EDX] == 0x49656e69) { /* GenuineIntel */

        func = 1;
        r = ECX;
        bit = 5;
        nbits = 1;

    } else {
        goto fail;
    }

    if (pread(device, regs, sizeof (regs), func) != sizeof (regs))
        goto fail;

    v = regs[r] >> bit;
    if (nbits < 32 && nbits > 0)
        v &= (1 << nbits) - 1;

    if (v)
        printf("yes\n");
    else
        printf("no\n");

    (void) close(device);
    exit(0);

fail:
    printf("no\n");
    (void) close(device);
    exit(1);
}
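
The program has no external dependencies, so building and running it is trivial. A quick sketch (the hvmtest.c file name is just my choice; either the bundled Sun Studio cc or gcc will do), with the program printing yes on an HVM-capable CPU and no otherwise:

# cc -o hvmtest hvmtest.c
# ./hvmtest
yes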

Running Windows on Sun xVM with Solaris Express Community Edition

Introduction

I'm waiting to see Sun xVM Server. It's a long wait, and I hope to see it soon. I've been using Sun xVM VirtualBox for quite a while and I'm really happy with it. The underlying technology is completely different, but I'm hopeful that Sun xVM Server's quality will be at least on par.

Recently, I needed to set up an Active Directory for development purposes, and while part of the team is happily running its .NET development stack on top of a Sun xVM VirtualBox Windows guest, a quick test drive showed us that running a server on top of Sun xVM VirtualBox wasn't practical at all. That's why I set up a Solaris Express Community Edition machine: not only did I want to test Sun xVM, I needed it.

All the commands shown in this post were executed on Solaris Express Community Edition build 103. Be aware that the output and the commands themselves are not stable yet.

The feeling of running unsupported software

It's unpleasant. Solaris Express Community Edition is rock solid: I'm using it on many machines and it has never let me down. But running a critical component on such experimental technology was something I wanted to avoid. That's the rationale behind using Solaris 10 even on our development machines, where I'm sure Solaris Express Community Edition (or even OpenSolaris now) would do the job just fine; the developers would probably be happier, too.

I waited months hoping that Sun would release Sun xVM Server just in time for us to stay on schedule, but project deadlines pushed me to deploy Solaris Express Community Edition instead. The documentation is not as up-to-date or as easily retrievable as it is for Solaris 10, but with a little help from Google, and especially from a Sun white paper, Install Sun xVM Hypervisor and Use It to Configure Domains, setting up Windows 2003 Server guests was not that hard.

Setting up Windows

Setting up Windows 2003 Server was not as straightforward as I thought. The first few times I tried, I got stuck on a CD-related problem, and that's where the Sun white paper cited above really helped me a lot.

Checking up the system

The first thing to check is whether xVM is installed and running. The following command should produce output like this:

# /usr/bin/pkginfo | grep SUNWxvm
system SUNWxvmdomr Hypervisor Domain Tools (Root)
system SUNWxvmdomu Hypervisor Domain Tools (Usr)
system SUNWxvmh Hypervisor Header Files
system SUNWxvmhvm Hypervisor HVM
system SUNWxvmipar xVM PV IP address agent (Root)
system SUNWxvmipau xVM PV IP address agent (Usr)
system SUNWxvmpv xVM Paravirtualized Drivers
system SUNWxvmr Hypervisor (Root)
system SUNWxvmu Hypervisor (Usr)

Once logged in to the system running the hypervisor, make sure the relevant services are running:

# /usr/bin/svcs | grep xvm
[...]
online 0:15:07 svc:/system/xvm/console:default
online 0:15:08 svc:/system/xvm/xend:default
online 0:15:08 svc:/system/xvm/domains:default
[...]
online 0:15:09 svc:/system/xvm/store:default

The default network interface

Unless instructed otherwise, the hypervisor will use the first available NIC when setting up the network for its guests:

# dladm show-link
LINK CLASS MTU STATE OVER
e1000g0 phys 1500 up --

To specify the desired NIC for the guests, you can set xend service's config/default-nic property:

# /usr/sbin/svccfg -s xend 'setprop config/default-nic = astring: "yourNIC"'

and then refresh and restart the service with svcadm.
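
A minimal sketch of the whole sequence, assuming the instance is svc:/system/xvm/xend (the same xend service shown by svcs above, spelled out in full here) and that the desired NIC is e1000g0:

# /usr/sbin/svccfg -s xvm/xend 'setprop config/default-nic = astring: "e1000g0"'
# /usr/sbin/svcadm refresh xvm/xend
# /usr/sbin/svcadm restart xvm/xend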

Assigning space on the disk

You can assign space to your guest either by carving it out of a zpool or by using a regular file. Using ZFS is undoubtedly easier, but I had no zpool available on that machine, so I had to set up a regular file on UFS:

# mkfile 20g file-path

I created two 20 GB files which I used as disks for the new guests.
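
Had a zpool been available, I would probably have carved a ZFS volume out of it instead of using mkfile. A sketch of what that would look like, assuming a hypothetical pool called tank:

# zfs create -V 20g tank/winsrv2003-disk

The volume would then be available to the guest through its device node under /dev/zvol/dsk/tank.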

Installing the guest OS

From here on, I simply suggest you follow the instructions in the white paper I linked above. Everything went as planned, including the glitches during the Windows 2003 Server installation described in that document.

Running the guest

Basic commands

In this xVM version you still have to use both virsh and xm to get a complete set of administrative commands for managing your guests. In the future virsh should completely replace xm, but that's not yet the case.

Booting and shutting down a domain

To boot and to shut down a domain, you can use the following commands:

# virsh start [domain-name]
# virsh shutdown [domain-name]

In the case of Windows, I still prefer to connect to it and shut it down from its GUI.

Rebooting a guest

A guest may also be rebooted directly with the following command:

# virsh reboot [domain-name]

Suspending and resuming a guest

To suspend and subsequently resume a guest you can use the following commands:

# virsh suspend [domain-name]
# [...]
# virsh resume [domain-name]

Dumping a domain configuration

To dump the domain configuration, in case you need to examine or modify it, you can use the following command (virsh will write it to standard output):

# virsh dumpxml [domain-name]

Loading a domain configuration

If you previously dumped and modified a domain configuration, you can redefine the domain using this command:

# virsh define [domain-configuration]
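
Putting the last two commands together, the typical dump, edit and redefine round trip looks more or less like this (the winsrv2003 domain name and the file path are just examples):

# virsh dumpxml winsrv2003 > /tmp/winsrv2003.xml
# vi /tmp/winsrv2003.xml
# virsh define /tmp/winsrv2003.xml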

Determining the VNC display

To determine the display that VNC is using for a particular domain, you can use the following command:

# virsh vncdisplay [domain-name]
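
The command prints something like :0, which you can then feed to your VNC client of choice. For example, with vncviewer running on dom0 itself (the domain name is again just an example):

# virsh vncdisplay winsrv2003
:0
# vncviewer localhost:0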

Examining existing domains

To determine the status of every existing domain, you can use the following command:

# virsh list --all
Id Name State
---------------------------------
0 Domain-0 running
2 winsrv2003 blocked

Domain winsrv2003 is listed in a blocked state. This usually means that the domain is not currently running on a CPU because it is idle or waiting for I/O.

Block device related commands

The following commands are used to manage guests' block devices.

Mounting a CDROM

To mount a CD-ROM on a guest you can either attach the physical device directly or attach an ISO image of the medium. The general form of the command is:

# xm block-attach [domain-name] [backend-device] [frontend-device] [mode]

For example:

# xm block-attach [domain-name] phy:[path-to-device] hdb:cdrom r
# xm block-attach [domain-name] file:[path-to-device] hdb:cdrom r
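
For instance, to attach a hypothetical ISO image stored under /export/isos to the winsrv2003 domain as a read-only CD-ROM:

# xm block-attach winsrv2003 file:/export/isos/w2k3sp2.iso hdb:cdrom r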

Checking for device status

To check the block device status you can use the following command:

# xm block-list [domain-name] --long
(768
((backend-id 0)
(virtual-device 768)
(device-type disk)
(state 1)
(backend /local/domain/0/backend/vbd/11/768)
)
)
(5632
((backend-id 0)
(virtual-device 5632)
(device-type cdrom)
(state 1)
(backend /local/domain/0/backend/vbd/11/5632)
)
)

Unmounting a device

After finding the device ID with the block-list command described in the previous section, you can use the following command to detach a device:

# xm block-detach [domain-name] [device-id] -f

Before detaching a block device, the device should be unmounted and ejected in the guest OS.
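
For example, to detach the CD-ROM that shows up with virtual device ID 5632 in the block-list output above (the domain name is again an example):

# xm block-detach winsrv2003 5632 -f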

Impressions

The Windows guest runs pretty well, even if the machine seems slower while running the xVM kernel. Moreover, I suspect there may be some sort of memory leak, because as time passes vmstat shows free memory and free swap approaching zero and the machine does indeed start swapping to disk.

For example, I have been running two Windows 2003 Server guests for a couple of SXCE build iterations, and on build 103 I still see the same problem. The machine is a Sun Ultra 20 M2 with 8 GB of RAM. Both domUs were given 1 GB by setting both mem-set and mem-max. The problem shows up in the same way even if I boot only one domU. When I boot the Windows domU everything is fine and memory usage seems reasonable: dom0 has more or less 6 GB of dedicated memory and an unlimited mem-max. As time goes by, free memory goes down, free swap goes down, the machine begins to swap to disk, and eventually I have to reboot. The effect is pretty clear with vmstat: free memory drops to roughly 100 MB, free swap drops too, and the machine starts slowing down.

I found the description of this bug in the Solaris Express release notes:

xVM Hypervisor Running Out of Memory

When running some non-Solaris domUs, you could encounter an issue where xVM hypervisor runs out of memory. This will generally be reflected by error messages generated to the dom0 console, in some cases in such high quantities that a reboot of the dom0 might be required to recover.

To avoid this, it is suggested that when running a non-Solaris domU, you manually balloon the amount of memory used by dom0 down to a smaller amount before booting the domU.

For example, if the dom0 is using 3500Mb, which can be determined via the xm list command, you would issue the following command to reduce its memory usage to 3000Mb:

xm mem-set Domain-0 3000

This should not be necessary when using a build-81 based dom0, or later.

This bug seems to explain the behavior I'm experiencing, but it shouldn't apply, because I'm running build 103 while the bug affects builds earlier than 81.

Other glitches

I experienced problems mounting and ejecting ISO images on the Windows 2003 Server CD-ROM. Indeed, up to Solaris Express Community Edition build 103, I was hitting bug 6749195: empty CD-ROM disappears from HVM domains. And when it disappeared, you had to reboot the domain. This made xVM unusable in production, at least if you needed the CD-ROM even only from time to time.

On builds 103 and 104 I'm still noticing instabilities in xm block-attach behavior, and I prefer using xm block-configure even when mounting an ISO image on an empty CD-ROM drive.
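
A sketch of how I typically swap the medium with block-configure, whose arguments mirror those of block-attach (the ISO path and domain name are just examples):

# xm block-configure winsrv2003 file:/export/isos/w2k3sp2.iso hdb:cdrom r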

Conclusions

Overall, I must say I'm pretty happy with xVM as I have tried it on Solaris Express Community Edition since build 103. Nevertheless, as I said in the opening, the feeling of running unsupported software isn't that good when part of your business relies on it. I still miss the performance, the stability and the ease of use of Sun xVM VirtualBox, but I hope they'll find their way into Sun xVM Server.