
Sunday, 14 November 2010

OpenSolaris (and OpenIndiana) Spends 50% of CPU Time in Kernel

A couple of days ago my client decided to prepare some new Java EE development environments and, when asked which OS to choose, I suggested that he give Solaris a try: since my client's production servers run Solaris 10, he would benefit from a more homogeneous set of environments.

We installed a couple of test machines, one with Solaris 10 and another with OpenSolaris 2009.06, and began installing the development environments and the required runtime components. The installation packages were SVR4: installation was straightforward on Solaris 10, while on OpenSolaris we had to resolve a couple of glitches. After a couple of days, test users were inclined towards OpenSolaris, mostly because of its newer desktop environment: we installed the remaining machines and started upgrading OpenSolaris to the latest dev release (b134).

Reduced Performance: CPU Time in Kernel When Idle 

The latest OpenSolaris dev release (b134) has some known issues, but they didn't concern me: I had already fought them in the past and they can easily be resolved.

The surprise was discovering that all of the upgraded machines were affected by another problem: as soon as users rebooted into their b134 boot environment, the performance of the machine was noticeably worse than with the older (b111) boot environment.

prstat showed no misbehaving process, while vmstat indicated that the system was spending a constant 50% of its time in the kernel. A quick search turned up this bug:


Repeating the steps outlined in the bug discussion confirmed that we were hitting the same bug. We thus disabled cpupm in /etc/power.conf and the problem disappeared.
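
For reference, here's a minimal sketch of the change we made (your power.conf will likely contain other entries; see power.conf(4) for the syntax):

[/etc/power.conf]
# Work around the bug by disabling CPU power management
cpupm disable

Then reapply the power configuration:

# pmconfig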

Upgrading to OpenIndiana

Although the bug is still listed as ACCEPTED, we decided to give OpenIndiana a try and upgraded a machine following the upgrade path from OpenSolaris b134. The upgrade went smoothly and in no time we were rebooting into OpenIndiana b147.

The cpupm bug is still there, though. Nevertheless, this is a great opportunity for my client to test drive OpenIndiana and decide if it fits his needs. For now, users will notice almost no differences between OpenSolaris and OpenIndiana (except for the branding.) As time goes by, we'll discover if and when Oracle will put sources back into OpenSolaris, or whether OpenIndiana is destined to diverge from its stepbrother.



Thursday, 3 June 2010

Installing Sun Ray Server Software on OpenSolaris 2009.06

Overview

Yesterday I received my first Sun Ray client and was looking forward to trying it. The Sun Ray client is a display device which can be connected to a remote OS instance in basically two ways:
  • Using the Sun Ray Server Software.
  • Using Sun Virtual Desktop Infrastructure.

The Sun Ray client I'm using is a Sun Ray 2, which is a very low power device (about 4W), equipped with the following ports:
  • 1 DVI port
  • 1 serial port
  • 1 Ethernet RJ45 port

Sun Virtual Desktop Infrastructure (VDI) is a connection broker which gives Sun Ray devices access to a supported operating system, such as Sun Microsystems' Solaris, Linux or Microsoft Windows, which must be executed by one of the virtualization technologies supported by VDI:
  • VirtualBox.
  • VMWare.
  • Microsoft Hyper-V.

Sun Ray Server Software (SRSS), on the other hand, is server software available for Solaris and Linux which gives Sun Ray clients remote access to a UNIX desktop session. Since we're only running Solaris on our machines, Sun Ray Server Software and Sun Rays are the quickest way to provide a low-cost, effective desktop to all of our users.

Prerequisites

SRSS's Solaris prerequisites are very simple: Solaris 10 05/09 or newer. That's where the fun begins: Sun Ray Server Software is supported on Solaris 10 but not (yet) on OpenSolaris. We'll be running Solaris 10 in our production environment but, for this proof of concept, I tried to use an already existing OpenSolaris virtual machine (OSOL 2009.06 upgraded to b134 from /dev) running on VirtualBox on a Mac OS X host. Taking into account the problems I've had in the past trying to run software supported on Solaris 10 (such as Sun Java Enterprise System) on OpenSolaris, I seriously considered installing a Solaris 10 VM and getting rid of all those problems, which are a direct consequence of the great job the OpenSolaris developers are doing (especially package refactoring and changes to Xorg installation paths.) In the end, curiosity killed the cat, and now SRSS is running on OpenSolaris.

Installation Steps

First of all, download SRSS. You'll just need the following files:
  • Sun Ray Server Software 4.2.
  • Sun Ray Connector for Windows Operating System (only if you want to connect your Sun Ray client to a Windows operating system instance.)

Unzip SRSS on a temporary location on your Sun Ray server:

# unzip srss_4.2_solaris.zip

I will refer to this path as $SRSS from now on.

Bundled Apache Tomcat for the Sun Ray Admin GUI

If you plan to use the Sun Ray Admin GUI you should install a suitable web container (Servlet 2.4 and JSP 2.0 are required.) SRSS is bundled with Apache Tomcat which you can use to run the Admin GUI:

# cd $SRSS/Supplemental/Apache_Tomcat
# gtar -xvv -C /opt -f apache-tomcat-5.5.20.tar.gz

Please note that GNU tar is required to extract the Apache Tomcat bundle.

Since the default Apache Tomcat installation path used by SRSS is /opt/apache-tomcat you'd better make a symlink to your Apache Tomcat installation path:

# ln -s /opt/apache-tomcat-5.5.20 /opt/apache-tomcat

Installing SRSS

To launch the installation script for SRSS just run:

# $SRSS/utinstall

The script will just ask you a few questions. Be ready to provide the following:
  • JRE installation path.
  • Apache Tomcat installation path.

From now on, the SRSS installation path (/opt/SUNWut) will be referred to as SRSS_INST.

Upon script termination, you're required to restart your Sun Ray server:

# init 6

Planning Your SRSS Network Topology

The first thing you've got to do is define your network topology. SRSS can be configured with or without a separate DHCP server, on private and shared networks, etc. The official SRSS documentation can give you hints on how to configure your server if you're in doubt. For the sake of this proof of concept, I'll choose the simplest network topology: a shared network with an existing DHCP server. For alternate configurations, please have a look at the official SRSS documentation.

To configure SRSS on a shared network using an external DHCP server all you've got to do is:

# $SRSS_INST/sbin/utadm -L on
# $SRSS_INST/sbin/utrestart

On OpenSolaris some required Solaris 10 packages were missing and the installation scripts correctly informed me about the situation. The missing packages can be installed with pkg:

# pkg install SUNWdhcs SUNWdhcsb SUNWdhcm

Configuring SRSS

SRSS has got an interactive configuration script which can be run to establish the initial SRSS configuration:

# $SRSS_INST/sbin/utconfig

Please take into account that the script will ask, amongst others, the following questions:
  • SRSS admin password.
  • Configuration parameters for the Admin GUI:
    • Tomcat path.
    • HTTP port.
    • HTTP admin port.
  • Whether you want to enable remote administration.
  • Whether you want to configure kiosk mode:
    • Kiosk user prefix.
    • Kiosk users' group.
    • Number of users.
  • Whether you want to configure a failover group.

To enable the use of GDM by SRSS you'll need to touch the following file:

# touch /etc/opt/SUNWut/ut_enable_gdm

Synchronize the Sun Ray DTU Firmware

The last step in the configuration process is synchronizing the Sun Ray DTU firmware:

# $SRSS_INST/sbin/utfwsync

SRSS Up and Running on Solaris 10

Solaris 10 configuration ends here and SRSS should now be up and running. In the next section I'll detail the workarounds needed to fix the quirks I've found while configuring SRSS on OpenSolaris.

Additional Configuration for OpenSolaris 2009.06 or >b134

As soon as I configured SRSS, I plugged in my Sun Ray client to see if it would work correctly. The Sun Ray client was discovering the SRSS server correctly but then hung with a 26 B error code. The SRSS logs reported that the GDM session was dying almost immediately after startup. So, there was a problem with GDM.

Fixing Bug 6803899

There's a known bug that affects $SRSS_INST/lib/utdtsession. Open it with vi and replace awk with nawk:

< tid=$(awk -F= '$1 == "TOKEN" {print $2;exit}' ${DISPDIR}/${dpyparm})
> tid=$(nawk -F= '$1 == "TOKEN" {print $2;exit}' ${DISPDIR}/${dpyparm})

NWAM

OpenSolaris has a new SMF-managed service, called NWAM, which autoconfigures the network physical layer. Using SRSS with NWAM (and with other server software as well) can be quirky. I suggest you disable NWAM and fall back to manual network configuration, as shown below. More details can be found in the official OpenSolaris documentation.
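
If you go down that path, the switch boils down to swapping two SMF services (FMRIs as of b134; check svcs on your build), followed by the usual manual setup of /etc/hostname.<interface>, /etc/defaultrouter and friends:

# svcadm disable svc:/network/physical:nwam
# svcadm enable svc:/network/physical:default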

Motif

OpenSolaris does not ship with the Motif libraries (and their dependencies) required by SRSS. You can either ignore them and set up a policy accordingly:

# $SRSS_INST/sbin/utpolicy -a -g -z both -D
# $SRSS_INST/sbin/utrestart -c

or proceed and install the missing packages:

# pkg install SUNWmfrun SUNWtltk SUNWdtbas

Since this is a proof of concept I'm not going to use features such as mobility. Nevertheless, I wanted to try and install Motif to see if additional problems would surface.

Fixing GDM

As I said at the beginning of this section, the SRSS logs were indicating some kind of problem with GDM. If you're following OpenSolaris' evolution, you'll know that both Xorg and GDM have undergone major changes and now differ notably from their Solaris 10 "parents". The first error showing up in the GDM logs, which can be found in /var/log/gdm, was a complaint about missing fonts:

Fatal server error:
could not open default font 'fixed'

Font locations, indeed, have changed considerably in the latest OpenSolaris builds. To fix this you have to create a file called /etc/opt/SUNWut/X11/fontpath reflecting the correct font paths on your system. On OpenSolaris b134 these paths are the following:

/usr/share/fonts/X11/100dpi
/usr/share/fonts/X11/100dpi-ISO8859-1
/usr/share/fonts/X11/100dpi-ISO8859-15
/usr/share/fonts/X11/75dpi
/usr/share/fonts/X11/75dpi-ISO8859-1
/usr/share/fonts/X11/75dpi-ISO8859-15
/usr/share/fonts/X11/encodings
/usr/share/fonts/X11/isas
/usr/share/fonts/X11/misc
/usr/share/fonts/X11/misc-ISO8859-1
/usr/share/fonts/X11/misc-ISO8859-15
/usr/share/fonts/X11/Type1

After fixing the font paths, GDM complained about missing dependencies for the following libraries: libXfont and libfontenc. Although this is not the "Solaris way" of doing things, a quick and dirty solution was making symlinks to the missing dependencies in /usr/lib:

# cd /usr/lib
# ln -s xorg/libXfont.so
# ln -s xorg/libXfont.so.1
# cd amd64
# ln -s ../xorg/amd64/libXfont.so
# ln -s ../xorg/amd64/libXfont.so.1
# cd /usr/lib
# ln -s xorg/libfontenc.so
# ln -s xorg/libfontenc.so.1
# cd amd64
# ln -s ../xorg/amd64/libfontenc.so
# ln -s ../xorg/amd64/libfontenc.so.1

The last thing to do is fix a problem with the gdmdynamic syntax in $SRSS_INST/lib/xmgr/gdm/remove-dpy:

< gdmglue="; gdmdynamic -b -d "'$UT_DPY'
> gdmglue="; gdmdynamic -d "'$UT_DPY'

Done.

Now your Sun Ray clients should be able to connect to your SRSS running on OpenSolaris (at least on b134.) As you can see in the following picture, there's my MacBook with a virtualized OpenSolaris (b134) acting as a Sun Ray server, the Sun Ray 2 client, and the virtualized desktop on the screen behind the MacBook.


Have fun!




Monday, 24 May 2010

Upgrading OpenSolaris to the Latest Build from the dev Repository

At home I'm still running Solaris Express Community Edition. I was waiting for OpenSolaris 2010.03 to be released to perform a major upgrade of my workstation: months have passed and we're still waiting for it. Since I'm going to replace my SATA drive with a new SAS one, I could even try and go with OpenSolaris, but I'd have to upgrade it from the /dev repository since some of my ZFS pools are running versions which are unsupported by the 2009.06 release.

My earliest OpenSolaris test drives were pretty satisfactory as far as the OS "feeling" is concerned. I really liked 2008.11 and, although it took some time to get accustomed to the IPS repository (mostly a psychological issue), I like the direction it took. Unfortunately, SXCE was far more solid than OpenSolaris and, moreover, I was having trouble with some Sun products (such as the Java Enterprise System) which I needed to work.

Since then, and since SXCE's discontinuation, I've been waiting for the next stable release before upgrading my system. This weekend I had some spare time and decided to give the latest OpenSolaris build a try. I downloaded VirtualBox for Mac, installed it and ran the OpenSolaris 2009.06 installation. Once it finished, the first thing I did was disable the splash screen.

Disabling the Splash Screen

To disable the OpenSolaris splash screen during boot you have to edit /rpool/boot/grub/menu.lst and remove the following fragments:

[...snip...]
... ,console=graphics
splashimage = ...
foreground = ...
background = ...

Pay attention to remove just the ,console=graphics fragment and not the entire kernel line: failing to do so will result in an unbootable system.
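
As an illustration only, a kernel line before and after the edit could look like the following (the exact kernel path and boot arguments depend on your installation):

< kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=graphics
> kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS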

Upgrading to /dev

Once I modified the menu.lst file I changed the package repository to point to http://pkg.opensolaris.org/dev/:

# pkg set-authority -O http://pkg.opensolaris.org/dev/ opensolaris.org

and run an image update:

# pkg image-update -v

The new packaging system is working far better than I remembered. Unfortunately, it still seems pretty slow, especially when compared to similar packaging systems such as Debian's. After a couple of hours, build 134 (snv_134) was installed and I rebooted into the new boot environment.

There's no need to examine change logs to notice that, almost one year after OpenSolaris 2009.06 was released, many things have changed. Although I already considered Nimbus the most beautiful GNOME theme out there, there was room for improvement and the OpenSolaris guys have done a great job.

Minor Problems

Missing xfs Service

During the first boot I noticed an error from the Service Management Facility about a missing service, xfs. This is just a manifestation of bug 11602 and it only affected the first boot after the upgrade.

Xorg Fails to Start

A more serious problem was Xorg failing to start. After rebooting into the new boot environment the system was unable to start the graphical login session and kept dropping down to the console login. Long story short: the /etc/X11/xorg.conf file that was present on the system had some invalid paths in it which were preventing Xorg from starting correctly. Since Xorg usually detects the computer configuration correctly, I just deleted the file and Xorg came up happily.

.ICEAuthority Could Not Be Found (A.K.A.: gdm User Has Changed its Home)

As soon as Xorg started, a popup appeared complaining about a missing .ICEAuthority file. That's another misconfiguration to correct, but harder to find: you're running into the following bug:

13534 "Could not update ICEauthority file /.ICEauthority" on bootup of build 130
http://defect.opensolaris.org/bz/show_bug.cgi?id=13534

The gdm user's home directory was reported as / by /etc/passwd. I just changed it to where it belongs and all problems were solved:

# usermod -d /var/lib/gdm gdm

Malfunctioning Terminals

Another problem you might find is the following:

12380 image-update loses /dev/ptmx from /etc/minor_perm
http://defect.opensolaris.org/bz/show_bug.cgi?id=12380

The workaround is the following:
  • Reboot into the working boot environment.
  • Execute the following commands:

$ pfexec beadm mount your-BE /mnt
$ pfexec sh -c "grep ^clone: /etc/minor_perm >> /mnt/etc/minor_perm"
$ pfexec touch /mnt/reconfigure
$ pfexec bootadm update-archive -R /mnt
$ pfexec beadm unmount your-BE

Waiting for the Next Release

So far, OpenSolaris snv_134 is as great a Solaris as ever. I wouldn't mind running it on my workstation now. I'll patiently wait a bit longer just in case: I surely prefer running stable versions on some machines. However, OpenSolaris now seems as stable as SXCE was and I think it's an operating system that deserves the attention of any user running other *NIX flavors on their home workstation.

Wednesday, 19 May 2010

Setting up PostgreSQL on Solaris

PostgreSQL is bundled with Solaris 10 and is available from the primary OpenSolaris IPS repository.

To check if PostgreSQL is installed in your Solaris instance you can use the following command:

$ svcs "*postgres*"
STATE          STIME    FMRI
disabled       Feb_16   svc:/application/database/postgresql:version_81
disabled       16:11:25 svc:/application/database/postgresql:version_82

Install Required Packages

If you don't see any PostgreSQL instance on your Solaris box then proceed and install the following packages (the list may actually change over time; an installation sketch follows the list):
  • SUNWpostgr
  • SUNWpostgr-contrib
  • SUNWpostgr-devel
  • SUNWpostgr-docs
  • SUNWpostgr-jdbc
  • SUNWpostgr-libs
  • SUNWpostgr-pl
  • SUNWpostgr-server
  • SUNWpostgr-server-data
  • SUNWpostgr-tcl
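
On Solaris 10 these are SVR4 packages which you can add with pkgadd; the sketch below assumes the installation media is mounted at /cdrom/cdrom0 (on OpenSolaris you would use pkg install with the corresponding IPS package names instead):

# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWpostgr SUNWpostgr-libs SUNWpostgr-server SUNWpostgr-server-data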

Check if PostgreSQL SMF Services are Configured

After installation, SMF services should be listed by (the output may depend on the actual PostgreSQL version you installed):

$ svcs "*postgres*"
STATE          STIME    FMRI
disabled       Feb_16   svc:/application/database/postgresql:version_81
disabled       16:11:25 svc:/application/database/postgresql:version_82

On Solaris, PostgreSQL is managed by the SMF framework. If you're curious you can check the service manifest at /var/svc/manifest/application/database/postgresql.xml and the service methods at /lib/svc/method/postgresql. Many important parameters are stored in the service configuration (see postgresql.xml): if you want to change some of them (such as the PostgreSQL data directory) you must use svccfg to edit the service configuration.
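
For example, here's a sketch of relocating the data directory (the postgresql/data property name matches the bundled 8.2 manifest; verify it with svcprop on your system):

# svccfg -s postgresql:version_82 setprop postgresql/data = astring: /export/pgdata
# svcadm refresh postgresql:version_82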

PostgreSQL and RBAC

PostgreSQL on Solaris uses RBAC to give users permissions over the database instance. When you install Solaris' PostgreSQL packages, an RBAC role is set up for you:

[/etc/passwd]
postgres:x:90:90:PostgreSQL Reserved UID:/:/usr/bin/pfksh


This user is set up as an RBAC role in the /etc/user_attr file:

[/etc/user_attr]
postgres::::type=role;profiles=Postgres Administration,All

Permissions for the Postgres Administration profile are set up in the /etc/security/exec_attr file:

[/etc/security/exec_attr]
Postgres Administration:solaris:cmd:::/usr/postgres/8.2/bin/initdb:uid=postgres
Postgres Administration:solaris:cmd:::/usr/postgres/8.2/bin/ipcclean:uid=postgres
Postgres Administration:solaris:cmd:::/usr/postgres/8.2/bin/pg_controldata:uid=postgres
Postgres Administration:solaris:cmd:::/usr/postgres/8.2/bin/pg_ctl:uid=postgres
Postgres Administration:solaris:cmd:::/usr/postgres/8.2/bin/pg_resetxlog:uid=postgres
Postgres Administration:solaris:cmd:::/usr/postgres/8.2/bin/postgres:uid=postgres
Postgres Administration:solaris:cmd:::/usr/postgres/8.2/bin/postmaster:uid=postgres

Starting PostgreSQL

You can start PostgreSQL using the following SMF command from an account with the appropriate privileges:

$ su - postgres
$ svcadm enable svc:/application/database/postgresql:version_82

Initial Configuration

By default, PostgreSQL is configured to trust all of the local users. That's not good practice, because any local user may connect to PostgreSQL as a superuser. The first thing to do is set a password for the postgres user:

$ psql -U postgres
postgres=# alter user postgres with password 'your-password';

Exit psql with the \q command and edit the /var/postgres/8.2/data/pg_hba.conf file to set an appropriate authentication method, changing the following line:

[/var/postgres/8.2/data/pg_hba.conf]
local all all trust

with, for example:

[/var/postgres/8.2/data/pg_hba.conf]
local all all md5

Next time you connect, PostgreSQL will ask you for the user's password. Now, let's refresh the PostgreSQL service so that PostgreSQL receives a SIGHUP signal and re-reads the pg_hba.conf file:

$ svcadm refresh svc:/application/database/postgresql:version_82

Done!

You're now running a PostgreSQL instance on your Solaris box, ready to be handed over to your database administrator and ready for production use.


Tuesday, 18 May 2010

VirtualBox v. 3.2.0 Has Been Released Adding Support For Mac OS X


Today, Oracle Corporation has released VirtualBox v. 3.2.0 and renamed it Oracle VM VirtualBox.

This is a major version which includes many new technologies such as:
  • In-hypervisor networking.
  • Remote Video Acceleration.
  • Page Fusion.
  • Memory Ballooning.
  • Virtual SAS Controller.
  • Mac OS X guest support (on Apple hardware only.)

And much more. If you want to read the official announcement, please follow this link. If you want to read the change log, please follow this link.

Installing JIRA on Solaris

Installing Atlassian JIRA on Solaris is pretty easy. To run JIRA on a production environment you'll need:
  • Java SE (JRE or JDK).
  • A supported database.
  • Optionally, an application server.

Solaris 10 is bundled with everything you need while, on OpenSolaris, you'll rely on the packaging system to install the bits you're missing.

Installing Java SE

Solaris 10 is bundled with Java SE 5.0 at /usr/jdk, but you might want to switch to 6.0 as well. If you want to install a private Java SE 6.0 instance on your Solaris 10 system, just download the shell executable versions from the Sun website and install them:

$ cd /java/installation/dir
$ chmod +x jdk-6u20-solaris-i586.sh
$ ./jdk-6u20-solaris-i586.sh

If you're running an AMD64 system you should also install the x64 bits:

$ cd /java/installation/dir
$ chmod +x jdk-6u20-solaris-x64.sh
$ ./jdk-6u20-solaris-x64.sh

I usually install private Java SE instances in /opt/jdk, replicating the structure of /usr/jdk, which is very helpful, for example, when decoupling specific Java SE instances from shell scripts:

# cd /opt
# mkdir -p jdk/instances
[...install here...]
# cd /opt/jdk
# ln -s instances/jdk1.6.0_20 jdk1.6.0_20
# ln -s jdk1.6.0_20 latest

Setting Up JAVA_HOME

When using the JIRA scripts, your JAVA_HOME environment variable should be set accordingly. I usually write a small script to prepare the environment for me:

[set-jira-env]
export JAVA_HOME=/opt/jdk/latest
export PATH=$JAVA_HOME/bin:$PATH

and then just source it into my current shell:

$ . ~/bin/set-jira-env

Setting Up a User

This is a point to take seriously into account when running your JIRA instances. Since I usually build a Solaris Zone to run JIRA in, I sometimes run JIRA as the root user. Anyway, if you need to create a user, just run:

# useradd -d /export/home/jira -g staff -m -k /etc/skel -s /bin/bash jira

Please note that Solaris 10 uses the /export/home directory as the root of local user home directories. You can also use Solaris' automounter to map user homes in /export/home onto /home. Ensure that the /etc/auto_master file contains the following line:

/home  auto_home  -nobrowse

Then edit the /etc/auto_home file as in the following example:

*  -fstype=lofs  :/export/home/&

Ensure that the autofs service is running:

$ svcs \*autofs\*
STATE          STIME    FMRI
online         Feb_16   svc:/system/filesystem/autofs:default

If it's not, enable it:

# svcadm enable svc:/system/filesystem/autofs:default

After creating a user, you can just change its home directory and the automounter will mount it under /home:

# usermod -d /home/jira jira

Setting Up a Project

Solaris has excellent resource management facilities such as Solaris Projects. If you want to fine-tune the resources assigned to your JIRA instance, or to the Solaris Zone where your instance will run, you can read this blog post.

Setting Up PostgreSQL

Solaris 10 comes with a supported instance of the PostgreSQL database which is, moreover, one of Atlassian's favorite databases. Solaris, then, provides out of the box all of the pieces you need to run your JIRA instances.

To check if it's enabled just run:

# svcs "*postgresql*"
STATE          STIME    FMRI
disabled       abr_23   svc:/application/database/postgresql_83:default_32bit
disabled       abr_23   svc:/application/database/postgresql:version_82
disabled       abr_23   svc:/application/database/postgresql:version_82_64bit
disabled       abr_23   svc:/application/database/postgresql:version_81
online         abr_29   svc:/application/database/postgresql_83:default_64bit

In this case, the PostgreSQL 8.3 64-bit instance is active. If it were not, just enable it using the following command:

# svcadm enable svc:/application/database/postgresql_83:default_64bit

This is just the beginning, though. To make the initial configuration for your PostgreSQL instance on Solaris, please read this other post.

Installing JIRA

Please take into account that to install the standalone JIRA distribution you'll need GNU tar. GNU tar isn't always bundled with a Solaris 10 instance, while it is in OpenSolaris/Nevada. If it is, it should be installed at /usr/sfw/bin/gtar. If your Solaris 10 instance has no GNU tar and you would like to install it, you can grab it, for example, from the Solaris Companion CD.

Since I don't like having to rely on GNU tar, I usually decompress the GNU tar file once and regenerate a pax file to store for later use. To create a pax file including the contents of the current directory you can run the following command:

$ pax -w -f your-pax-file.pax .

To read and extract the content of a pax file you can run:

$ pax -r -f your-pax-file.pax

You can install JIRA in a directory of your choice. I usually install it in the /opt/atlassian subdirectory.

Create a JIRA Home Directory

JIRA will store its files in a directory you should provide. Let's say that you'll prepare the /var/atlassian/jira directory as the home directory for JIRA:

# mkdir -p /var/atlassian/jira

If you can, consider creating a ZFS file system instead of a plain old directory: ZFS provides you with powerful tools in case you want, for example, to compress your file system at runtime, take a snapshot of it, or back up and restore it:

# zfs create your-pool/jira-home
# zfs set mountpoint=[mount-point] your-pool/jira-home
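
For example, enabling runtime compression and snapshotting the home directory before an upgrade are both one-liners (dataset name as above; the snapshot name is arbitrary):

# zfs set compression=on your-pool/jira-home
# zfs snapshot your-pool/jira-home@pre-upgrade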

Setting Your JIRA Home Directory

The jira-application.properties file is JIRA's main configuration file. There you'll find the jira.home property, which must point to the JIRA home directory you just prepared:

[jira-application.properties]
[...snip...]
jira.home = /var/atlassian/jira
[...snip...]

Creating a Database Schema and a User for JIRA

The last thing you've got to do is create a database user and a schema for JIRA to store its data. On Solaris, you can just use psql. The default postgres user comes with no password on a vanilla Solaris 10 installation. Please consider changing it as soon as you start using your PostgreSQL database.

# psql -U postgres
postgres=# create user jira_user password 'jira-password';
postgres=# create database jira_db ENCODING 'UTF8' OWNER jira_user;
postgres=# grant all on database jira_db to jira_user;

If you don't remember, you can exit psql with the \q command. ;)

Configuring Your Database in JIRA

To tell JIRA that it must use your newly created PostgreSQL database, you have to open the conf/server.xml file and change the following parameters:

[server.xml]
<Context path="" docBase="${catalina.home}/atlassian-jira" reloadable="false">
<Resource name="jdbc/JiraDS" auth="Container" type="javax.sql.DataSource"
username="[enter db username]"
password="[enter db password]"
driverClassName="org.postgresql.Driver"
url="jdbc:postgresql://host:port/database"
[ delete the minEvictableIdleTimeMillis and timeBetweenEvictionRunsMillis params here ]
/>

The last thing to do is configure the entity engine by modifying the atlassian-jira/WEB-INF/classes/entityengine.xml file:

[entityengine.xml]
<datasource name="defaultDS" field-type-name="postgres72"
schema-name="public"
helper-class="org.ofbiz.core.entity.GenericHelperDAO"
check-on-start="true"
use-foreign-keys="false"
use-foreign-key-indices="false"
check-fks-on-start="false"
check-fk-indices-on-start="false"
add-missing-on-start="true"
check-indices-on-start="true">

Start JIRA

You can now happily start JIRA by issuing:

# ./bin/startup.sh

from the JIRA installation directory.

Next Steps

The next step will typically be configuring JIRA as a Solaris SMF service.

Enjoy JIRA!



Sunday, 9 May 2010

Backing up JIRA and Confluence taking advantage of ZFS snapshots

If you're running an instance of JIRA or Confluence (or many other software packages as well), you probably want to make sure that your data is properly and regularly backed up. If you've got some experience with JIRA or Confluence you surely have noticed the bundled XML backup facility: a scheduled backup service which takes advantage of it is even running by default in your instances.

The effectiveness of such a backup facility depends on the size of your installation, but the rule of thumb is that it's a mechanism that does not scale well as the amount of data stored in your instances grows. In fact, XML backup was designed for small-scale installations and is not a recommended backup strategy for larger deployments.

In the case of JIRA I still run automated XML backups, since they do not store attachments, but as far as Confluence is concerned, I always disable the automated XML backup and rely on native database backup plus attachment storage backup. The database backup must be performed with the native database tools, such as pg_dump for PostgreSQL. The backup of your instance's attachments will depend on the type of storage in use. If you're storing your attachments in the database, they will be backed up automatically during your database backup. If you store your attachments in a file system, as is the case for both JIRA and Confluence default installations, there are plenty of tools out there to get the job done, such as tar, pax, cpio and rsync (to name just a few). Each one of these has advantages and drawbacks and I won't enter into a detailed discussion: it suffices to say that none can beat a Solaris ZFS-based JIRA or Confluence installation.

Since ZFS's inception I've been taking advantage of its characteristics more and more often, and snapshots are a killer ZFS feature that will considerably ease your administration duties. Whenever I install a new instance in a Solaris Zone, I set up ZFS file systems hosting both the database files and the JIRA or Confluence home directories:

# zfs create my-pool/my/db/files
# zfs create my-pool/jira/or/confluence/home

Taking a snapshot of a ZFS file system is a one-liner:

# zfs snapshot file-system-name@snapshot-name

In an instant your snapshot will be done and you will be able to send it to another device for permanent storage. ZFS snapshots, combined (or not...) with another tool such as rsync, will incredibly simplify backing up your files and also maintaining a cheap history (in terms of storage overhead) of changes, should you need to roll back your file systems (and hence the data stored in your application).

Take into account that, to recover a single file from a snapshot in case your original pool crashes, you will need to ZFS receive the snapshot into another pool for the files to be accessible. That's why I still rely on a scheduled rsync backup together with ZFS snapshots, just in case, although with a much lower frequency than in the pre-ZFS epoch.
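
Here's a minimal sketch of that recovery path, assuming a hypothetical confluence-home file system and a second pool named backup to receive into:

# zfs snapshot my-pool/confluence-home@20100509
# zfs send my-pool/confluence-home@20100509 | zfs receive backup/confluence-home

After the receive, the file system is mounted under the receiving pool (here, /backup/confluence-home) and single files can be copied out normally.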





Sunday, 2 May 2010

JIRA: Creating issues from TLS-encrypted mail

As you know, I'm extensively using Atlassian JIRA, and one of the features my current client uses most is automatically creating issues from received email. The ability to automatically parse an email and create an issue is a nice built-in JIRA feature which can sometimes spare you a lot of work.

Configuring this service is straightforward:
  • Configure a mail server.
  • Configure the mail service.

Configuring a Mail Server

The mail server configuration screen, which you can access from your JIRA Administration section, is a simple screen where you can configure the basic properties of your mail server:
  • Name.
  • Default From: address.
  • Email Subject: prefix.
  • SMTP configuration:
    • Host.
    • Port.
    • (Optional) User credentials.
  • JNDI location of a JavaMail Session, in case you're running JIRA on a Java EE Application Server.

Once you've set up a mail server, you can proceed and configure the service that will read your mailbox and create issues for you.

Configuring a "Create Issues From Mail" Service

The Services configuration tab lets you define JIRA services, which are the JIRA equivalent of a UNIX cron job. JIRA ships with some predefined services, two of which are:
  • Create Issues from POP.
  • Create Issues from IMAP.

Depending on the protocol you're accessing your mail server with, you'll choose the appropriate service. In my case, I always choose IMAP if available. The following screenshot shows the configuration screen of the "Create Issues from POP/IMAP" service:


There are different handlers you can choose from: you can find detailed information in the JIRA documentation. The "Create issue or comment" handler is probably what you're looking for. The handler parameters let you fine-tune your handler with settings such as:
  • project: the project new issues will be created for.
  • issuetype: the type of issues that will be created.
  • createusers: a boolean flag that sets whether JIRA will create new users when a mail is received from an unknown address. Generally, you want this to be false.
  • reporterusername: the name of the issue reporter when the address of the email doesn't match the address of any of the configured JIRA users.

I usually set the handler parameters to something like: project=myProjId,issuetype=1,createusers=false,bulk=forward,reporterusername=myuser

The Uses SSL combo box lets you choose whether your mailbox will be accessed using an encrypted connection. If you're planning to use SSL to access your mailbox, you will probably need to import your mail server's certificate into your certificate file, as explained later.

The Forward Email parameter lets you specify the address where errors or email that could not be processed will be forwarded to.

The Server and Port parameters let you choose the mail server this service will connect to. The Delay parameter lets you specify the interval between service executions.

Connecting to an SSL Service

If you're going to access your mail server using SSL, you will probably need to import the mail server's public key into your certificate file, otherwise you'll receive javax.net.ssl.SSLHandshakeException errors. In a previous post I explained how you can retrieve a server's public key using OpenSSL. Once you have the public key, you can add it to your key store using the keytool program. The location of your key store may depend on your environment or application server configuration. The default location of the system-wide key store is $JAVA_HOME/jre/lib/security/cacerts. To add a key to your key store you can run the following command:

# keytool -import -alias your.certificate.alias -keystore path/to/keystore -file key-file
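
If you still need to retrieve the server's public key, here's a sketch of the OpenSSL approach I mentioned (an IMAPS server on port 993 is assumed; the host name is a placeholder):

# openssl s_client -connect mail.example.com:993 < /dev/null 2> /dev/null | \
    sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > key-file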

Additional Considerations for Solaris Sparse Zones

I often use Solaris 10 Sparse Zones to quickly deploy instances of software such as JIRA. In this case, please note that the system-wide Java key store won't be writable inside a zone. Instead of polluting the global zone's key store, I ended up installing Java SE in every zone I deploy, to avoid applications trusting certificates just because other applications do.

Wednesday, 17 February 2010

Setting up Solaris COMSTAR and an iSCSI target for a ZFS volume

COMSTAR stands for Common Multiprotocol SCSI Target: it is basically a framework which can turn a Solaris host into a SCSI target. Before COMSTAR made its appearance, there was a very simple way to share a ZFS file system via iSCSI: setting the shareiscsi property on the file system was sufficient, just as you set the sharenfs and sharesmb properties to share it via NFS or CIFS.
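
For the record, the old mechanism really was a one-liner: setting a single property on a ZFS volume (a hypothetical one is shown here) exported it as an iSCSI target:

# zfs set shareiscsi=on tank/myvolume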

COMSTAR brings a more flexible and better solution: it's not as easy as using those ZFS properties, but it is not that hard, either. Should you need more complex setups, COMSTAR includes a wide set of advanced features such as:
  • Scalability.
  • Compatibility with generic host adapters.
  • Multipathing.
  • LUN masking and mapping functions.

The official COMSTAR documentation is very detailed and it's the only source of information about COMSTAR I use. If you want to read more about it, please check it out.

Enabling the COMSTAR service

COMSTAR runs as an SMF-managed service and enabling it is no different than usual. First of all, check if the service is running:

# svcs \*stmf\*
STATE          STIME    FMRI
disabled       11:12:50 svc:/system/stmf:default

If the service is disabled, enable it:

# svcadm enable svc:/system/stmf:default

After that, check that the service is up and running:

# svcs \*stmf\*
STATE          STIME    FMRI
online         11:12:50 svc:/system/stmf:default

# stmfadm list-state
Operational Status: online
Config Status     : initialized
ALUA Status       : disabled
ALUA Node         : 0

Creating SCSI Logical Units

You're not required to master the SCSI protocols to set up COMSTAR, but knowing the basics will help you understand the steps you'll go through. Oversimplifying, a SCSI target is the endpoint waiting for client (initiator) connections. For example, a data storage device is a target and your laptop may be an initiator. Each target can provide multiple logical units: each logical unit is the entity that performs "classical" storage operations, such as reading from and writing to disk.

Each logical unit, then, is backed by some sort of storage device; Solaris and COMSTAR will let you create logical units backed by one of the following storage technologies:
  • A file.
  • A thin-provisioned file.
  • A disk partition.
  • A ZFS volume.

In this case, we'll choose the ZFS volume as our favorite backing storage technology.

Why ZFS volumes?

One of the wonders of ZFS is that it isn't just another file system: ZFS combines the volume manager and the file system, providing you best-of-breed services from both worlds. With ZFS you can create a pool out of your drives and enjoy services such as mirroring and redundancy. In my case, I'll be using a RAID-Z pool made up of three eSATA drives for this test:

enrico@solaris:~$ zpool status tank-esata
  pool: tank-esata
 state: ONLINE
 scrub: scrub completed after 1h15m with 0 errors on Sun Feb 14 06:15:16 2010
config:

        NAME        STATE     READ WRITE CKSUM
        tank-esata  ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0

errors: No known data errors

Inside pools, you can create file systems or volumes, the latter being the equivalent of a raw drive connected to your machine. File systems and volumes use the storage of the pool without any need for further partitioning or slicing, and you can create them almost instantly. No more repartitioning hell or space estimation errors: file systems and volumes will use the space in the pool according to the optional policies you might have established (such as quotas, space allocation, etc.)
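
For instance, creating a file system and constraining it with a quota is immediate (the dataset name is illustrative):

# zfs create tank-esata/scratch
# zfs set quota=50G tank-esata/scratch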

Moreover, ZFS will let you snapshot (and clone) your file systems on the fly, almost instantly: being a copy-on-write file system, ZFS will just write modifications to disk, without any overhead, and when blocks are no longer referenced they'll be automatically freed. ZFS snapshots are, in a sense, Solaris' much-optimized version of Apple's Time Machine.

Creating a ZFS volume

Creating a volume, provided you already have a ZFS pool, is as easy as:

# zfs create -V 250G tank-esata/macbook0-tm

The previous command creates a 250GB volume called macbook0-tm on pool tank-esata. As expected you will find the raw device corresponding to this new volume:

# ls /dev/zvol/rdsk/tank-esata/
[...snip...]  macbook0-tm  [...snip...]

Creating a logical unit

To create a logical unit for our ZFS volume, we can use the following command:

# sbdadm create-lu /dev/zvol/rdsk/tank-esata/macbook0-tm
Created the following LU:

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f00800271b51c04b7a6dc70001  268435456000         /dev/zvol/rdsk/tank-esata/macbook0-tm

Logical units are identified by a unique ID, which is the GUID shown in sbdadm output. To verify and get a list of the available logical units we can use the following command:

# sbdadm list-lu
Found 1 LU(s)

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f00800271b51c04b7a6dc70001  268435456000         /dev/zvol/rdsk/tank-esata/macbook0-tm

Indeed, it finds the only logical unit we created so far.

Mapping the logical unit

The logical unit we created in the previous section is not available to any initiator yet. To make your logical unit available, you must choose how to map it. Basically, you've got two choices:
  • Mapping it for all initiators on every port.
  • Mapping it selectively.

In this test, taking into account that it's a home setup on a private LAN, I'll go for simple mapping. Please choose your mapping strategy carefully according to your needs. If you need more information about selective mapping, check the official COMSTAR documentation.

To get the GUID of the logical unit you can use the sbdadm or the stmfadm commands:

# stmfadm list-lu -v
LU Name: 600144F00800271B51C04B7A6DC70001
    Operational Status: Offline
    Provider Name     : sbd
    Alias             : /dev/zvol/rdsk/tank-esata/macbook0-tm
    View Entry Count  : 0
    Data File         : /dev/zvol/rdsk/tank-esata/macbook0-tm
    Meta File         : not set
    Size              : 268435456000
    Block Size        : 512
    Management URL    : not set
    Vendor ID         : SUN
    Product ID        : COMSTAR
    Serial Num        : not set
    Write Protect     : Disabled
    Writeback Cache   : Enabled
    Access State      : Active

To create the simple mapping for this logical unit, we run the following command:

# stmfadm add-view 600144f00800271b51c04b7a6dc70001

Configuring iSCSI target ports

As outlined in the introduction, COMSTAR introduces a new iSCSI transport implementation that replaces the old one. Since the two implementations are incompatible and only one can run at a time, please check which one you're using. Nevertheless, consider switching to the new implementation as soon as you can.

The old implementation is registered as the SMF service svc:/system/iscsitgt:default and the new implementation is registered as svc:/network/iscsi/target.

enrico@solaris:~$ svcs \*scsi\*
STATE          STIME    FMRI
disabled       Feb_03   svc:/system/iscsitgt:default
online         Feb_03   svc:/network/iscsi/initiator:default
online         Feb_16   svc:/network/iscsi/target:default

If you're running the new COMSTAR iSCSI transport implementation, you can now create a target with the following command:

# itadm create-target
Target iqn.1986-03.com.sun:02:7674e54f-6738-4c55-d57d-87a165eda163 successfully created

If you want to check and list the targets you can use the following command:

# itadm list-target
TARGET NAME                                                  STATE    SESSIONS
iqn.1986-03.com.sun:02:7674e54f-6738-4c55-d57d-87a165eda163  online   0

Configuring the iSCSI target for discovery

The last thing left to do is to configure your iSCSI target for discovery. Discovery is the process an initiator uses to get a list of available targets. You can opt for one of the three iSCSI discovery methods:
  • Static discovery: a static target address is configured.
  • Dynamic discovery: targets are discovered by initiators using an intermediary iSNS server.
  • SendTargets discovery: configuring the SendTargets option on the initiator.

I will opt for static discovery because I've got a very small number of targets and I want to control which initiators connect to my target. To configure static discovery just run the following command:

# devfsadm -i iscsi
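
On a Solaris initiator, for example, static discovery would then be enabled by pointing it at the target's IQN and address (the IQN below is the one created earlier; the IP address and port are placeholders):

# iscsiadm modify discovery --static enable
# iscsiadm add static-config iqn.1986-03.com.sun:02:7674e54f-6738-4c55-d57d-87a165eda163,192.168.1.10:3260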

Next steps

Configuring a target is a matter of a few commands. It took me more time to write this blog post than to get my COMSTAR target running.

The next step will be having an initiator connect to your target. I detailed how to configure a Mac OS X instance as an iSCSI initiator in another post.

Using ZFS with Apple's Time Machine

Those of us who have grown accustomed to the wonders of ZFS won't willingly trade it for another file system, ever. But even though many ZFS users are running Solaris on their machines, including laptops, as I am, there are cases in which running another OS is desirable: that's when I seek the best option to integrate the other systems I'm running with Solaris and ZFS.

In the simplest case, using a file sharing protocol such as NFS or CIFS is sufficient (and desirable): that's how I share the ZFS file systems where I archive my photos, my videos, my music and so on. Sharing such file systems with another UNIX, Windows or Mac OS X box (just to cite some) is just a few commands away.

On other occasions accessing a file system is not sufficient: that's the case with Apple's Time Machine, which expects a whole, locally connected disk all for itself.

Fortunately, integrating ZFS and Time Machine is pretty easy if you're running a COMSTAR-enabled Solaris. Although setting up COMSTAR is very well covered by the Solaris and OpenSolaris documentation, I'll walk you through the steps necessary to get the job done and have your Time Machine making its backups on a ZFS volume. You'll end up with the benefits of both worlds: a multidimensional Time Machine which takes advantage of ZFS snapshotting and cloning capabilities.

The steps I'll detail in the following posts are:

With such a solution, you will need no USB/FireWire/anything-else drive hanging around. You won't need to rely on consumer drives implementing some file sharing protocol which, as explained earlier, won't fit the Time Machine use case.

Just a network connection and a box running Solaris, ZFS and COMSTAR, and you'll have a scalable, enterprise-level, easy-to-maintain solution for your storage needs.

Monday, 21 September 2009

Screencast - Install OpenSolaris in VirtualBox

Installing OpenSolaris on your machine may require some gentle hand-holding if you are a first-time user of the operating system. More so if you are thinking of installing OpenSolaris in VirtualBox. But help is at hand: I came across an exceptionally well done screencast which walks you through installing and configuring VirtualBox on Windows, and then installing OpenSolaris in VirtualBox.

Tuesday, 2 June 2009

OpenSolaris 2009.06 has been released


This is the kind of news many people were waiting for: the third release of the OpenSolaris OS, OpenSolaris 2009.06, has been released. You can read the announcement here.

A quick summary of the new features, as reported on the official site:
If you were already using Solaris Express Community Edition you may have already tried these features. Nevertheless, I think it's time to burn a CD and give this new release a try. Personally, I dropped OpenSolaris after trying 2008.11 for a couple of reasons (no sparse zones, incompatibility with some Sun products) but I'm really curious to try this new release: every iteration has been a great surprise and a very positive experience so far.

As soon as I try it, I'll post a review. Meanwhile, you can download the OpenSolaris OS here.

Saturday, 17 January 2009

OpenSolaris 2008.11 has been released

As announced on opensolaris.org and opensolaris.com, OpenSolaris 2008.11 has been released.
Interesting updates are the following:
You can also have a look at the feature spot or read the entire release notes.
The OpenSolaris 2008.11 LiveCD can be downloaded here and here.

This really seems like a major improvement since the 2008.05 release and I'm looking forward to trying it myself; I'll let you know.

Enjoy!