
Sunday, 13 November 2011

Network Configuration in Solaris 11 with NWAM Profiles

Oracle has just released the Solaris 11 operating system, the new production-ready major Solaris release. It's an "interesting" release since it's the first major Solaris release under the Oracle hegemony, and it inherits all of the technologies many of us have been using, in the last few years, in the Solaris Express and OpenSolaris releases that Sun Microsystems used to provide.

This blog post is the first part of a series of quick wrap-ups for the impatient who want to quickly start and configure their new Solaris 11 servers. My advice is always the same: read the manual.

Network Configuration Changes in Solaris 11

Network configuration in Solaris 11 is quite different from what it was in earlier Solaris releases (including Solaris Express), and many administrators may be taken by surprise. Some of these changes were introduced in the corresponding OpenSolaris projects, such as Crossbow, and may already be familiar to many of us. To sum things up, the major differences are the following:
  • Network configuration is now managed by a profile.
  • The dladm command now centralizes the configuration of layer 2 datalinks: many tasks performed by the ifconfig command on previous Solaris releases are now to be performed using the dladm command.
  • Datalink names aren't bound to their hardware driver name any longer.
  • IP interfaces on layer 3 are configured by using the ipadm command: the venerable ifconfig command has been deprecated in the Solaris 11 release.
  • IP network multipathing (IPMP) groups are now implemented as IP interfaces and, as such, are configured with the ipadm command.
  • The new ipmpstat command has been introduced to gather statistics about IPMP groups.
  • Network virtualization has been implemented on the network device level.
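As a hedged sketch of the new split between layers (the interface name and the address below are just examples), the datalink is inspected with dladm, while the IP interface and its address are created with ipadm:

# dladm show-link net0
# ipadm create-ip net0
# ipadm create-addr -T static -a 192.168.1.10/24 net0/v4

The create-addr line assigns a static IPv4 address to an address object (net0/v4); with -T dhcp, the same command would instead configure the interface via DHCP.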

The Solaris 11 Network Stack

The new Solaris 11 network stack is similar to Solaris 10's. Yet, some improvements have been introduced that administrators are simply going to love.
In the new network stack, the software layer has been decoupled from the hardware layer, which means that:
  • The network configuration of a system (or a zone) is insulated from the hardware it's running upon. 
  • As already stated, datalink names can be customized.
  • Many network abstractions (such as VNICs) are managed in the datalink layer: this means that all of the datalink configurations can be centrally managed with one administrative interface.
On Solaris 11, then, datalinks aren't named after the underlying physical devices any longer and, by default, are named using the netn scheme, where n is a 0-based integer index. This apparently minor modification has a very important consequence: if you modify the underlying hardware, a network configuration may still be valid as long as the datalink name is left unchanged. This is really handy, for example:
  • If the underlying hardware of a box changes.
  • If you migrate zones across systems.
  • If you write generic configurations for a wide set of boxes.
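Since names are no longer tied to drivers, a datalink can also be renamed with dladm; as a hedged example (web0 is a made-up name), and noting that the link should not be in use when you rename it:

# dladm rename-link net0 web0

After the rename, configurations referring to web0 remain valid even if the underlying device later changes.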
The mapping between a datalink and the underlying physical device can be inspected with the dladm command:

$ dladm show-phys
LINK  MEDIA     STATE  SPEED  DUPLEX  DEVICE
net0  Ethernet  up     1000   full    e1000g0
net1  Ethernet  up     1000   full    e1000g1

Network Auto-Magic (NWAM)

Long-time users of older Solaris Express releases will remember the introduction of the Network Auto-Magic feature into the operating system. NWAM is a feature that automates the basic network configuration of a Solaris box. NWAM in Solaris 11 has been greatly enhanced and it now supports the following concepts:
  • NCPs (Network Configuration Profiles).
  • Locations.
An NCP is an administrative unit that specifies the configuration of the components of the network setup, such as physical links and IP interfaces. An NCP is itself made up of NCUs (Network Configuration Units), each representing the configuration of a physical link or interface.

A Location profile is another administrative unit that lets the administrator specify:
  • The conditions under which a profile should be activated.
  • The naming service configuration.
  • The domain name.
  • The IP filter rules.
  • The IPSec policy.
At a given time, only one NCP and one Location profile will be active in a Solaris system.

NWAM is handy when a system network configuration is changed often and an administrator, in those cases, can encapsulate the different and required configurations in profiles (NCPs and Location profiles) and activate them when needed.
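To see which profiles exist on a system and which ones are currently active, you can use the netadm command (the exact output layout may differ between Solaris builds):

$ netadm list

The command prints every NCP (with its NCUs) and every Location profile, together with their online/offline state.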

If you're using the Solaris 11 desktop, you can use the Network Preferences application (which can be found in the System/Administration menu) to quickly build NCPs and Location profiles.

Network Preferences

In the following sections we will use some NWAM administrative commands, but we won't dig any deeper into the subject: NWAM administration will be the topic of another post.

Configuring the Network

Depending on how a new Solaris 11 installation has been performed, your initial network configuration may differ. If you've installed it from the Live CD, the Automatic NCP and the Automatic Location profile are active. These profiles are pretty simple: they configure every IP interface and the name service using DHCP, leaving any other configuration option (IP filters, IPSec, etc.) disabled.

If you're using Solaris on your PC this configuration may be good for you but chances are you might be installing some server that requires a less trivial network configuration.

Creating an NCP profile
The first thing you're going to do is to create a new NCP:

# netcfg create ncp datacenter

The datacenter NCP will be the container of our configuration, and we will add to it the NCUs we need for every link and IP interface we're going to configure.

# netcfg
netcfg> select ncp datacenter
netcfg:ncp:datacenter> create ncu phys net0
Created ncu 'net0'.  Walking properties ...
activation-mode (manual) [manual|prioritized]> 
link-mac-addr> 
link-autopush> 
link-mtu> 
netcfg:ncp:datacenter:ncu:net0> end
Committed changes
netcfg:ncp:datacenter> create ncu ip net0
Created ncu 'net0'.  Walking properties ...
ip-version (ipv4,ipv6) [ipv4|ipv6]> ipv4
ipv4-addrsrc (dhcp) [dhcp|static]> static
ipv4-addr> 192.168.1.53
ipv4-default-route> 192.168.1.1
netcfg:ncp:datacenter:ncu:net0> end
Committed changes
netcfg:ncp:datacenter> exit

With the netcfg command we created an NCP with the following characteristics:

  • It has an NCU for a physical interface (net0). This NCU has been configured with default values for all of its properties (such as MAC address or MTU).
  • It has an NCU for an IP interface (net0). This NCU has been configured with a static IPv4 address and a default router.
If you activate this profile, your system will reconfigure the network according to the settings of this NCP:

# netadm enable -p ncp datacenter
Enabling ncp 'datacenter'

If we now check the IP interfaces, we can see how they've been configured according to the above-mentioned NCUs: the net0 IP interface is up while the net1 interface has disappeared.

# ipadm show-if
IFNAME  CLASS     STATE  ACTIVE OVER
lo0     loopback  ok     yes    --
net0    ip        ok     yes    --

If we check the IP addresses currently in use, the ipadm command confirms that only net0 has been assigned an address, which is the static address we configured in the NCU. Again, net1 has disappeared.

# ipadm show-addr
ADDROBJ  TYPE    STATE  ADDR
lo0/v4   static  ok     127.0.0.1/8
net0/_a  static  ok     192.168.1.53/24
lo0/v6   static  ok     ::1/128

If we now check the state of the datalinks, we can see that net1 is in the unknown state while net0 is up.

# dladm show-phys
LINK  MEDIA     STATE    SPEED  DUPLEX  DEVICE
net0  Ethernet  up       1000   full    e1000g0
net1  Ethernet  unknown  1000   full    e1000g1

If we wanted to add the net1 datalink and IP interface into the profile as well, we could simply modify it and create the corresponding NCUs.
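As a hedged sketch of adding another pair of NCUs (the interface name and address below are examples), the procedure mirrors the one we used before:

# netcfg
netcfg> select ncp datacenter
netcfg:ncp:datacenter> create ncu phys net1
Created ncu 'net1'.  Walking properties ...
activation-mode (manual) [manual|prioritized]> 
link-mac-addr> 
link-autopush> 
link-mtu> 
netcfg:ncp:datacenter:ncu:net1> end
Committed changes
netcfg:ncp:datacenter> create ncu ip net1
Created ncu 'net1'.  Walking properties ...
ip-version (ipv4,ipv6) [ipv4|ipv6]> ipv4
ipv4-addrsrc (dhcp) [dhcp|static]> static
ipv4-addr> 192.168.1.54
ipv4-default-route> 192.168.1.1
netcfg:ncp:datacenter:ncu:net1> end
Committed changes
netcfg:ncp:datacenter> exit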

If we now try to resolve some name, however, we discover that it's not going to work. If you remember, we're still using the Automatic Location profile, which configures the name resolver using DHCP. In this case, however, DHCP isn't in use, so the resolver cannot resolve any name.

What we need now is a corresponding Location profile.

Creating a Location Profile
To configure the resolver settings, we can now create a new location profile, using once more the netcfg command:

# netcfg
netcfg> create loc datacenter
Created loc 'datacenter'.  Walking properties ...
activation-mode (manual) [manual|conditional-any|conditional-all]> 
nameservices (dns) [dns|files|nis|ldap]> 
nameservices-config-file ("/etc/nsswitch.dns")> 
dns-nameservice-configsrc (dhcp) [manual|dhcp]> manual
dns-nameservice-domain> 
dns-nameservice-servers> 192.168.1.1
dns-nameservice-search> yourdomain.com
dns-nameservice-sortlist> 
dns-nameservice-options> 
nfsv4-domain> 
ipfilter-config-file> 
ipfilter-v6-config-file> 
ipnat-config-file> 
ippool-config-file> 
ike-config-file> 
ipsecpolicy-config-file> 
netcfg:loc:datacenter> 
netcfg:loc:datacenter> end
Committed changes
netcfg> exit

As soon as we enable the newly created location profile, the resolver is going to use the configured settings and it's just going to work:

$ netadm enable -p loc datacenter
Enabling loc 'datacenter'

$ nslookup www.oracle.com
Server: 192.168.1.1
Address: 192.168.1.1#53

Non-authoritative answer:
www.oracle.com canonical name = www.oracle.com.edgekey.net.
www.oracle.com.edgekey.net canonical name = e4606.b.akamaiedge.net.
Name: e4606.b.akamaiedge.net
Address: 2.20.190.174

Conclusion

As you can see, configuring the basic network settings on a Solaris 11 system is clean and easy. The new administrative interface lets you easily define, store and activate on demand multiple network configurations for your system, without needing to write and maintain multiple copies of the old-style Solaris network configuration files.

Saturday, 17 January 2009

Sending batch mail on Solaris 10 (with attachments)

The typical problem: you have to do a repetitive task and you bless UNIX and its shells. But sometimes you wonder how to do it. This time, I had to send a bunch of emails to a set of email addresses. So far, so good. Solaris and many other UNIX flavors have utilities such as mail and mailx with which you can easily do your job. I usually use mailx to send emails from scripts and I'm very happy with it.

But today I had to send emails with attachments, and mailx has no built-in support for them. If you know something about email standards, you can easily figure out how to do it. Google searches on the topic are full of examples of sending properly formatted mails with mailx, where properly means uuencoding the attachments, concatenating them with the mail message and then piping the result to mailx. Some examples you can find are even easily scriptable. Reinventing the wheel, more often than not, is not a means of progress, so I decided to go with mutt, a powerful program I had always neglected in favor of mailx.

mutt has a similar syntax and built-in support for attachments. Its configuration file is a powerful tool for creating different sending profiles in which you can, for example:
  • set the user's mail identity
  • modify mail headers
  • write and attach hooks to mail events
I haven't spent too much time reading the mutt documentation yet, but it really seems worth the time. Just a one-liner (inside an over-simplified loop):

for i in addresses ; do
  cat user-message-file | mutt -s subject -a attachment-1 ... attachment-n -- "$i"
done

Please note that the option -- is necessary only to separate the addresses from a list of multiple attachments. In the case of just one attachment, the line above reduces to:

for i in addresses ; do
  cat user-message-file | mutt -s subject -a attachment "$i"
done
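If the addresses live in a file, one per line, the same idea can be written as a small sketch (the file names here are made up):

while read -r address ; do
  cat user-message-file | mutt -s subject -a attachment -- "$address"
done < address-file

Keeping the -- even with a single attachment prevents mutt from treating the address as another attachment when the attachment list grows later.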

I also had to modify some headers, and both the mutt and muttrc man pages are well written and easy to search. The content of my (first) ~/.muttrc configuration file is:

set realname="myname"
set from=myuser@mydomain.com
set use_from=yes
set use_envelope_from=yes
my_hdr Reply-To: anotheraddress@mydomain.com

This way I told mutt:
  • to set my name
  • to set the From: header
  • to use the From: header
  • to force sendmail to use the same address from the From: header as the envelope address
  • to use a Reply-To: header
Some of these directives have their drawbacks so always follow the golden rule and don't copy and paste these lines without fully documenting yourself: read the manual and enjoy mutt.

Windows interoperability: sharing folders with the SMB protocol using Solaris CIFS server

Interoperability between Solaris and Windows has improved considerably and keeps improving. In the case of file system sharing, the situation is now pretty good. There's no need to install Microsoft Services for UNIX on top of your Windows servers to be able to share folders with Solaris. One of the latest additions to the Solaris operating system is the CIFS Server which, as the official project page @ OpenSolaris.org states:

The OpenSolaris CIFS Server provides support for the CIFS/SMB LM 0.12 protocol and MSRPC services in workgroup and domain mode.

The official project page is the ideal starting point for information about installing and using the CIFS Server and Client components in Solaris. In this post I will describe how to quickly configure the CIFS Server so that you can share folders between your Solaris and Windows environments. I will use the new, and very simple, sharing semantics introduced in the latest versions of the ZFS file system.

What's impressive about these tools is their ease of use and administration. Both the ZFS and the CIFS Server commands are few, easy and intuitive. Sharing a ZFS file system is a no-brainer, and just a few one-time configuration steps are necessary to bring your CIFS Server up and running.

Preparing a ZFS file system

We will share a ZFS file system which we usually create with the following command:

# zfs create file-system-name

Once the file system is created, we configure the SMB sharing:

# zfs set sharesmb=on file-system-name

As described in the official ZFS documentation (for Solaris Express) and in the zfs(1M) man page, the sharesmb property can be set to on, off or [options]. The last form is useful for passing parameters to the CIFS server. The most useful is the name option, which lets you override the automatic name generation for the SMB share:

# zfs set sharesmb=name=smb-name file-system-name

The automatic name generation works fine, but sometimes it must replace characters in the dataset name that are illegal in SMB share names.
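You can always check which value (and hence which share name) will be used by inspecting the property with zfs get; file-system-name is a placeholder, as above:

# zfs get sharesmb file-system-name

The VALUE column shows either on, off or the name=... setting we passed in the previous command.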

Setting up CIFS Server in workgroup mode

The CIFS Server can work in both domain and workgroup mode. The domain mode is useful when you connect to a Windows domain and the very flexible configuration is well detailed in the official CIFS service administrator guide. In my case the workgroup mode is fine and that's the configuration I'll detail here.

Starting the service

If it's not started yet, you'll have to start the CIFS server. Please be aware that if you're running Samba on your Solaris box, you'll have to stop it first.

# svcadm enable -r smb/server
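You can then verify that the service (and the dependencies pulled in by -r) came up correctly with svcs:

# svcs smb/server

The STATE column should read online; if it reads maintenance instead, svcs -x usually explains why.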

Joining a workgroup

To be able to use shares, you have to join a workgroup:

# smbadm join -w workgroup

Configuring password encryption

To be able to authenticate you must configure Solaris to support password encryption for CIFS. To do this, open the /etc/pam.conf file and add the following entry:

other password required pam_smb_passwd.so.1 nowarn

Generating or recreating passwords

Now that CIFS password encryption has been configured, you'll have to regenerate the passwords for the users you want to use with it, because the CIFS service cannot use the Solaris password encryption that was in effect before /etc/pam.conf was reconfigured. The passwd command will take care of that:

# passwd user
[...]

Conclusions

With just these few steps, you'll have your CIFS server up and running in workgroup mode. Now you can share whichever ZFS file system you want by simply setting its sharesmb property.

Enjoy!

Modifying Sun Java Enterprise System installation or completely removing it on Solaris 10

I wrote other posts describing why and how I installed components of Sun Microsystems' Sun Java Enterprise System 5 (JES from now on) on Solaris. It may sound somewhat silly, but one of the questions related to the JES installer I've heard so far is how software can be uninstalled. That's strange, all the more if you consider the nature of the Solaris 10 package management system. Somehow, having a GUI makes the situation more complicated, because a person unfamiliar with Solaris 10 and with Sun's way of distributing software would expect the same installer to do the job. Whoever knows of its existence may also think that prodreg is sufficient to perform the uninstallation, but it's not, for the reasons I will clarify soon.

The worst thing, in my humble opinion, is that Sun's documentation is usually good and very detailed. For every product I can remember, there's always a detailed installation document. What's not so clear to the newbie, in reality, is that the documentation is there to be read. If you feel like doing it, read this blog and go here, where you can find complete information about JES installations. You should read it carefully while planning your installations/upgrades/uninstallations.

Here's the long story short.

How the JES installer works

The JES installer is a utility that eases the installation and configuration of a set of server-side products which are bundled together. The installer also takes care of the interdependencies between products, which may be complex, with various preinstallation and postinstallation procedures. The JES installer for Solaris uses the usual operating system package management system to deploy packages on a host. For the same reason, the JES installer also provides an uninstallation utility which should be used when removing JES components, instead of removing packages by other means such as pkgrm or prodreg.

JES installation utilities

The JES installer can be found in the directory which corresponds to the platform you're installing on, and it's called installer. This is the program you'll launch when installing JES for the first time. If you invoke it with no arguments, a GUI will be displayed and you will be able to choose the packages you need and perform your installation.

Even if having a GUI gives you the idea that everything will be managed for you, you're wrong. Read the documents before performing the installation of any component.

Patching installer

If JES has already been installed, or if you need to patch the installer itself, you will find another (packaged) copy of the installer in the /var/sadm/prod/sun-entsys5u1i/Solaris_{x86,sparc} directory. Once the JES installer has been patched, that's the copy that should be used when installing or modifying the current installation.

JES uninstaller

Once the JES installer has installed some of the products of the JES distribution, you will find the uninstallation utility in /var/sadm/prod/SUNWentsys5u1. The uninstaller, like the installer, can be run in graphical, text or silent mode. Due to the complexity of the relationships between the components, an uninstallation should be carefully planned, too. One more time: read the docs.

Uninstaller limitations

The JES uninstaller has some limitations, including the following:
  • it only uninstalls software installed with the installer
  • it does not remove shared components
  • it does not support remote uninstallations
  • some uninstallation behavior depends on the components being removed and is not limited to data or configuration files
  • it does not unconfigure the Access Manager SDK on the web server.