
Sunday, 13 November 2011

Network Configuration in Solaris 11 with NWAM Profiles

Oracle has just released the Solaris 11 operating system, the new production-ready major Solaris release. It's an "interesting" release: it's the first major Solaris release under Oracle's hegemony, and it inherits all of the technologies many of us have been using over the last few years in the Solaris Express and OpenSolaris releases that Sun Microsystems used to provide.

This blog post is the first in a series of quick wrap-ups for the impatient who want to quickly start and configure their new Solaris 11 servers. My advice is always the same: read the manual.

Network Configuration Changes in Solaris 11

Network configuration in Solaris 11 is quite different from earlier Solaris releases (including Solaris Express) and many administrators may be taken by surprise. Some of these changes were introduced in the corresponding OpenSolaris projects, such as Crossbow, and may already be familiar to many of us. To sum things up, the major differences are the following:
  • Network configuration is now managed by a profile.
  • The dladm command now centralizes the configuration of layer 2 datalinks: many tasks performed by the ifconfig command on previous Solaris releases are now to be performed using the dladm command.
  • Datalink names aren't bound to their hardware driver name any longer.
  • IP interfaces on layer 3 are configured by using the ipadm command: the venerable ifconfig command has been deprecated in the Solaris 11 release.
  • IP network multipathing (IPMP) groups are now implemented as IP interfaces and, as such, are configured with the ipadm command.
  • The new ipmpstat command has been introduced to gather statistics about IPMP groups.
  • Network virtualization has been implemented on the network device level.
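To give a feel for the new administrative interface, here is a quick sketch of a few common tasks using dladm and ipadm; the datalink names and the address below are assumptions for illustration only:

```shell
# Inspect layer 2 datalinks (tasks formerly spread across ifconfig)
dladm show-link

# Rename a datalink: names are no longer tied to the hardware driver
dladm rename-link net0 web0

# Create an IP interface and assign a static IPv4 address with ipadm,
# replacing the deprecated "ifconfig ... plumb ... up" idiom
ipadm create-ip net0
ipadm create-addr -T static -a 192.168.1.10/24 net0/v4
```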

The Solaris 11 Network Stack

The new Solaris 11 network stack is similar to Solaris 10's, but some improvements have been introduced that administrators are simply going to love.
In the new network stack, the software layer has been decoupled from the hardware layer, which means that:
  • The network configuration of a system (or a zone) is isolated from the hardware it runs on.
  • As already stated, datalink names can be customized.
  • Many network abstractions (such as VNICs) are managed in the datalink layer: this means that all of the datalink configurations can be centrally managed with one administrative interface.
On Solaris 11, then, datalinks aren't named after the underlying physical devices and, by default, are named using the netn scheme, where n is a 0-based integer index. This apparently minor modification has a very important consequence: if you modify the underlying hardware, a network configuration may still be valid as long as the datalink name is left unchanged. This is really handy, for example:
  • If the underlying hardware of a box changes.
  • If you migrate zones across systems.
  • If you write generic configurations for a wide set of boxes.
The mapping between a datalink and the underlying physical device can be inspected with the dladm command:

$ dladm show-phys
LINK  MEDIA     STATE  SPEED  DUPLEX  DEVICE
net0  Ethernet  up     1000   full    e1000g0
net1  Ethernet  up     1000   full    e1000g1

Network Auto-Magic (NWAM)

Long-time users of older Solaris Express releases will remember the introduction of the Network Auto-Magic feature into the operating system. NWAM is a feature that automates the basic network configuration of a Solaris box. NWAM in Solaris 11 has been greatly enhanced and it now supports the following concepts:
  • NCP (Network Configuration Profile).
  • Location.
An NCP is an administrative unit that specifies the configuration of the components of the network setup, such as physical links and IP interfaces. An NCP is itself made up of NCUs (Network Configuration Units), each representing the configuration of a physical link or interface.

A Location profile is another administrative unit that lets the administrator specify:
  • The conditions under which a profile should be activated.
  • The naming service configuration.
  • The domain name.
  • The IP filter rules.
  • The IPSec policy.
At a given time, only one NCP and one Location profile will be active in a Solaris system.

NWAM is handy when a system's network configuration changes often: in those cases, an administrator can encapsulate the different required configurations in profiles (NCPs and Location profiles) and activate them when needed.
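The defined profiles and their state can be listed with the netadm command. As a sketch, a freshly installed system might show something like the output below (profile names and states may differ on your machine):

```shell
# List the known NWAM profiles and their current state
netadm list
# TYPE        PROFILE        STATE
# ncp         Automatic      online
# loc         Automatic      online
```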

If you're using the Solaris 11 desktop, you can use the Network Preferences application (which can be found in the System/Administration menu) to quickly build NCPs and Location profiles.

Network Preferences

In the following sections we will use some NWAM administrative commands, but we won't dig into the subject any further here: NWAM administration will be the topic of another post.

Configuring the Network

Depending on how a new Solaris 11 installation has been performed, your initial network configuration may differ. If you've installed from the Live CD, the Automatic NCP and the Automatic Location profile are active. These profiles are pretty simple: they configure every IP interface and the name service using DHCP, leaving every other configuration option (IP filters, IPSec, etc.) disabled.

If you're using Solaris on your PC this configuration may be good enough, but chances are you're installing a server that requires a less trivial network configuration.

Creating an NCP profile
The first thing you're going to do is create a new NCP:

# netcfg create ncp datacenter

The datacenter NCP will be the container of our configuration, and to it we will add an NCU for every link and IP interface we're going to configure.

# netcfg
netcfg> select ncp datacenter
netcfg:ncp:datacenter> create ncu phys net1
Created ncu 'net1'.  Walking properties ...
activation-mode (manual) [manual|prioritized]> 
link-mac-addr> 
link-autopush> 
link-mtu> 
netcfg:ncp:datacenter:ncu:net1> end
Committed changes
netcfg:ncp:datacenter> create ncu ip net1
Created ncu 'net1'.  Walking properties ...
ip-version (ipv4,ipv6) [ipv4|ipv6]> ipv4
ipv4-addrsrc (dhcp) [dhcp|static]> static
ipv4-addr> 192.168.1.53
ipv4-default-route> 192.168.1.1
netcfg:ncp:datacenter:ncu:net1> end
Committed changes
netcfg:ncp:datacenter> exit

With the netcfg command we created an NCP with the following characteristics:

  • It has an NCU for a physical interface (net1). This NCU has been configured with default values for all of its properties (such as MAC address or MTU).
  • It has an NCU for an IP interface (net1). This NCU has been configured with a static IPv4 address and a default router.
If you activate this profile, your system will reconfigure the network according to the settings of this NCP:

# netadm enable -p ncp datacenter
Enabling ncp 'datacenter'

If we now check the IP interfaces we can see how they've been configured according to the above-mentioned NCUs: the net1 IP interface is up while the net0 interface has disappeared.

# ipadm show-if
IFNAME  CLASS     STATE  ACTIVE OVER
lo0     loopback  ok     yes    --
net1    ip        ok     yes    --

If we check the IP addresses currently used, the ipadm command confirms that only net1 has been assigned an address which is the static address we configured in the NCU. Again, net0 has disappeared.

# ipadm show-addr
ADDROBJ  TYPE    STATE  ADDR
lo0/v4   static  ok     127.0.0.1/8
net1/_a  static  ok     192.168.1.53/24
lo0/v6   static  ok     ::1/128

If we now check the state of the datalinks, we can see that net0 is in the unknown state while net1 is up.

# dladm show-phys
LINK  MEDIA     STATE    SPEED  DUPLEX  DEVICE
net0  Ethernet  unknown  1000   full    e1000g0
net1  Ethernet  up       1000   full    e1000g1

If we wanted to add both the net0 datalink and IP interface into the profile, we could simply modify it and create the corresponding NCUs.
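A sketch of such a modification follows; the static address used here (192.168.1.54) is an assumption for illustration:

```shell
# netcfg
netcfg> select ncp datacenter
netcfg:ncp:datacenter> create ncu phys net0
netcfg:ncp:datacenter:ncu:net0> end
Committed changes
netcfg:ncp:datacenter> create ncu ip net0
netcfg:ncp:datacenter:ncu:net0> set ipv4-addrsrc=static
netcfg:ncp:datacenter:ncu:net0> set ipv4-addr=192.168.1.54
netcfg:ncp:datacenter:ncu:net0> end
Committed changes
netcfg:ncp:datacenter> exit
```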

If we now try to resolve a name, however, we discover that it's not going to work. If you remember, we're still using the Automatic Location profile, which configures the name resolver using DHCP. In this case, however, DHCP isn't being used, so the resolver is not going to resolve any name.

What we need now is a corresponding Location profile.

Creating a Location Profile
To configure the resolver settings, we can now create a new Location profile, using the netcfg command once more:

# netcfg
netcfg> create loc datacenter
Created loc 'datacenter'.  Walking properties ...
activation-mode (manual) [manual|conditional-any|conditional-all]> 
nameservices (dns) [dns|files|nis|ldap]> 
nameservices-config-file ("/etc/nsswitch.dns")> 
dns-nameservice-configsrc (dhcp) [manual|dhcp]> manual
dns-nameservice-domain> 
dns-nameservice-servers> 192.168.1.1
dns-nameservice-search> yourdomain.com
dns-nameservice-sortlist> 
dns-nameservice-options> 
nfsv4-domain> 
ipfilter-config-file> 
ipfilter-v6-config-file> 
ipnat-config-file> 
ippool-config-file> 
ike-config-file> 
ipsecpolicy-config-file> 
netcfg:loc:datacenter> 
netcfg:loc:datacenter> end
Committed changes
netcfg> end

As soon as we enable the newly created location profile, the resolver is going to use the configured settings and it's just going to work:

# netadm enable -p loc datacenter
Enabling loc 'datacenter'

$ nslookup www.oracle.com
Server: 192.168.1.1
Address: 192.168.1.1#53

Non-authoritative answer:
www.oracle.com canonical name = www.oracle.com.edgekey.net.
www.oracle.com.edgekey.net canonical name = e4606.b.akamaiedge.net.
Name: e4606.b.akamaiedge.net
Address: 2.20.190.174

Conclusion

As you can see, configuring the basic network settings of a Solaris 11 system is clean and easy. The new administrative interface lets you easily define, store, and activate on demand multiple network configurations for your system, without writing and maintaining multiple copies of the old-style Solaris network configuration files.

Friday, 11 June 2010

Getting Started with Solaris Network Virtualization ("Crossbow")

Solaris Network Virtualization

The aim of the OpenSolaris Crossbow project is to bring a flexible network virtualization and resource control layer to Solaris. A Crossbow-enabled version of Solaris lets the administrator create virtual NICs (and switches) which, from the standpoint of a guest operating system or zone, are indistinguishable from physical NICs. You can create as many NICs as your guests need and configure them independently. More information on Crossbow and the official documentation can be found on the project's homepage.

This post is just a quick walkthrough to get started with Solaris Network Virtualization capabilities.

Creating a VNIC

To create a VNIC on a Solaris host you can use the following procedure. First, show the physical links and decide which one you'll use:

$ dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
e1000g0     phys      1500   up       --         --
vboxnet0    phys      1500   unknown  --         --

On this machine I only have one physical link, e1000g0. Create a VNIC on the physical NIC you chose:

# dladm create-vnic -l e1000g0 vnic1

Your VNIC is now created and you can use it with Solaris network monitoring and management tools:

$ dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
e1000g0     phys      1500   up       --         --
vboxnet0    phys      1500   unknown  --         --
vnic1       vnic      1500   up       --         e1000g0

Note that a random MAC address has been chosen for your VNIC:

$ dladm show-vnic
LINK         OVER         SPEED  MACADDRESS        MACADDRTYPE         VID
vnic1        e1000g0      100    2:8:20:a8:af:ce   random              0

You can now use your VNIC as if it were a "classical" physical link: you can plumb it and bring it up with the usual Solaris procedures, such as ifconfig and the Solaris network configuration files.
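For instance, a minimal sketch of bringing the VNIC up the traditional way (the address is an assumption):

```shell
# Plumb the VNIC and assign it a static IPv4 address, old style
ifconfig vnic1 plumb
ifconfig vnic1 192.168.1.60 netmask 255.255.255.0 up

# Verify the interface state and address
ifconfig vnic1
```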

Resource Control

Solaris network virtualization is tightly integrated with Solaris Resource Control. After a VNIC is created you can attach resource control parameters to it such as a control for maximum bandwidth consumption or CPU usage.

Bandwidth Management

As if it were a physical link, you can use the dladm command to establish a maximum bandwidth limit on a whole VNIC:

# dladm set-linkprop -p maxbw=300 vnic4
# dladm show-linkprop vnic4
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
vnic4        autopush        --   --             --             -- 
vnic4        zone            rw   --             --             -- 
vnic4        state           r-   unknown        up             up,down 
vnic4        mtu             r-   1500           1500           1500 
vnic4        maxbw           rw   300            --             -- 
vnic4        cpus            rw   --             --             -- 
vnic4        priority        rw   high           high           low,medium,high 
vnic4        tagmode         rw   vlanonly       vlanonly       normal,vlanonly 
vnic4        protection      rw   --             --             mac-nospoof,
                                                                ip-nospoof,
                                                                restricted 
vnic4        allowed-ips     rw   --             --             -- 

The maximum bandwidth limit of vnic4 is now set to 300 Mbps.

If you want to read an introduction to Solaris Projects and Resource Control you can read this blog post.

Using VNICs

VNICs are useful in a variety of use cases. VNICs are one of the building blocks of the full-fledged network virtualization layer offered by Solaris. The possibility of creating VNICs on the fly opens the door to complex network setups and resource control policies.

VNICs are especially useful when used in conjunction with other virtualization technologies such as:
  • Solaris Zones.
  • Oracle VM.
  • Oracle VM VirtualBox.

Using VNICs with Solaris Zones

Solaris Zones can use a shared or an exclusive IP stack. An exclusive IP stack has its own instances of the variables used by the TCP/IP stack, which are not shared with the global zone. This basically means that a Solaris Zone with an exclusive IP stack can have:
  • Its own routing table.
  • Its own ARP table.

and whatever other parameter Solaris lets you set on an IP stack.

Before Crossbow, the number of physical links on a server was a serious constraint when you needed to set up a large number of Solaris Zones for which an exclusive IP stack was desirable. Crossbow removes that limit: having a large number of exclusive-IP non-global zones is no longer an issue.
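As a sketch, assigning a dedicated VNIC to an exclusive-IP zone might look like the following session (the zone name, myzone, and the VNIC name are assumptions for illustration):

```shell
# Create a VNIC on the physical link for the zone's exclusive use
# dladm create-vnic -l e1000g0 vnic2

# Configure the zone to use an exclusive IP stack over that VNIC
# zonecfg -z myzone
zonecfg:myzone> set ip-type=exclusive
zonecfg:myzone> add net
zonecfg:myzone:net> set physical=vnic2
zonecfg:myzone:net> end
zonecfg:myzone> commit
zonecfg:myzone> exit
```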

Other Virtualization Software

The same reasoning applies to other virtualization software such as Oracle VM or Oracle VM VirtualBox: for every guest instance, you can create the VNICs you need for the exclusive use of that guest operating system.

In another post I'll focus on VirtualBox and describe how VNICs can be used with its guests.

Next Steps

There's more to Solaris Network Virtualization, these are just the basics. For instance, you will be able to fully virtualize a network topology by using:
  • VNICs.
  • Virtual Switches.
  • Etherstubs.
  • VLANs.

As far as resource control is concerned, bandwidth limiting is just the beginning. Solaris Network Virtualization will let you finely control your VNIC usage on a:
  • Per-transport basis.
  • Per-protocol basis.
  • CPU consumption per VNIC basis.

To discover what else Solaris Network Virtualization can do for you, keep on reading this blog and check out the official project documentation. You could also install an OpenSolaris guest with VirtualBox and experiment for yourself. There's nothing like a hands-on session.






Monday, 4 May 2009

Sun xVM VirtualBox v. 2.2: USB support on {Open}Solaris works and HP's multifunction printers work like a charm.

As I told you in another post, I had some problems with VirtualBox on a Solaris host: missing USB support not only meant that USB devices were unmanageable, it also meant that other software (such as HP's Solution Center for multifunction printers) wouldn't even install because of that missing feature. I planned to use my printer through the Ethernet interface, but the software wouldn't install.

That is, until I installed Sun xVM VirtualBox v. 2.2. Its experimental USB support was sufficient for HP's software to install, and the printer worked perfectly from both Windows and Linux guests. Direct access to the printer by setting up a filter was straightforward.

A note about the networking configuration: HP's Solution Center uses a certain number of TCP and UDP ports to communicate with the multifunction device. The printer documentation was detailed, and setting up a firewall or a set of NAT rules wouldn't have been hard. Anyway, once I realized that the problems I was experiencing with the scanner were due to VirtualBox's default network configuration, I decided to change the guest's network settings.

If you're a Solaris Express user who regularly updates their system, you've probably read about the Crossbow project. Crossbow, which was integrated in Nevada build 105, aims to provide the building blocks for network virtualization on Solaris hosts. The first thing I thought about was, indeed, creating a virtual NIC. But the solution was easier than that, and it's called a "bridged network" in VirtualBox jargon. You simply change the adapter configuration for your VirtualBox guest from NAT to Bridged and optionally choose the physical NIC you want to bridge upon, in case your system has more than one. Boot your guest OS and you'll have a virtual NIC at your disposal, without the limitations of the NAT configuration. And if you are communicating with the "outside world", such as a network multifunction printer or some CIFS client, your guest OS's NIC will appear just like a physical NIC.
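For the command-line inclined, the same change can be sketched with VBoxManage; the VM name ("WinXP") and the host NIC here are assumptions, so check your own names with the list subcommands first:

```shell
# List VMs and the host interfaces available for bridging
VBoxManage list vms
VBoxManage list bridgedifs

# Switch the VM's first adapter from NAT to bridged mode over the
# chosen physical NIC (run while the VM is powered off)
VBoxManage modifyvm "WinXP" --nic1 bridged --bridgeadapter1 e1000g0
```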

The only caveat with this technique is that, on Solaris hosts, virtual NICs and VirtualBox bridged networking aren't implemented (yet) on top of wireless NICs.

Enjoy,
Grey

Wednesday, 23 April 2008

Solaris Zones on different network interfaces: setting up the routing table

For the reasons I explained here, I created a zone to install Blastwave's software. Furthermore, as I usually use ssh to connect to this machine from outside my LAN, I'm running the ssh service in another non-global zone. My Sun Ultra 20 M2 has two NICs: the two zones share a physical NIC (nge1), while the global zone uses both nge0 and nge1, as shown in the following two fragments of the zone configuration file.

Everything was working OK, and I usually use zlogin when I connect to a zone. I had no reason, either, to connect to Blastwave's zone using ssh, because I loopback-mounted the /opt/csw filesystem so that it's available to desktop users who log in to the global zone.

When I tried to ssh into a zone, I realized that I couldn't! A quick check with netstat told me why:

bash-3.2# netstat -r

Routing Table: IPv4
  Destination           Gateway                        Flags  Ref   Use  Interface
  --------------------  -----------------------------  -----  ----  ---  ---------
  default               speedtouch.lan                 UG     1     58   nge0
  default               speedtouch.lan                 UG     1     108  nge1
  192.168.0.0           solaris.lan                    U      1     19   nge0
  192.168.0.0           Unknown-00-14-4f-80-d6-b1.lan  U      1     3    nge1
  solaris               solaris                        UH     3     561  lo0

Routing Table: IPv6
  Destination/Mask            Gateway                     Flags  Ref  Use  If
  --------------------------  --------------------------  -----  ---  ---  -----
  ::1                         ::1                         UH     1    35   lo0

The zones' IP addresses (192.168.0.132 and 192.168.0.140) were indeed not reachable. The quick fix was updating the routing table:

# route add 192.168.0.132 192.168.0.130
# route add 192.168.0.140 192.168.0.130
An OpenSolaris project named Crossbow will solve this kind of problem by fully virtualizing the network interfaces.