Sunday, October 2, 2016

Another of Jarret's posts on VirtuallyHyper is a good reference for installing netatalk.

Time Machine Backups

I knew that for Time Machine the easiest way to back up to a remote machine is to use AFP. From OS X Yosemite: Disks you can use with Time Machine:
If your backup disk is on a network, the network server must use Apple File Protocol (AFP) file sharing, and both your Mac and the networked backup disk should have Mac OS X v10.5.6 or later. The AFP disk must also be “mounted” (available to your Mac) during the set up of Time Machine. After you select the network disk in Time Machine preferences as a backup disk, Time Machine automatically mounts the disk when it’s time to backup or restore your data.
There are a couple of workarounds that let you use a Samba share instead. The steps are laid out in Time Machine Over SMB Share. The gist of it:
  1. Mount the Samba Share
  2. Create a sparse image file:
    hdiutil create -size 200G -fs HFS+J -volname 'Time Machine Backups' -type SPARSEBUNDLE time_machine.sparsebundle
    
  3. Copy the Sparse Bundle onto the Samba share:
    rsync -avzP time_machine.sparsebundle /Volumes/Samba_Share
    
  4. Mount the Sparse Bundle as a local Volume:
    hdiutil attach /Volumes/Samba_Share/time_machine.sparsebundle
    
  5. Then enable non-supported destinations for Time Machine Backups
    defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
    
  6. Set the Sparse Bundle as the Destination for Time Machine Backups
    sudo tmutil setdestination "/Volumes/Time Machine Backups"
    
  7. Launch the Time Machine Preferences and run the backup
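If you’d rather kick off that last step from the terminal, tmutil can start the backup as well (a small aside on my part; this assumes the sparse bundle is already set as the destination per step 6):
tmutil startbackup --block   # --block waits for the backup to finish before returning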
That’s cool and all, but I wanted to try setting up my own AFP server anyway. My main storage box runs OmniOS, and after doing some research it looked like you can install netatalk and mDNSResponder on OpenSolaris-derived systems and use them for Time Machine backups.

Installing netatalk and mDNSResponder on OmniOS

Before we go any further, let’s create a ZFS dataset for the Time Machine backups:
zfs create data/tm
zfs set compression=gzip data/tm
And as per Fixing AFP access on OmniOS Solaris w/ napp-it, let’s enable the right ACL settings:
zfs set aclinherit=passthrough data/tm
zfs set aclmode=passthrough data/tm
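To double-check that those properties took effect, a quick zfs get works:
zfs get compression,aclinherit,aclmode data/tm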
I was using napp-it so I just installed AFP using their scripts:
wget -O - www.napp-it.org/afp | perl
It just enables a pkg publisher and grabs the packages from there. If you want to run the commands manually, check out the steps laid out in napp-it afp on omnios; the relevant commands from that page:
pkg set-publisher -g http://scott.mathematik.uni-ulm.de/release uulm.mawi
pkg install netatalk
pkg unset-publisher uulm.mawi
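You can then confirm the package landed (pkg list only queries the local image, so it works even after the publisher is unset):
pkg list netatalk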
After the install is done you will see the multicast daemon (mDNSResponder) running:
root@zf:~#ps -ef | grep mdns
noaccess 11818     1   0 10:06:45 ?           0:00 /usr/lib/inet/mdnsd
and netatalk running as well:
root@zf:~#ps -ef | grep afp
    root 12394 12392   0 10:17:03 ?           0:00 /usr/local/sbin/i386/cnid_metad -d -F /etc/afp.conf
    root 12393 12392   0 10:17:03 ?           0:01 /usr/local/sbin/i386/afpd -d -F /etc/afp.conf
and in the logs under /var/adm/messages you will see the following:
Jun  6 10:17:02 zf netatalk[12392]: [ID 702911 daemon.notice] Netatalk AFP server starting
Jun  6 10:17:02 zf netatalk[12392]: [ID 702911 daemon.notice] Registered with Zeroconf
Jun  6 10:17:02 zf cnid_metad[12394]: [ID 702911 daemon.notice] CNID Server listening on localhost:4700
Jun  6 10:17:03 zf afpd[12393]: [ID 702911 daemon.notice] Netatalk AFP/TCP listening on 192.168.1.103:548
You will also notice mdns added to your /etc/nsswitch.conf file:
root@zf:~#grep mdns /etc/nsswitch.conf
# server lookup.  See resolv.conf(4). For lookup via mdns
# svc:/network/dns/multicast:default must also be enabled. See mdnsd(1M)
hosts:      files dns mdns
ipnodes:   files dns mdns
Here is the information about the netatalk daemon:
root@zf:~#afpd -V
afpd 3.1.7 - Apple Filing Protocol (AFP) daemon of Netatalk

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; either version 2 of the License, or (at your option) any later
version. Please see the file COPYING for further information and details.

afpd has been compiled with support for these features:

          AFP versions: 2.2 3.0 3.1 3.2 3.3 3.4
         CNID backends: dbd last tdb
      Zeroconf support: mDNSResponder
  TCP wrappers support: Yes
         Quota support: Yes
   Admin group support: Yes
    Valid shell checks: Yes
      cracklib support: No
            EA support: ad | sys
           ACL support: Yes
          LDAP support: Yes
         D-Bus support: No
     Spotlight support: No
         DTrace probes: Yes

              afp.conf: /etc/afp.conf
           extmap.conf: /etc/extmap.conf
       state directory: /var/netatalk/
    afp_signature.conf: /var/netatalk/afp_signature.conf
      afp_voluuid.conf: /var/netatalk/afp_voluuid.conf
       UAM search path: /usr/local/lib/netatalk//
  Server messages path: /var/netatalk/msg/
At this point we need to create a config file; I found a couple of sites with example configurations to work from. First I created a group and added my user to it:
root@zf:~#groupadd tmusers
root@zf:~#useradd -g tmusers elatov
root@zf:~#passwd elatov
New Password:
Re-enter new Password:
passwd: password successfully changed for elatov
Also let’s create the directory for this user:
root@zf:~#mkdir /data/tm/elatov
root@zf:~#chown elatov:tmusers /data/tm/elatov
root@zf:~#chmod 700 /data/tm/elatov
And then I added a simple config like this:
root@zf:~#cat /etc/afp.conf
;
; Netatalk 3.x configuration file
;

[Global]
 mimic model = TimeCapsule6,106
 log level = default:warn
 log file = /var/adm/afpd.log
 hosts allow = 192.168.1.0/24
 disconnect time = 1
 vol dbpath = /var/netatalk/CNID/$u/$v/

; [Homes]
; basedir regex = /xxxx

[time_mach]
 time machine = yes
 path=/data/tm/$u
 valid users = @tmusers
 #200 GB (units of MB)
 vol size limit = 204800
Most of the settings are covered in the links above; the mimic model string can be grabbed from the following file on a Mac:
/System/Library/CoreServices/CoreTypes.bundle/Contents/Info.plist
And you will see something like this:
<dict>
 <key>com.apple.device-model-code</key>
 <array>
  <string>AirPort6</string>
  <string>AirPort6,106</string>
  <string>TimeCapsule</string>
  <string>TimeCapsule6</string>
  <string>TimeCapsule6,106</string>
  <string>TimeCapsule6,109</string>
  <string>TimeCapsule6,113</string>
  <string>TimeCapsule6,116</string>
 </array>
</dict>
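If you just want to pull those model strings out quickly, a plain grep against the plist works (assuming the key shows up verbatim in the XML, as it does above):
grep -A 10 'com.apple.device-model-code' /System/Library/CoreServices/CoreTypes.bundle/Contents/Info.plist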
I then restarted both daemons:
svcadm restart netatalk
svcadm restart multicast
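And a quick check that both services came back online (using the same short FMRIs as the restart commands):
svcs netatalk multicast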

Mounting the AFP Share on Mac OS X

If you really want to, you can compile afpfs-ng on a Linux machine and check the status of the server. I tried that on my Gentoo box and here is what I saw:
elatov@gen:~$/usr/local/afpng/bin/afpgetstatus zf:548
Server name: zf
Machine type: Netatalk3.1.7
AFP versions:
     AFP2.2
     AFPX03
     AFP3.1
     AFP3.2
UAMs:
     DHCAST128
     DHX2
Signature: 38ffffff40ff69ff7bffffff21ffffffffff92
I just wanted to query the server to make sure it responds with the right version. I then went to Finder -> Go -> Connect to Server (CMD+K) and entered the following URL:
afp://zf
It prompted me for a password but failed to mount the drive. Checking the logs, I saw the following:
tail -f /var/adm/afpd.log
Jun 06 11:50:00.418607 cnid_metad[15639] {cnid_metad.c:321} (error:CNID): set_dbdir: mkdir failed for /var/netatalk/CNID/elatov/time_mach/
Jun 06 11:50:01.419231 cnid_metad[15639] {cnid_metad.c:321} (error:CNID): set_dbdir: mkdir failed for /var/netatalk/CNID/elatov/time_mach/
Jun 06 11:50:01.419333 afpd[15765] {cnid_dbd.c:414} (error:CNID): transmit: Request to dbd daemon (volume time_mach) timed out.
Jun 06 11:50:01.419447 afpd[15765] {volume.c:857} (error:AFPDaemon): afp_openvol(/data/tm/elatov): Fatal error: Unable to get stamp value from CNID backend
Jun 06 11:50:05.727993 afpd[15765] {dsi_stream.c:504} (error:DSI): dsi_stream_read: len:0, unexpected EOF
So I went ahead and created the directory:
root@zf:~#mkdir /var/netatalk/CNID/elatov
After that the share mounted without issues:
elatov@macair:~$mount | grep afp
//elatov@zf/time_mach on /Volumes/time_mach (afpfs, nodev, nosuid, mounted by elatov)
elatov@macair:~$df -Ph -T afpfs
Filesystem              Size   Used  Avail Capacity  Mounted on
//elatov@zf/time_mach  200Gi   27Gi  173Gi    14%    /Volumes/time_mach
and I saw the client on the OmniOS machine:
root@zf:~#/usr/local/bin/macusers
PID      UID      Username         Name                 Logintime Mac
15947    978      elatov                                11:54:48
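As an aside, if you’d rather skip Finder entirely, the same share can be mounted from the terminal with mount_afp (just a sketch; the ~/afp_mount mount point is something I made up, while the user, server, and share names are from this setup):
mkdir -p ~/afp_mount
mount_afp -i afp://elatov@zf/time_mach ~/afp_mount   # -i prompts for the password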
I then added the AFP share in the Time Machine settings. After you add the AFP share as the backup destination, you can use tmutil to get information about it:
elatov@macair:~$tmutil destinationinfo
====================================================
Name          : time_mach
Kind          : Network
URL           : afp://elatov@zf/time_mach
Mount Point   : /Volumes/time_mach-1
ID            : C17BD870-304F-4FDB-AF6D-3A78B24729AB
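Once the first backup completes you can also confirm it from the client; tmutil prints the path of the most recent backup:
tmutil latestbackup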

OmniOS Network Bandwidth Monitoring

When I started the backup I wanted to see what the bandwidth usage was on OmniOS, so I enabled extended network accounting:
root@zf:~#acctadm -e extended -f /var/log/net.log net
root@zf:~#acctadm net
            Net accounting: active
       Net accounting file: /var/log/net.log
     Tracked net resources: extended
   Untracked net resources: none
And then checking out the file (after waiting a couple of minutes), I saw the following:
root@zf:~#dlstat show-link -h -f /var/log/net.log e1000g0
LINK         START         END           RBYTES   OBYTES   BANDWIDTH
e1000g0      12:38:46      12:39:06      1408650  3190663     57.622 Mbp
e1000g0      12:39:06      12:39:26      1370705  3090352     56.064 Mbp
e1000g0      12:39:26      12:39:46      1653583  3670691     67.611 Mbp
e1000g0      12:39:46      12:40:06      1393440  3153884     56.999 Mbp
I was sitting at around 54 Mbps, which makes sense because I was on wireless on my Mac. After I was done checking out the bandwidth, I disabled the network accounting:
root@zf:~#acctadm -D net
root@zf:~#acctadm net
            Net accounting: inactive
       Net accounting file: /var/log/net.log
     Tracked net resources: extended
   Untracked net resources: none
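As an aside, if you only need a quick live view of the link and don’t care about logging, dlstat can poll the NIC directly (just a sketch; the interval is in seconds):
dlstat -i 5 e1000g0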
And on the OmniOS machine I did see the sparse bundle get created:
root@zf:~#ls -lh /data/tm/elatov/
total 17
drwx--S---   3 elatov   tmusers        9 Jun  6 12:47 macair.sparsebundle

Running the Bonjour Services on OmniOS

In Napp-it you can enable the Bonjour services to auto-start if you want. The Napp-it UI shows sample commands you can enter:
(screenshot: napp-it Bonjour setup instructions)
Basically, run the following command (also seen here):
root@zf:~#dns-sd -R "zf" _afpovertcp._tcp local. 548 &
[1] 19946
root@zf:~#Registering Service zf._afpovertcp._tcp.local. port 548
Got a reply for zf._afpovertcp._tcp.local.: Name now registered and active
Now if you query for AFP services from the Mac, you will see the server advertising that service:
elatov@macair:~$dns-sd -B _afpovertcp._tcp
Browsing for _afpovertcp._tcp
DATE: ---Sat 06 Jun 2015---
13:23:01.290  ...STARTING...
Timestamp     A/R    Flags  if Domain               Service Type         Instance Name
13:23:01.291  Add        2   4 local.               _afpovertcp._tcp.    zf
If you want the server to show up with an icon of a storage device you can also run this:
root@zf:~# dns-sd -R "zf" _device-info._tcp. local 548 model=Xserve &
and of course from the mac you can query that information:
elatov@macair:~$dns-sd -B _device-info._tcp
Browsing for _device-info._tcp
DATE: ---Sun 07 Jun 2015---
19:05:20.472  ...STARTING...
Timestamp     A/R    Flags  if Domain               Service Type         Instance Name
19:05:20.474  Add        2   4 local.               _device-info._tcp.   zf
And in Finder you will see your device.
If you are manually mounting the AFP share, it probably won’t matter that much.
Also, if you need to open up the firewall, the relevant ports are covered on the Arch Linux Netatalk page.

Some plagiarism

I found this old page that I once referenced for adding a user account...

There are some good bits that I thought I'd copy over here for reference.


 20 April 2013  Jarret Lavallee


Adding a Mirror disk to the rpool

It is a great idea to have some redundancy in the rpool. With my recent stretch of lost hard drives, I am going to add a mirrored disk to the rpool. First let’s list the zpool and see what disk is attached:
root@megatron:~# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: none requested

config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c3d0s0    ONLINE       0     0     0

errors: No known data errors
Let’s list the other disks in the system:
root@megatron:~# echo |format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c2d1 wdc WD32-  WD-WX30AB9C376-0001-298.09GB
          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
       1. c3d0 unknown -Unknown-0001 cyl 30398 alt 2 hd 255 sec 63
          /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
Specify disk (enter its number): Specify disk (enter its number): 
So c2d1 is available for a mirror. Let’s create a Solaris partition on it; instructions on using fdisk can be found in Karim’s article.
root@megatron:~# fdisk c2d1
Let’s copy over the slice information from our original drive:
root@megatron:~# prtvtoc /dev/dsk/c3d0s0 |fmthard -s - /dev/rdsk/c2d1s0
Let’s install grub on the drive:
root@megatron:~# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d1s0
Now we can add it to the zpool:
root@megatron:~# zpool attach -f rpool c3d0s0 c2d1s0 
Make sure to wait until the resilver is done before rebooting. To check on the resilver progress, we can run the following:
root@megatron:~# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Mar 17 21:47:50 2013
    186M scanned out of 19.3G at 2.24M/s, 2h25m to go
    185M resilvered, 0.94% done

Configuring the Networking

Since this is a NAS, we want to give it a static network address. First let’s disable nwam:
root@omnios-bloody:~# svcadm disable nwam
root@omnios-bloody:~# svcs nwam
STATE          STIME    FMRI
disabled       14:42:58 svc:/network/physical:nwam
Now let’s make sure network/physical:default is enabled:
root@omnios-bloody:~# svcadm enable network/physical:default
root@omnios-bloody:~# svcs network/physical:default
STATE          STIME    FMRI
online         14:43:05 svc:/network/physical:default
So now we can create the interfaces. I have Intel NICs using the e1000g driver, so the NICs show up as e1000g0, e1000g1, etc. Different drivers will produce different NIC names. First we need to list the interfaces we will use:
root@omnios-bloody:~# dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
e1000g0     phys      1500   up       --         --
e1000g1     phys      1500   up       --         --
e1000g2     phys      1500   up       --         --
For each of the NICs we want to set up, we will need to create them:
root@omnios-bloody:~# ipadm create-if e1000g0
root@omnios-bloody:~# ipadm create-if e1000g1
root@omnios-bloody:~# ipadm create-if e1000g2
Now we can configure the static IP addresses:
root@omnios-bloody:~# ipadm create-addr -T static -a 192.168.5.82/24 e1000g0/v4
root@omnios-bloody:~# ipadm create-addr -T static -a 10.0.1.82/24 e1000g1/v4
root@omnios-bloody:~# ipadm create-addr -T static -a 10.0.0.82/24 e1000g2/v4
List the addresses to confirm that the changes were taken:
root@omnios-bloody:~# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
e1000g0/v4        static   ok           192.168.5.82/24
e1000g1/v4        static   ok           10.0.1.82/24
e1000g2/v4        static   ok           10.0.0.82/24
lo0/v6            static   ok           ::1/128
Now we need to add the default gateway. In my case I want the traffic to route out of the 192.168.5.1 gateway:
root@omnios-bloody:~# route -p add default 192.168.5.1
add net default: gateway 192.168.5.1
add persistent net default: gateway 192.168.5.1
Let’s check the routing table to make sure that our default gateway is set:
root@omnios-bloody:~# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              192.168.5.1          UG        1          0
10.0.0.0             10.0.0.82            U         2          0 e1000g2
10.0.1.0             10.0.1.82            U         4        535 e1000g1
127.0.0.1            127.0.0.1            UH        2         24 lo0
192.168.5.0          192.168.5.82         U         2          0 e1000g0
Just to confirm, let’s ping the gateway:
root@omnios-bloody:~# ping 192.168.5.1
192.168.5.1 is alive
Next we have to add the DNS servers to the /etc/resolv.conf file:
root@omnios-bloody:~# echo 'domain moopless.com' > /etc/resolv.conf
root@omnios-bloody:~# echo 'nameserver 192.168.5.10' >> /etc/resolv.conf
root@omnios-bloody:~# echo 'nameserver 8.8.8.8' >> /etc/resolv.conf
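Which should leave the file looking like this:
root@omnios-bloody:~# cat /etc/resolv.conf
domain moopless.com
nameserver 192.168.5.10
nameserver 8.8.8.8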
Now we need to tell the host to use DNS for host resolution:
root@omnios-bloody:~# cp /etc/nsswitch.dns /etc/nsswitch.conf
Let’s test DNS resolution and network connectivity with a simple ping:
root@omnios-bloody:~# ping google.com
google.com is alive
You may have noticed that my hostname is omnios-bloody. I accepted the defaults during the installation. I will use the command below to change it:
root@omnios-bloody:~# echo "megatron" > /etc/nodename
Now we can reboot the server to make sure that everything comes up on boot:
root@omnios-bloody:~# reboot
After the reboot, log in and check the networking and hostname:
root@megatron:~# ping virtuallyhyper.com
virtuallyhyper.com is alive
Great, so the networking is working. Let’s make sure ssh is enabled:
root@megatron:~# svcadm enable ssh  
root@megatron:~$ svcs ssh
STATE          STIME    FMRI
online         15:45:39 svc:/network/ssh:default

Add a local user

We have been logging in as root, so let’s add a local user. Since I am going to be mounting my existing zpools, I do not want to go back to all of my client machines and change the user IDs. To avoid any ID mismatch I am going to create my user with the same ID as it previously was:
root@megatron:~# useradd -u 1000 -g 10 -m -d /export/home/jarret -s /bin/bash jarret
64 blocks
If you just want the default UID, you can run the command below. The -d option specifies the home directory and -m creates it.
root@megatron:~# useradd -m -d /export/home/username username
Let’s set a password for the new user:
root@megatron:~# passwd jarret
New Password:
Re-enter new Password:
passwd: password successfully changed for jarret
Now I want to add this user to the sudoers file. Type visudo to safely edit the /etc/sudoers file. Find the line below and remove the ‘#’ mark to enable it:
## Uncomment to allow members of group sudo to execute any command
%sudo ALL=(ALL) ALL
This will allow any user in the sudo group to run sudo. Let’s add the sudo group:
root@megatron:~# groupadd sudo
Now we can add our newly created user to the sudo group:
root@megatron:~# usermod -G sudo jarret
Now let’s verify that the user is in the sudo group:
root@megatron:~# id jarret
uid=1000(jarret) gid=10(staff) groups=10(staff),100(sudo)
Ok, so that looks good. Let’s switch to the jarret user and run sudo:
root@megatron:~# su - jarret
OmniOS 5.11     omnios-dda4bb3  2012.06.14
jarret@megatron:~$ sudo -l
Password:
User jarret may run the following commands on this host:
    (ALL) ALL
Now we have a user with a home directory that is able to run elevated commands with sudo.

Configure Extra Repositories

By default OmniOS comes with a package repository, but there are a few other repositories out there. A list can be found here. Out of those, I will be adding these repositories for their specific packages:
  • http://pkg.cs.umd.edu for fail2ban
  • http://scott.mathematik.uni-ulm.de for smartmontools
Let’s list the existing repository:
root@megatron:~# pkg publisher
PUBLISHER                             TYPE     STATUS   URI
omnios                                origin   online   http://pkg.omniti.com/omnios/bloody/
Let’s add the new repositories:
root@megatron:~# pkg set-publisher -g http://pkg.cs.umd.edu/ cs.umd.edu
root@megatron:~# pkg set-publisher -g http://scott.mathematik.uni-ulm.de/release/ uulm.mawi
Let’s list the repositories again to confirm the new ones are there:
root@megatron:~# pkg publisher
PUBLISHER                             TYPE     STATUS   URI
omnios                                origin   online   http://pkg.omniti.com/omnios/bloody/
cs.umd.edu                            origin   online   http://pkg.cs.umd.edu/
uulm.mawi                             origin   online   http://scott.mathematik.uni-ulm.de/release/
The installation ISO is generally behind in packages, depending on when it was released, so let’s do an update. First we should list the updates that are available:
root@megatron:~# pkg update -nv
            Packages to remove:       3
           Packages to install:       4
            Packages to update:     387
     Estimated space available: 5.44 GB
Estimated space to be consumed: 1.08 GB
       Create boot environment:     Yes
     Activate boot environment:     Yes
Create backup boot environment:      No
          Rebuild boot archive:     Yes
The “Create boot environment: Yes” line means a reboot will be required to activate the changes; a new BE will be created. Karim went into depth about beadm in this post. Let’s run the upgrade:
root@megatron:~# pkg update -v
...
A clone of omnios exists and has been updated and activated.
On the next boot the Boot Environment omnios-1 will be
mounted on '/'.  Reboot when ready to switch to this updated BE.
Let’s go ahead and reboot.
root@megatron:~# reboot
Let’s log in and check the current BE.
jarret@megatron:~$ beadm list
BE               Active Mountpoint Space Policy Created
omnios-1         NR     /          4.69G static 2013-03-14 12:45
omnios           -      -          33.6M static 2012-08-31 16:54
We are running on omnios-1 as expected.

Installing Napp-it

Napp-it is a web management front end for Solaris. It is not a necessity, but I really like using it to create COMSTAR views rather than running the commands manually. There are a bunch of features that can be installed with it and it is actively developed and maintained. Let’s install napp-it:
root@megatron:~# wget -O - www.napp-it.org/nappit | perl
...
 ##############################################################################
 -> REBOOT NOW or activate current BE followed by /etc/init.d/napp-it restart!!
 ##############################################################################
    If you like napp-it, please support us and donate at napp-it.org
The installer goes through and does many things. The list is below, if you are curious:
  1. Makes a new BE
  2. Adds a user named napp-it
  3. Makes some changes to PAM
  4. Grabs and unzips the napp-it web package
  5. Installs iscsi/target, iperf, bonnie++, and parted (if available in the repositories)
  6. Adds some symbolic links for the GNU compilers (gcc, g++, etc.)
  7. Installs gcc
  8. Adds the cs.umd.edu repository
  9. Configures napp-it to start on boot
Now we should reboot into the new BE:
root@megatron:~# reboot
Let’s check the active BE to see if the napp-it one is now activated:
root@megatron:~# beadm list
BE                                         Active Mountpoint Space Policy Created
napp-it-0.9a8                              R      -          4.95G static 2013-03-14 13:26
omnios-1                                   N      /          0     static 2013-03-14 12:45
omnios                                     -      -          33.6M static 2012-08-31 16:54
pre_napp-it-0.9a8                          -      -          1.00K static 2013-03-14 13:16
We should now have napp-it running. It can be accessed at ***. I will touch on the configuration using napp-it later.

Configuring NFS

One of my favorite features of ZFS is that it stores the NFS settings. The settings will come over with the zpools. We just need to enable NFS and tweak any settings that we want before we mount any existing zpools. First let’s see if NFS is running:
root@megatron:~# svcs *nfs*
STATE          STIME    FMRI
disabled       13:01:08 svc:/network/nfs/client:default
disabled       13:01:08 svc:/network/nfs/nlockmgr:default
disabled       13:01:08 svc:/network/nfs/cbd:default
disabled       13:01:08 svc:/network/nfs/mapid:default
disabled       13:01:08 svc:/network/nfs/status:default
disabled       13:01:08 svc:/network/nfs/server:default
disabled       13:01:09 svc:/network/nfs/log:default
disabled       13:01:27 svc:/network/nfs/rquota:default
The NFS services are currently disabled. Let’s check the settings before we enable the service:
root@megatron:~# sharectl get nfs
servers=16
lockd_listen_backlog=32
lockd_servers=20
lockd_retransmit_timeout=5
grace_period=90
server_versmin=2
server_versmax=4
client_versmin=2
client_versmax=4
server_delegation=on
nfsmapid_domain=
max_connections=-1
protocol=ALL
listen_backlog=32
device=
These settings are the defaults, but they do not lead to good performance; hardware has improved drastically since those defaults were defined. This machine is an Intel L5520 with 32 GB of RAM, so it should be able to handle more load. I usually tune NFS as outlined in this blog article. I have standardized on NFS version 3, so I set up both the server and the clients to use it.
In previous versions of Solaris, the NFS properties were stored in the /etc/default/nfs file. Now we can edit the properties using the sharectl utility:
root@megatron:~# sharectl set -p servers=512 nfs
root@megatron:~# sharectl set -p lockd_servers=128 nfs
root@megatron:~# sharectl set -p lockd_listen_backlog=256 nfs
root@megatron:~# sharectl set -p server_versmax=3 nfs
root@megatron:~# sharectl set -p client_versmax=3 nfs
Let’s check to see if the settings are applied:
root@megatron:~# sharectl get nfs
servers=512
lockd_listen_backlog=256
lockd_servers=128
lockd_retransmit_timeout=5
grace_period=90
server_versmin=2
server_versmax=3
client_versmin=2
client_versmax=3
server_delegation=on
nfsmapid_domain=
max_connections=-1
protocol=ALL
listen_backlog=32
device=
Let’s start up the services that we need:
 root@megatron:~# svcadm enable -r nfs/server
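A quick glance to make sure the server and the services it depends on actually came online (mirroring the earlier svcs check):
svcs *nfs*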

Import zpools

I tried to import my zpools but ran into an issue. When I ran zpool import I expected to see a list of the zpools that I could import. Instead the zpool command asserted and failed out. Below is what I saw:
root@megatron:~# zpool import        
Assertion failed: rn->rn_nozpool == B_FALSE, file ../common/libzfs_import.c, line 1080, function zpool_open_func Abort (core dumped)
I did a little research and ended up finding this bug, which listed a workaround: move the /dev/dsk and /dev/rdsk directories out of the way and then re-scan for disks. I gave it a try:
root@megatron:/dev# mv rdsk rdsk-old 
root@megatron:/dev# mv dsk dsk-old 
root@megatron:/dev# mkdir dsk 
root@megatron:/dev# mkdir rdsk
root@megatron:/dev# devfsadm -c disk
I ran a zpool import and it ran fine, so I then imported the zpools:
root@megatron:/dev# zpool import -f data 
root@megatron:/dev# zpool import -f vms
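And a final sanity check that the imported pools are healthy (a minimal verification; zpool status -x only reports pools with problems):
zpool list data vms
zpool status -x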