Tuesday, February 23, 2010

Solaris: ZFS Administration

ZFS has been designed to be robust, scalable and simple to administer.


ZFS pool storage features:
ZFS eliminates volume management altogether. Instead of forcing us to create virtual volumes, ZFS aggregates devices into a storage pool. The storage pool describes the physical characteristics of the storage (device layout, data redundancy, and so on) and acts as an arbitrary data store from which file systems can be created.
File systems grow automatically within the space allocated to the storage pool.
ZFS is a transactional file system, which means that the file system state is always consistent on disk. With a transactional file system, data is managed using copy-on-write semantics.
ZFS supports storage pools with varying levels of data redundancy, including mirroring and a variation on RAID-5. When a bad data block is detected, ZFS fetches the correct data from another replicated copy, and repairs the bad data, replacing it with the good copy.
The file system itself is 128-bit, allowing for 256 quadrillion zettabytes of storage. Directories can have up to 2 to the power of 48 (256 trillion) entries, and no limit exists on the number of file systems or number of files that can be contained within a file system.
A snapshot is a read-only copy of a file system or volume. Snapshots can be created quickly and easily. Initially, snapshots consume no additional space within the pool.
Clone – A file system whose initial contents are identical to the contents of a snapshot.

ZFS component Naming requirements:
Each ZFS component must be named according to the following rules;
1. Empty components are not allowed.
2. Each component can only contain alphanumeric characters in addition to the following 4 special characters:
a. Underscore (_)
b. Hyphen (-)
c. Colon (:)
d. Period (.)
3. Pool names must begin with a letter, except that the beginning sequence c(0-9) is not allowed (this is because of the physical device naming convention). In addition, pool names that begin with mirror, raidz, or spare are not allowed, as these names are reserved.
4. Dataset names must begin with an alphanumeric character.
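
For example (hypothetical devices), names that violate these rules are rejected when creating a pool:

# zpool create mirror c2d0s3 c2d0s4 (fails: "mirror" is a reserved name)
# zpool create c1t0d0 c2d0s7 (fails: begins with the device-style sequence c0-c9)
# zpool create 1testpool c2d0s7 (fails: pool names must begin with a letter)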

ZFS Hardware and Software requirements and recommendations:
1. A SPARC or x86 system that is running the Solaris 10 6/06 release or a later release.
2. The minimum disk size is 128 MB. The minimum amount of disk space required for a storage pool is approximately 64 MB.
3. The minimum amount of memory recommended to install a Solaris system is 512 MB. However, for good ZFS performance, at least 1 GB of memory is recommended.
4. When creating a mirrored disk configuration, multiple controllers are recommended.



Commonly used ZFS commands:

zpool create
zpool add
zpool remove
zpool attach
zpool detach
zpool destroy
zpool list
zpool status
zpool replace
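
Of the zpool subcommands listed above, add, attach, detach and replace are not demonstrated in the outputs below; a rough sketch of their usage (hypothetical devices, syntax per zpool(1M)):

# zpool add testpool c2d0s5 (grow the pool with another top-level device)
# zpool attach testpool c2d0s7 c2d0s6 (mirror the existing device c2d0s7 with c2d0s6)
# zpool detach testpool c2d0s6 (detach one side of the mirror)
# zpool replace testpool c2d0s7 c2d0s6 (swap a failing device for a new one)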



zfs create
zfs destroy
zfs snapshot
zfs rollback
zfs clone
zfs list
zfs set
zfs get
zfs mount
zfs unmount
zfs share
zfs unshare
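
zfs share and zfs unshare are not demonstrated in the outputs below; a minimal sketch, assuming a dataset that should be exported over NFS:

# zfs set sharenfs=on testpool/homedir
# zfs share testpool/homedir (share it immediately; sharenfs=on also shares it at boot)
# zfs unshare testpool/homedir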




Output: Creating a zpool:

bash-3.00# zpool create testpool c2d0s7
bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
testpool 2G 77.5K 2.00G 0% ONLINE -

bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1d0s0 20G 10G 9.3G 53% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 3.3G 728K 3.3G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/usr/lib/libc/libc_hwcap2.so.1
20G 10G 9.3G 53% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 3.3G 48K 3.3G 1% /tmp
swap 3.3G 32K 3.3G 1% /var/run
testpool 2.0G 24K 2.0G 1% /testpool



Output: Creating a file system under a zpool:

bash-3.00# zfs create testpool/homedir
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1d0s0 20G 10G 9.3G 53% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 3.3G 728K 3.3G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/usr/lib/libc/libc_hwcap2.so.1
20G 10G 9.3G 53% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 3.3G 48K 3.3G 1% /tmp
swap 3.3G 32K 3.3G 1% /var/run
testpool 2.0G 25K 2.0G 1% /testpool
testpool/homedir 2.0G 24K 2.0G 1% /testpool/homedir

bash-3.00# mkfile 100m testpool/homedir/newfile
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1d0s0 20G 10G 9.1G 54% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 3.1G 728K 3.1G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/usr/lib/libc/libc_hwcap2.so.1
20G 10G 9.1G 54% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 3.1G 48K 3.1G 1% /tmp
swap 3.1G 32K 3.1G 1% /var/run
testpool 2.0G 25K 1.9G 1% /testpool
testpool/homedir 2.0G 100M 1.9G 5% /testpool/homedir


Mirror:

bash-3.00# zpool create testmirrorpool mirror c2d0s3 c2d0s4
bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
testmirrorpool 4.97G 52.5K 4.97G 0% ONLINE -
testpool 2G 100M 1.90G 4% ONLINE -
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1d0s0 20G 10G 9.1G 54% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 3.1G 736K 3.1G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/usr/lib/libc/libc_hwcap2.so.1
20G 10G 9.1G 54% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 3.1G 48K 3.1G 1% /tmp
swap 3.1G 32K 3.1G 1% /var/run
testpool 2.0G 25K 1.9G 1% /testpool
testpool/homedir 2.0G 100M 1.9G 5% /testpool/homedir
testmirrorpool 4.9G 24K 4.9G 1% /testmirrorpool



bash-3.00# cat /etc/mnttab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#

testpool /testpool zfs rw,devices,setuid,exec,atime,dev=2d50002 1258087961
testpool/homedir /testpool/homedir zfs rw,devices,setuid,exec,atime,dev=2d50003 1258088096
testmirrorpool /testmirrorpool zfs rw,devices,setuid,exec,atime,dev=2d50004 1258089634


DESTROYING A POOL:

bash-3.00# zpool destroy testmirrorpool
bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
testpool 2G 100M 1.90G 4% ONLINE -



MANAGING ZFS PROPERTIES:

bash-3.00# zfs get all testpool/homedir
NAME PROPERTY VALUE SOURCE
testpool/homedir type filesystem -
testpool/homedir creation Sat Nov 14 11:34 2009 -
testpool/homedir used 24.5K -
testpool/homedir available 4.89G -
testpool/homedir referenced 24.5K -
testpool/homedir compressratio 1.00x -
testpool/homedir mounted yes -
testpool/homedir quota none default
testpool/homedir reservation none default
testpool/homedir recordsize 128K default
testpool/homedir mountpoint /testpool/homedir default
testpool/homedir sharenfs off default
testpool/homedir checksum on default
testpool/homedir compression off default
testpool/homedir atime on default
testpool/homedir devices on default
testpool/homedir exec on default
testpool/homedir setuid on default
testpool/homedir readonly off default
testpool/homedir zoned off default
testpool/homedir snapdir hidden default
testpool/homedir aclmode groupmask default
testpool/homedir aclinherit secure default


bash-3.00# zfs set quota=500m testpool/homedir

bash-3.00# zfs set compression=on testpool/homedir

bash-3.00# zfs set mounted=no testpool/homedir
cannot set mounted property: read only property

bash-3.00# zfs get all testpool/homedir
NAME PROPERTY VALUE SOURCE
testpool/homedir type filesystem -
testpool/homedir creation Sat Nov 14 11:34 2009 -
testpool/homedir used 24.5K -
testpool/homedir available 500M -
testpool/homedir referenced 24.5K -
testpool/homedir compressratio 1.00x -
testpool/homedir mounted yes -
testpool/homedir quota 500M local
testpool/homedir reservation none default
testpool/homedir recordsize 128K default
testpool/homedir mountpoint /testpool/homedir default
testpool/homedir sharenfs off default
testpool/homedir checksum on default
testpool/homedir compression on local
testpool/homedir atime on default
testpool/homedir devices on default
testpool/homedir exec on default
testpool/homedir setuid on default
testpool/homedir readonly off default
testpool/homedir zoned off default
testpool/homedir snapdir hidden default
testpool/homedir aclmode groupmask default
testpool/homedir aclinherit secure default


INHERITING ZFS PROPERTIES:

bash-3.00# zfs get -r compression testpool
NAME PROPERTY VALUE SOURCE
testpool compression off default
testpool/homedir compression on local
testpool/homedir/nesteddir compression on local

bash-3.00# zfs inherit compression testpool/homedir

bash-3.00# zfs get -r compression testpool
NAME PROPERTY VALUE SOURCE
testpool compression off default
testpool/homedir compression off default
testpool/homedir/nesteddir compression on local

bash-3.00# zfs inherit -r compression testpool/homedir

bash-3.00# zfs get -r compression testpool
NAME PROPERTY VALUE SOURCE
testpool compression off default
testpool/homedir compression off default
testpool/homedir/nesteddir compression off default

QUERYING ZFS PROPERTIES:

bash-3.00# zfs get checksum testpool/homedir
NAME PROPERTY VALUE SOURCE
testpool/homedir checksum on default

bash-3.00# zfs get all testpool/homedir
NAME PROPERTY VALUE SOURCE
testpool/homedir type filesystem -
testpool/homedir creation Sat Nov 14 11:34 2009 -
testpool/homedir used 50K -
testpool/homedir available 500M -
testpool/homedir referenced 25.5K -
testpool/homedir compressratio 1.00x -
testpool/homedir mounted yes -
testpool/homedir quota 500M local
testpool/homedir reservation none default
testpool/homedir recordsize 128K default
testpool/homedir mountpoint /testpool/homedir default
testpool/homedir sharenfs off default
testpool/homedir checksum on default
testpool/homedir compression off default
testpool/homedir atime on default
testpool/homedir devices on default
testpool/homedir exec on default
testpool/homedir setuid on default
testpool/homedir readonly off default
testpool/homedir zoned off default
testpool/homedir snapdir hidden default
testpool/homedir aclmode groupmask default
testpool/homedir aclinherit secure default

bash-3.00# zfs get -s local all testpool/homedir
NAME PROPERTY VALUE SOURCE
testpool/homedir quota 500M local



RAID-Z POOL:

bash-3.00# zpool create testraid5pool raidz c2d0s3 c2d0s4 c2d0s5
bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
testpool 2G 100M 1.90G 4% ONLINE -
testraid5pool 14.9G 89K 14.9G 0% ONLINE -
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1d0s0 20G 10G 9.1G 54% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 3.1G 736K 3.1G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/usr/lib/libc/libc_hwcap2.so.1
20G 10G 9.1G 54% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 3.1G 48K 3.1G 1% /tmp
swap 3.1G 32K 3.1G 1% /var/run
testpool 2.0G 25K 1.9G 1% /testpool
testpool/homedir 2.0G 100M 1.9G 5% /testpool/homedir
testraid5pool 9.8G 32K 9.8G 1% /testraid5pool


DOUBLE PARITY RAID-Z POOL:

bash-3.00# zpool create doubleparityraid5pool raidz2 c2d0s3 c2d0s4 c2d0s5
bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
doubleparityraid5pool 14.9G 158K 14.9G 0% ONLINE -
testpool 2G 100M 1.90G 4% ONLINE -
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1d0s0 20G 10G 9.1G 54% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 3.1G 736K 3.1G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/usr/lib/libc/libc_hwcap2.so.1
20G 10G 9.1G 54% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 3.1G 48K 3.1G 1% /tmp
swap 3.1G 32K 3.1G 1% /var/run
testpool 2.0G 25K 1.9G 1% /testpool
testpool/homedir 2.0G 100M 1.9G 5% /testpool/homedir
doubleparityraid5pool 4.9G 24K 4.9G 1% /doubleparityraid5pool



DRY RUN OF STORAGE POOL CREATION:

bash-3.00# zpool create -n testmirrorpool mirror c2d0s3 c2d0s4
would create 'testmirrorpool' with the following layout:

testmirrorpool
mirror
c2d0s3
c2d0s4
bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
testpool 2G 100M 1.90G 4% ONLINE -
bash-3.00# df
/ (/dev/dsk/c1d0s0 ):19485132 blocks 2318425 files
/devices (/devices ): 0 blocks 0 files
/system/contract (ctfs ): 0 blocks 2147483612 files
/proc (proc ): 0 blocks 16285 files
/etc/mnttab (mnttab ): 0 blocks 0 files
/etc/svc/volatile (swap ): 6598720 blocks 293280 files
/system/object (objfs ): 0 blocks 2147483444 files
/lib/libc.so.1 (/usr/lib/libc/libc_hwcap2.so.1):19485132 blocks 2318425 files
/dev/fd (fd ): 0 blocks 0 files
/tmp (swap ): 6598720 blocks 293280 files
/var/run (swap ): 6598720 blocks 293280 files
/testpool (testpool ): 3923694 blocks 3923694 files
/testpool/homedir (testpool/homedir ): 3923694 blocks 3923694 files

Note: The -n option does not create the zpool; it performs a dry run that checks whether the pool can be created. If it can, it prints the layout shown above; otherwise, it reports the error that would occur when creating the zpool.

LISTING THE POOLS AND ZFS:

bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
testmirrorpool 4.97G 52.5K 4.97G 0% ONLINE -
testpool 2G 100M 1.90G 4% ONLINE -

bash-3.00# zpool list -o name,size,health
NAME SIZE HEALTH
testmirrorpool 4.97G ONLINE
testpool 2G ONLINE

bash-3.00# zpool status -x
all pools are healthy

bash-3.00# zpool status -x testmirrorpool
pool 'testmirrorpool' is healthy

bash-3.00# zpool status -v
pool: testmirrorpool
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
testmirrorpool ONLINE 0 0 0
mirror ONLINE 0 0 0
c2d0s3 ONLINE 0 0 0
c2d0s4 ONLINE 0 0 0

errors: No known data errors

pool: testpool
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
c2d0s7 ONLINE 0 0 0

errors: No known data errors


bash-3.00# zpool status -v testmirrorpool
pool: testmirrorpool
state: ONLINE
scrub: none requested
config:


NAME STATE READ WRITE CKSUM
testmirrorpool ONLINE 0 0 0
mirror ONLINE 0 0 0
c2d0s3 ONLINE 0 0 0
c2d0s4 ONLINE 0 0 0

errors: No known data errors


bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 100M 1.87G 25.5K /testpool
testpool/homedir 100M 1.87G 100M /testpool/homedir

bash-3.00# zfs list -H
testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 114K 1.97G 26.5K /testpool
testpool/homedir_old 24.5K 1.97G 24.5K /testpool/homedir_old

bash-3.00# zfs list -o name,sharenfs,mountpoint
NAME SHARENFS MOUNTPOINT
testmirrorpool off /testmirrorpool
testpool off /testpool
testpool/homedir_old off /testpool/homedir_old

bash-3.00# zfs create testpool/homedir_old/nesteddir
bash-3.00# zfs list testpool/homedir_old
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old 52K 1.97G 27.5K /testpool/homedir_old
bash-3.00# zfs list -r testpool/homedir_old
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old 52K 1.97G 27.5K /testpool/homedir_old
testpool/homedir_old/nesteddir 24.5K 1.97G 24.5K /testpool/homedir_old/nesteddir

bash-3.00# zfs get -r compression testpool
NAME PROPERTY VALUE SOURCE
testpool compression off default
testpool/homedir compression off default
testpool/homedir/nesteddir compression off default
bash-3.00# zfs set compression=on testpool/homedir/nesteddir
bash-3.00# zfs get -r compression testpool
NAME PROPERTY VALUE SOURCE
testpool compression off default
testpool/homedir compression off default
testpool/homedir/nesteddir compression on local

bash-3.00# zfs inherit compression testpool/homedir/nesteddir
bash-3.00# zfs get -r compression testpool
NAME PROPERTY VALUE SOURCE
testpool compression off default
testpool/homedir compression off default
testpool/homedir/nesteddir compression off default
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 114K 1.97G 26.5K /testpool
testpool/homedir_old 24.5K 1.97G 24.5K /testpool/homedir_old

bash-3.00# zfs list -t filesystem -o name,used
NAME USED
testmirrorpool 75.5K
testpool 114K
testpool/homedir_old 24.5K


DESTROYING A ZFS:

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 100M 1.87G 25.5K /testpool
testpool/homedir 100M 1.87G 100M /testpool/homedir

bash-3.00# ls -l testpool/homedir/
total 4
drwxr-xr-x 2 root root 2 Nov 13 11:36 newdir
-rw-r--r-- 1 root root 0 Nov 13 11:36 newfile

bash-3.00# pwd
/testpool/homedir/newdir

bash-3.00# zfs destroy testpool/homedir
cannot unmount '/testpool/homedir': Device busy

bash-3.00# zfs destroy -f testpool/homedir

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 82K 1.97G 24.5K /testpool


bash-3.00# zfs create testpool/homedir/nesteddir
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 144K 1.97G 26.5K /testpool
testpool/homedir 49K 1.97G 24.5K /testpool/homedir
testpool/homedir/nesteddir 24.5K 1.97G 24.5K /testpool/homedir/nesteddir

bash-3.00# zfs destroy testpool/homedir
cannot destroy 'testpool/homedir': filesystem has children
use '-r' to destroy the following datasets:
testpool/homedir/nesteddir
bash-3.00# zfs destroy -r testpool/homedir


RENAMING ZFS:

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 114K 1.97G 26.5K /testpool
testpool/homedir 24.5K 1.97G 24.5K /testpool/homedir

bash-3.00# zfs rename testpool/homedir testpool/homedir_old
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 114K 1.97G 26.5K /testpool
testpool/homedir_old 24.5K 1.97G 24.5K /testpool/homedir_old
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1d0s0 20G 11G 8.6G 56% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 3.1G 736K 3.1G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/usr/lib/libc/libc_hwcap2.so.1
20G 11G 8.6G 56% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 3.1G 48K 3.1G 1% /tmp
swap 3.1G 32K 3.1G 1% /var/run
testpool 2.0G 26K 2.0G 1% /testpool
testmirrorpool 4.9G 24K 4.9G 1% /testmirrorpool
testpool/homedir_old 2.0G 24K 2.0G 1% /testpool/homedir_old


bash-3.00# zfs create testpool/homedir_old/nesteddir
bash-3.00# zfs list testpool/homedir_old
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old 52K 1.97G 27.5K /testpool/homedir_old
bash-3.00# zfs list -r testpool/homedir_old
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old 52K 1.97G 27.5K /testpool/homedir_old
testpool/homedir_old/nesteddir 24.5K 1.97G 24.5K /testpool/homedir_old/nesteddir


MOUNTING AND UNMOUNTING ZFS FILESYSTEMS:

bash-3.00# zfs get mountpoint testpool/homedir
NAME PROPERTY VALUE SOURCE
testpool/homedir mountpoint /testpool/homedir default

bash-3.00# zfs get mounted testpool/homedir
NAME PROPERTY VALUE SOURCE
testpool/homedir mounted yes -

bash-3.00# zfs set mountpoint=/mnt/altloc testpool/homedir

bash-3.00# zfs get mountpoint testpool/homedir
NAME PROPERTY VALUE SOURCE
testpool/homedir mountpoint /mnt/altloc local

LEGACY MOUNT POINTS:

Legacy file systems must be managed through the mount and umount commands and the /etc/vfstab file. Unlike normal ZFS file systems, ZFS does not automatically mount legacy file systems at boot.
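
For a legacy ZFS file system to be mounted at boot, it needs an entry in /etc/vfstab; a sketch using the dataset from the example below (for ZFS, the device-to-fsck and fsck-pass fields are '-'):

#device to mount device to fsck mount point FS type fsck pass mount at boot mount options
testpool/additionaldir - /mnt/legacy zfs - yes -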

bash-3.00# zfs set mountpoint=legacy testpool/additionaldir

bash-3.00# mount -F zfs testpool/additionaldir /mnt/legacy

bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1d0s0 20G 11G 8.6G 56% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 3.3G 732K 3.3G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/usr/lib/libc/libc_hwcap2.so.1
20G 11G 8.6G 56% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 3.3G 48K 3.3G 1% /tmp
swap 3.3G 32K 3.3G 1% /var/run
testpool 4.9G 24K 4.9G 1% /testpool
testpool/homedir 500M 25K 500M 1% /mnt/altloc
testpool/homedir/nesteddir
500M 24K 500M 1% /mnt/altloc/nesteddir
testpool/additionaldir
4.9G 24K 4.9G 1% /mnt/legacy


MOUNTING ZFS FILESYSTEMS:

bash-3.00# umountall

bash-3.00# zfs mount

bash-3.00# zfs mount -a

bash-3.00# zfs mount
testpool/homedir /mnt/altloc
testpool/homedir/nesteddir /mnt/altloc/nesteddir
testpool /testpool

Note:
1. zfs mount -a command doesn't mount legacy filesystems.
2. To force a mount on top of a non-empty directory, use the option -O
3. To specify the options like ro, rw use the option -o

UNMOUNTING ZFS FILESYSTEMS:

bash-3.00# zfs mount
testpool /testpool
testpool/homedir /testpool/homedir
testpool/homedir/nesteddir /testpool/homedir/nesteddir

bash-3.00# zfs unmount /testpool/homedir

bash-3.00# zfs mount
testpool /testpool

bash-3.00# zfs mount -a

bash-3.00# zfs umount /testpool/homedir

bash-3.00# zfs mount
testpool /testpool

bash-3.00# pwd
/testpool/homedir

bash-3.00# zfs unmount /testpool/homedir
cannot unmount '/testpool/homedir': Device busy

bash-3.00# zfs unmount -f /testpool/homedir

bash-3.00# zfs mount
testpool /testpool

Note: The subcommand works both ways - unmount and umount. The umount form is provided for backwards compatibility.



ZFS WEB-BASED MANAGEMENT:

bash-3.00# /usr/sbin/smcwebserver start
Starting Sun Java(TM) Web Console Version 3.0.2 ...
The console is running

bash-3.00# /usr/sbin/smcwebserver enable

The enable subcommand configures the server to run automatically when the system boots.
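
Once the console is running, the ZFS administration GUI is reachable in a browser on the default console port:

https://<hostname>:6789/zfs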


ZFS SNAPSHOTS:

bash-3.00# zfs list -r
NAME USED AVAIL REFER MOUNTPOINT
testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 146K 1.97G 26.5K /testpool
testpool/homedir_old 52K 1.97G 27.5K /testpool/homedir_old
testpool/homedir_old/nesteddir 24.5K 1.97G 24.5K /testpool/homedir_old/nesteddir

bash-3.00# zfs snapshot testpool/homedir_old@snap1

bash-3.00# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old@snap1 0 - 27.5K -

bash-3.00# zfs snapshot -r testpool/homedir_old@snap2

bash-3.00# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old@snap1 0 - 27.5K -
testpool/homedir_old@snap2 0 - 27.5K -
testpool/homedir_old/nesteddir@snap2 0 - 24.5K -



bash-3.00# zfs get all testpool/homedir_old@snap1
NAME PROPERTY VALUE SOURCE
testpool/homedir_old@snap1 type snapshot -
testpool/homedir_old@snap1 creation Fri Nov 13 16:26 2009 -
testpool/homedir_old@snap1 used 0 -
testpool/homedir_old@snap1 referenced 27.5K -
testpool/homedir_old@snap1 compressratio 1.00x -


PROPERTIES OF SNAPSHOTS:

bash-3.00# zfs get all testpool/homedir_old@snap1
NAME PROPERTY VALUE SOURCE
testpool/homedir_old@snap1 type snapshot -
testpool/homedir_old@snap1 creation Fri Nov 13 16:26 2009 -
testpool/homedir_old@snap1 used 0 -
testpool/homedir_old@snap1 referenced 27.5K -
testpool/homedir_old@snap1 compressratio 1.00x -
bash-3.00#
bash-3.00# zfs set compressratio=2.00x testpool/homedir_old@snap1
cannot set compressratio property: read only property
bash-3.00# zfs set compression=on testpool/homedir_old@snap1
cannot set compression property for 'testpool/homedir_old@snap1': snapshot properties cannot be modified



RENAMING ZFS SNAPSHOTS:

bash-3.00# zfs rename testpool/homedir_old@snap1 additionalpool/homedir@snap3
cannot rename to 'additionalpool/homedir@snap3': snapshots must be part of same dataset

bash-3.00# zfs rename testpool/homedir_old@snap1 testpool/homedir_old@snap3

bash-3.00# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old@snap3 0 - 27.5K -
testpool/homedir_old@snap2 0 - 27.5K -
testpool/homedir_old/nesteddir@snap2 0 - 24.5K -


DISPLAYING AND ACCESSING ZFS SNAPSHOTS:

bash-3.00# ls /testpool/homedir_old/.zfs/snapshot
snap2 snap3

bash-3.00# zfs list -r -t snapshot -o name,creation testpool/homedir_old
NAME CREATION
testpool/homedir_old@snap3 Fri Nov 13 16:26 2009
testpool/homedir_old@snap2 Fri Nov 13 16:31 2009
testpool/homedir_old/nesteddir@snap2 Fri Nov 13 16:31 2009

ROLLING BACK TO A ZFS SNAPSHOT:

bash-3.00# zfs rollback testpool/homedir_old@snap3
cannot rollback to 'testpool/homedir_old@snap3': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
testpool/homedir_old@snap2

bash-3.00# zfs rollback -r testpool/homedir_old@snap3

DESTROYING A ZFS SNAPSHOT:

bash-3.00# zfs destroy testpool/homedir_old@snap3
cannot destroy 'testpool/homedir_old@snap3': snapshot has dependent clones
use '-R' to destroy the following datasets:
testpool/additionaldir/testclone

bash-3.00# zfs destroy -R testpool/homedir_old@snap3


CREATING ZFS CLONES:

bash-3.00# zfs clone testpool/homedir_old@snap3 testpool/additionaldir/testclone

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
additionalpool 104K 4.89G 25.5K /additionalpool
additionalpool/homedir 24.5K 4.89G 24.5K /additionalpool/homedir
testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 185K 1.97G 27.5K /testpool
testpool/additionaldir 25.5K 1.97G 25.5K /testpool/additionaldir
testpool/additionaldir/testclone 0 1.97G 27.5K /testpool/additionaldir/testclone
testpool/homedir_old 52K 1.97G 27.5K /testpool/homedir_old
testpool/homedir_old@snap3 0 - 27.5K -
testpool/homedir_old/nesteddir 24.5K 1.97G 24.5K /testpool/homedir_old/nesteddir
testpool/homedir_old/nesteddir@snap2 0 - 24.5K -

SETTING CLONE PROPERTIES:

bash-3.00# zfs get all testpool/additionaldir/testclone
NAME PROPERTY VALUE SOURCE
testpool/additionaldir/testclone type filesystem -
testpool/additionaldir/testclone creation Fri Nov 13 16:51 2009 -
testpool/additionaldir/testclone used 22.5K -
testpool/additionaldir/testclone available 1.97G -
testpool/additionaldir/testclone referenced 27.5K -
testpool/additionaldir/testclone compressratio 1.00x -
testpool/additionaldir/testclone mounted yes -
testpool/additionaldir/testclone origin testpool/homedir_old@snap3 -
testpool/additionaldir/testclone quota none default
testpool/additionaldir/testclone reservation none default
testpool/additionaldir/testclone recordsize 128K default
testpool/additionaldir/testclone mountpoint /testpool/additionaldir/testclone default
testpool/additionaldir/testclone sharenfs off local
testpool/additionaldir/testclone checksum on default
testpool/additionaldir/testclone compression off default
testpool/additionaldir/testclone atime on default
testpool/additionaldir/testclone devices on default
testpool/additionaldir/testclone exec on default
testpool/additionaldir/testclone setuid on default
testpool/additionaldir/testclone readonly off default
testpool/additionaldir/testclone zoned off default
testpool/additionaldir/testclone snapdir hidden default
testpool/additionaldir/testclone aclmode groupmask default
testpool/additionaldir/testclone aclinherit secure default

bash-3.00# zfs set sharenfs=on testpool/additionaldir/testclone

bash-3.00# zfs set quota=500m testpool/additionaldir/testclone

bash-3.00# zfs get sharenfs,quota testpool/additionaldir/testclone
NAME PROPERTY VALUE SOURCE
testpool/additionaldir/testclone sharenfs on local
testpool/additionaldir/testclone quota 500M local


REPLACING A ZFS FILESYSTEM WITH A ZFS CLONE:

bash-3.00# zfs list -r testpool/homedir_old
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old 74.5K 1.97G 27.5K /testpool/homedir_old
testpool/homedir_old@snap3 22.5K - 27.5K -
testpool/homedir_old/nesteddir 24.5K 1.97G 24.5K /testpool/homedir_old/nesteddir
testpool/homedir_old/nesteddir@snap2 0 - 24.5K -


bash-3.00# zfs list -r testpool/additionaldir
NAME USED AVAIL REFER MOUNTPOINT
testpool/additionaldir 48K 1.97G 25.5K /testpool/additionaldir
testpool/additionaldir/testclone 22.5K 500M 27.5K /testpool/additionaldir/testclone

bash-3.00# zfs promote testpool/additionaldir/testclone

bash-3.00# zfs list -r testpool/homedir_old
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old 47K 1.97G 27.5K /testpool/homedir_old
testpool/homedir_old/nesteddir 24.5K 1.97G 24.5K /testpool/homedir_old/nesteddir
testpool/homedir_old/nesteddir@snap2 0 - 24.5K -

bash-3.00# zfs list -r testpool/additionaldir
NAME USED AVAIL REFER MOUNTPOINT
testpool/additionaldir 75.5K 1.97G 25.5K /testpool/additionaldir
testpool/additionaldir/testclone 50K 500M 27.5K /testpool/additionaldir/testclone
testpool/additionaldir/testclone@snap3 22.5K - 27.5K -

bash-3.00# zfs list -r testpool/homedir_old
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old 50K 500M 27.5K /testpool/homedir_old
testpool/homedir_old@snap3 22.5K - 27.5K -

bash-3.00# zfs list -r testpool/additionaldir
NAME USED AVAIL REFER MOUNTPOINT
testpool/additionaldir 24.5K 1.97G 24.5K /testpool/additionaldir

bash-3.00# zfs list -r testpool/homedir_old_old
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old_old 47K 1.97G 27.5K /testpool/homedir_old_old
testpool/homedir_old_old/nesteddir 24.5K 1.97G 24.5K /testpool/homedir_old_old/nesteddir
testpool/homedir_old_old/nesteddir@snap2 0 - 24.5K -

DESTROYING ZFS CLONE:

bash-3.00# zfs destroy testpool/homedir_old@snap3

Saturday, February 20, 2010

Linux: Process scheduling

Process scheduling using # at:

1. Tasks that should execute only once can be scheduled with the # at command.
2. Tasks that should execute recurrently are scheduled with the # crontab command.

The # at command executes tasks only once.
Syntax: # at <time>
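
A minimal sketch of both commands (the backup script path is hypothetical):

# at 10:00 tomorrow
at> /export/home/scripts/backup.sh
at> <EOT> (press Ctrl-D to end input)

# crontab -e
0 2 * * * /export/home/scripts/backup.sh (runs every day at 02:00)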

Wednesday, February 17, 2010

Linux : Security Administration - SUDO

SUDO - LINUX
RBAC - Solaris - Role Based Access Control


Sudo is the concept of giving a user permission to execute only selected commands with elevated privileges.

Configuration files:
/etc/sudoers
/etc/sudo

What to do?
/etc/sudoers
1. This file will be present by default.
2. Editable file by the 'root' user.

We have to edit the file in 3 areas.
a. User_Alias specification:
Here we assign a variable to the (sudo-authenticated) user.
We can add any number of user to the file.

b. Command_Alias specification:
Here we assign a variable to the command which can
be executed by the sudo users.

c. User_privilege specification
Here we map the User_Alias variable to the
Command_Alias variable.

/etc/sudo
1. This file will NOT be present by default.
2. This file has to be created manually.
3. This file will be referred to by the # sudo command
4. Will have to edit the file with
a. User_Alias specification
b. Command_Alias specification
c. User_privilege specification


Note:

1. Before implementing the sudo, make sure that the user account is present.
2. When the user is trying to execute the permitted commands, system prompts for the "authenticated" user password.
3. If a line in the file begins with #, the whole line is commented out and the system will not read that entry in the file.
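
A safer way to make these edits is visudo (a standard tool shipped with sudo, not specific to this setup), which locks /etc/sudoers and checks its syntax before saving:

# visudo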


How to do? - Configuration:
I.

Example entry to the file /etc/sudoers
# User_Alias specification
# User_Alias ADMINS = jsmith, mikem
# The following entries are edited
User_Alias CHE = che
User_Alias CASTRO = castro



Changes done to the file:


# Command_Alias specification
# The following 2 lines are edited

Cmnd_Alias B1 = /usr/sbin/useradd
Cmnd_Alias B2 = /usr/bin/passwd


# User_privilege specification
root ALL=(ALL) ALL
# the following 2 lines are edited

CHE ALL = B1,B2
CASTRO ALL = B1,B2


Save the file and exit.


II.
Example entry of the file /etc/sudo
# This file will not be present by default
# This file has to be created

# User_Alias name specification
User_Alias CHE = che
User_Alias CASTRO = castro


# Command_Alias specification
Cmnd_Alias BB1 = /usr/sbin/useradd
Cmnd_Alias BB2 = /usr/bin/passwd

# User_privilege specification
root ALL = (ALL) ALL
CHE ALL = BB1,BB2
CASTRO ALL = BB1, BB2

Save the file and exit.

In the above files, alias variables are assigned to the user names and the commands.

To check:
1. Log in as the user named che or castro.
2. When normally executing the commands
# useradd and # passwd
they are not permitted to run them.
3. So they have to execute the commands as follows:

$ sudo /usr/sbin/useradd
$ sudo /usr/bin/passwd
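
A sketch of such a session (the new account name test1 is hypothetical); as note 2 above says, sudo prompts for the authenticated user's password:

$ sudo /usr/sbin/useradd -m -d /export/home/test1 test1
Password: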

Monday, February 15, 2010

Security Administration : RBAC


RBAC - Role Based Access Control:

RBAC is an alternative method of assigning special privileges to a non-root user, in the form of an authorization, a role, or a profile.


Note:

In Linux, the same functionality is implemented as sudo.

Configuration files:
/etc/user_attr:
- Extended user attributes Database
- Associates users and roles with authorizations and profiles
NOTE:
When creating a new user account with no rights profiles, authorizations or roles, nothing is added to the file.

/etc/security/auth_attr:
- Authorization attributes database
- Defines authorizations and their attributes and identifies the associated help file


/etc/security/prof_attr:
- Rights profile attributes database
- Defines profiles, lists the profile's assigned authorizations, and identifies the associated help


/etc/security/exec_attr:
- Profile attributes database
- Defines the privileged operations assigned to a profile

Roles:
- Will have an entry to the file /etc/passwd and /etc/shadow
- Similar to user account
- Collection of profiles

Profiles:
- Will have a dedicated shell
- Profile shells will be assigned
- The Bourne shell & Korn shell have profile shells
- pfsh (Bourne profile shell), pfksh (Korn profile shell)
- A profile is a collection of a number of commands.

NOTE:

1. If the user/role switches away from the specified profile shell, they are not permitted to execute the authorized commands.
2. It's not possible to log in to the system directly using a role.
A role can only be used by switching the user to the role with the "su" command.
3. We can also set up the "root" user as a role through a manual process (a sketch follows below). This approach prevents users from logging in directly as the root user. Therefore, they must log in as themselves first, and then use the su command to assume the role.
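
A minimal sketch of point 3, assuming the stock Solaris 10 tools (the user name che is hypothetical here):

# usermod -K type=role root (convert the root account into a role)
# usermod -R root che (permit user che to assume the root role)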



We can apply RBAC to a user in four ways:
1. Directly adding the authorization to the user account
2. Creating a profile, and adding the profile to the user account
3. Creating a profile, adding it to a role, then adding the role to the user account
4. Adding an authorization to a role and adding the role to a user


I. Adding an authorization to a user account:

# useradd -m -d /export/home/shyam -s /usr/bin/pfsh \
-A solaris.admin.usermgr.pswd,solaris.system.shutdown,solaris.admin.fsmgr.write \
shyam


# passwd shyam

Here, we added the existing authorizations to the user account using the -A option with the # useradd command.

Note:
The shell assigned is a profile shell.

Output:
bash-3.00# su - shyam

sunfire1% echo $SHELL
/usr/bin/pfsh

sunfire1% auths
solaris.admin.usermgr.pswd,solaris.system.shutdown,solaris.admin.fsmgr.write,solaris.device.cdrw,solaris.profmgr.read,solaris.jobs.users,solaris.mail.mailq,solaris.admin.usermgr.read,solaris.admin.logsvc.read,solaris.admin.fsmgr.read,solaris.admin.serialmgr.read,solaris.admin.diskmgr.read,solaris.admin.procmgr.user,solaris.compsys.read,solaris.admin.printer.read,solaris.admin.prodreg.read,solaris.admin.dcmgr.read,solaris.snmp.read,solaris.project.read,solaris.admin.patchmgr.read,solaris.network.hosts.read,solaris.admin.volmgr.read
sunfire1% profiles
Basic Solaris User
All

sunfire1% profiles -l

All:
*

sunfire1% roles
No roles


# roles
- Lists the roles that the user is authorized to assume

# profiles
- Lists the rights profiles that the user is authorized to use

# profiles -l
- Returns detailed information about the permitted commands that can be executed by the user

# auths
- Lists the authorizations mapped to the user account.


When a user is created with additional information such as authorizations, profiles or roles, the # useradd command adds an entry to the file /etc/user_attr.


Output: (Relevant to the topic)
prabhu::::type=normal;auths=solaris.admin.usermgr.pswd,solaris.system.shutdown,solaris.admin.fsmgr.write


Note:
No entry is made in this file for a normal user.



II. Creating a profile and adding it to an user account:

WTD:
1. Determine the name of the profile
2. Determine what commands have to be added to the profile
3. Edit the /etc/security/prof_attr file accordingly
4. Edit the /etc/security/exec_attr file, providing the list of commands for the profile
5. Map the profile to the user

HTD:
Example-1:
Profile name=testprofile
Commands added to the profile=shutdown,format,useradd,passwd

Step-1: Adding/Creating a profile
# vi /etc/security/prof_attr
testprofile:::This is a test profile to test RBAC

Here, the first field is the name of the profile and the last field is a comment about the profile (optional).



Step-2: Mapping the list of commands to the created profile
# vi /etc/security/exec_attr
testprofile:suser:cmd:::/usr/sbin/shutdown:uid=0
testprofile:suser:cmd:::/usr/sbin/format:uid=0
testprofile:suser:cmd:::/usr/sbin/useradd:uid=0
testprofile:suser:cmd:::/usr/bin/passwd:uid=0


Step-3: Mapping the profile to the user account
# useradd -m -d /export/home/accel -s /usr/bin/pfksh -P testprofile accel

Here we have added the profile named "testprofile" to the user.

Output:
bash-3.00# su - accel
sunfire1% echo $SHELL
/usr/bin/pfksh

sunfire1% roles
No roles

sunfire1% profiles
testprofile
Basic Solaris User
All

sunfire1% profiles -l

testprofile:
/usr/sbin/shutdown uid=0
/usr/sbin/format uid=0
/usr/sbin/useradd uid=0
/usr/bin/passwd uid=0
All:
*



Example-2
Profile name: complete
List of commands added: all commands (a profile with all root privileges)

Step-1: Adding/Creating a profile
# vi /etc/security/prof_attr
complete:::This is to test the duplication of the root profile

Here, the first field is the name of the profile and the last field is a comment about the profile (optional).



Step-2: Mapping the list of commands to the created profile
# vi /etc/security/exec_attr
complete:suser:cmd:::*:uid=0

Step-3: Mapping the user to the profile
# useradd -m -d /export/home/aita -s /usr/bin/pfsh -P complete aita




Output:
bash-3.00# su - aita
sunfire1# echo $USER
root

sunfire1# roles
No roles

sunfire1# profiles
Web Console Management
All
Basic Solaris User

sunfire1# profiles -l | more

Web Console Management:
/usr/share/webconsole/private/bin/smcwebstart uid=noaccess,
gid=noaccess,
privs=proc_audit
All:
*


Note:
1. The output of the commands
# profiles
# profiles -l
will be similar to those of the root user.

2. From the above output, we can also observe the change in the user's prompt. Normally the prompt for a user is $, but since all privileges are given to this user, the prompt is #.


III. Creating a role, profile and mapping it to the user account.
WTD:
1. Determine the name of the user
2. Create the role
3. Assign the password to the role
Note:
a. Role should have a password to it.
b. Without a password it's not possible to log in to that role

4. Create a profile
5. Add the list of commands to the profile
6. Add the profile to the role
7. Add the role to the user

Note:
This method adds another layer of security by assigning a password to the role.








HTD:
Step-1: Create a role

# roleadd -m -d /export/home/policy -s /usr/bin/pfsh policy

1. This command will update the following files
a. /etc/passwd
b. /etc/shadow
c. /etc/user_attr

Output:
bash-3.00# roleadd -m -d /export/home/policy -s /usr/bin/pfsh policy
80 blocks

bash-3.00# passwd policy
New Password:
Re-enter new Password:
passwd: password successfully changed for policy

bash-3.00# grep policy /etc/passwd
policy:x:112:1::/export/home/policy:/usr/bin/pfsh

bash-3.00# grep policy /etc/shadow
policy:xXuxPLl/Wt13Q:14512::::::

bash-3.00# grep policy /etc/user_attr
policy::::type=role;profiles=All



Step-2: Creating a profile

Note: To create a profile, please refer to section II, Creating a profile.

Let's make use of the above existing profile.
For eg, let's take the profile "testprofile"


Step-3: Adding the profile to the role

# rolemod -P testprofile,All policy

Adds the profile named "testprofile" to the existing role "policy".


Now we can observe the changes to the file /etc/user_attr
Output:
quality::::type=normal;roles=complete;auths=solaris.admin.usermgr.pswd,solaris.system.shutdown,solaris.admin.fsmgr.write


Step-4: Mapping the role to the user:
# useradd -m -d /export/home/nokia -R policy -s /bin/bash nokia

Adding a role to the user.

Output:
bash-3.00# useradd -m -d /export/home/nokia -R policy -s /bin/bash nokia
80 blocks

bash-3.00# passwd nokia
New Password:
Re-enter new Password:
passwd: password successfully changed for nokia

bash-3.00# su - nokia

sunfire1% auths
solaris.device.cdrw,solaris.profmgr.read,solaris.jobs.users,solaris.mail.mailq,solaris.admin.usermgr.read,solaris.admin.logsvc.read,solaris.admin.fsmgr.read,solaris.admin.serialmgr.read,solaris.admin.diskmgr.read,solaris.admin.procmgr.user,solaris.compsys.read,solaris.admin.printer.read,solaris.admin.prodreg.read,solaris.admin.dcmgr.read,solaris.snmp.read,solaris.project.read,solaris.admin.patchmgr.read,solaris.network.hosts.read,solaris.admin.volmgr.read

sunfire1% profiles
Basic Solaris User
All
sunfire1% profiles -l

All:
*

sunfire1% roles
policy

sunfire1% su policy
Password:

sunfire1% profiles
testprofile
All
Basic Solaris User

sunfire1% profiles -l

testprofile:
/usr/sbin/shutdown uid=0
/usr/sbin/format uid=0
/usr/sbin/useradd uid=0
/usr/bin/passwd uid=0
All:
*


Note:
Authorized activities can be performed by the user only after switching to the role.
A role account CANNOT be used to log in to the system directly.



Output:
bash-3.00# su - nokia

sunfire1% su policy
Password:
$ /usr/sbin/shutdown -g 180 -i 5

Shutdown started. Fri Sep 25 17:26:01 IST 2009

Broadcast Message from root (pts/3) on sunfire1 Fri Sep 25 17:26:01...
The system sunfire1 will be shut down in 3 minutes



Note:

The default authorizations assigned to a user are defined in the file /etc/security/policy.conf

bash-3.00# grep -i auths /etc/security/policy.conf
AUTHS_GRANTED=solaris.device.cdrw

Wednesday, February 10, 2010

Security - TCP Wrappers

TCP WRAPPERS:

Is a package developed by Wietse Venema, who also wrote the SATAN security package.
Is an IP packet filtering and network access logging facility for inetd. TCP_wrappers is usually configured to “wrap” itself around TCP based services defined in inetd.conf.
Is used to restrict access to TCP services based on the hostname, IP address, network address etc.
TCP wrappers were integrated into Solaris starting with Solaris 9, where both Solaris Secure Shell and inetd-based services were wrapped.


What do TCP wrappers do?
1. They provide system administrators with a high degree of control over incoming TCP connections. The system is invoked after a remote host connects to our server/machine, either through a subroutine library that is linked into a standalone program, or by wrapping services started through inetd.
2. Once running, the TCP wrappers system performs the following steps:
a. Open the /etc/hosts.allow file


Note:
1. /etc/hosts.allow and /etc/hosts.deny file will not exist by default.
2. Both file contains access control rules and actions for each protocol.
b. It scans through the file, line by line, until it finds a rule that matches the particular protocol and source host that has connected to the server.
c. It executes the actions specified. If appropriate, control is then turned over to the network server.
d. If no matching action is found, the file /etc/hosts.deny is opened and sequentially read line by line. If a matching line is found, access is denied and the corresponding action is performed.
e. If no match is found in either the /etc/hosts.allow or /etc/hosts.deny file, the connection is allowed by default.

To enable TCP wrappers support for inet based services:
For eg:
# inetadm -M tcp_wrappers=true
# svcadm refresh inetd
# inetadm -l telnet | grep tcp_wrapper
Default tcp_wrappers=TRUE





Example entries to file /etc/hosts.allow or /etc/hosts.deny:

Note:
Remember it’s case sensitive.

ALL : ALL
ALL : <host_name1, host_name2…>
in.telnetd : ALL EXCEPT < host_name1, host_name2…>
ALL EXCEPT in.telnetd : < host_name1, host_name2…>
ALL : 192.168.10.0/255.255.255.0

Note:
1. Host names can also be replaced with IP addresses.
2. /etc/hosts.deny should contain only a single rule, ALL : ALL, to deny all access by default. Keeping all the rules in a single file simplifies maintenance. Using /etc/hosts.allow, which has priority over /etc/hosts.deny, ensures that if someone accidentally modifies the wrong file it won't override our rules. An example pair of files follows.
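
A minimal working pair of files, assuming we want ssh only from the local subnet and telnet only from local hosts:

# /etc/hosts.allow
sshd : 192.168.10.0/255.255.255.0
in.telnetd : LOCAL

# /etc/hosts.deny
ALL : ALL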

Thursday, February 4, 2010

Information on Hot Spare

HOT SPARE:
1. The hot spare facility included with DiskSuite allows automatic replacement of failed sub-mirror/RAID-5 components, provided spare components are available & reserved.
2. Component replacement & resyncing of failed components is automatic.
3. A hot spare is a component that is running (but not being used) which can be substituted for a broken component in a sub-mirror of a two- or three-way metamirror or a RAID-5 device.

Note:
4. Failed components in a one-way metamirror cannot be replaced by a hot spare.
5. Components designated as hot spares cannot be used in sub-mirrors or another metadevice in the 'md.tab' file. They must remain ready for immediate use in the event of a component failure.


Hot spare states:
1. Has 3 states
a. Available
b. In-use
c. Broken
a. Available:
'Available' hot spares are running and ready to accept data, but are not currently being written to or read from.


b. In-use:
'In-use' hot spares are currently being written to and read from.

c. Broken:
1. 'Broken' hot spares are out of the service.
2. A hot spare is placed in the broken state when an I/O error occurs.

2. The number of hot spare pools is limited to 1000.


Defining Hot spare:
1. Hot spare pools are named 'hspnnn',
where 'nnn' is a number in the range 000-999
2. A metadevice cannot be configured as a hot spare.
3. Once the hot spare pools are defined and associated with a sub-mirror, the hot spares are "available" for use. If a component failure occurs, DiskSuite searches through the list of hot spares in the assigned pool and selects the first "available" component that is equal to or greater than the failed component in disk capacity.
4. If a hot spare of adequate size is found, the hot spare state changes to "in-use" and a resync operation is automatically performed. The resync operation brings the hot spare into sync with the other sub-mirror or RAID-5 components.
5. If a component of adequate size is "not found" in the list of hot spares, the sub-mirror that failed is considered "erred" and that portion of the sub-mirror no longer replicates the data.


Hot spare conditions to avoid:
1. Associating hot spares of the wrong size with a sub-mirror. This condition occurs when hot spare pools are defined and associated with a sub-mirror & none of the hot spares in the hot spare pool is equal to or greater than the smallest component in the sub-mirror.
2. Having all the hot spares within the hot spare pool in use.
In this case immediate action is required:
a. 2 possible solutions or actions can be taken
i. First, add additional hot spares
ii. Second, repair some of the components that have been hot-spare replaced
Note:
If all hot spares are in use and a sub-mirror fails due to errors, that portion of the mirror will no longer be replicated.

Manipulating hot spare pools:
1. # metahs
= adding hot spares to hot spare pools
= deleting hot spares from hot spare pool
= replacing hot spares in hot spare pools
= enabling hot spare
= checking the status of the hot spare

Adding a hot spare:
Creating a hot spare pool:
1. # metainit hsp000 c0t2d0s5
Creates a hot spare device with the name 'hsp000'

2. # metainit
# metainit hsp001 c0t1d0s4 c0t11d0s4
(or)
# metahs -a hsp001 c0t1d0s4 c0t11d0s4
-a = to add a hot spare
-i = to obtain the information


Deleting hot spare:
1. Hot spares can be deleted from any or all of the hot spare pools with which they are associated.
2. When a hot spare is deleted from a hot spare pool, the positions of the remaining hot spares change to reflect the new order. For example, if the second of 3 hot spares in a hot spare pool is deleted, the 3rd hot spare moves to the second position.
3. # metahs -d hsp000 c0t11d0s4
Removes the slice from the hot spare pool
-d = to delete

4. Removing a hot spare pool:
Note:
Before removing the hot spare pool, first remove all the hot spares from the pool using 'metahs' with the -d option, providing the hot spare names.

# metahs -d <hot_spare_pool> <components>
-d = deletes only the spares

# metahs -d <hot_spare_pool>
To delete the (now empty) hot spare pool

Replacing a hot spare:
Note:
1. Hot spares that are in the 'in-use' state cannot be replaced by other hot spares.
2. The order of hot spares in the hot spare pools is NOT CHANGED when a replacement occurs.
3. # metahs -r <hot_spare_pool> <old_component> <new_component>
# metahs -r hsp000 c0t10d0s4 c0t11d0s4
c0t11d0s4 replaces c0t10d0s4

Associating the hot spare pool with a sub-mirror/RAID-5 metadevice:
1. # metaparam
Modifies the parameters of the metadevices.

# metaparam -h <hot_spare_pool> <metadevice>

# metaparam -h hsp000 d101
# metaparam -h hsp000 d102


Note:
Here d101 and d102 are sub-mirrors of the d103 mirror.
where
-h = specifies the hot spare pool to be used by a metadevice

Disassociating the hot spare pool from a sub-mirror/RAID-5 metadevice:
# metaparam -h none <metadevice>
# metaparam -h none d101
# metaparam -h none d102

where,
'none' specifies that the metadevice is disassociated from the hot spare pool associated with it.

# metahs -d hsp000 c0t2d0s5 c0t2d0s6
# metahs -d hsp000
# metaclear d100
# metadetach d15 d12
# metaclear d12
# metaclear -r d15


To view the status of the hot spare pool:
# metahs -i

Note:
Suppose the failed disk is going to be replaced to free up a hot spare.
# metadevadm
Updates the metadevice information.
-u = obtains the device ID associated with the disk specifier.
This option is used when a disk drive has had its device ID changed during a firmware upgrade or due to changing the controller of a storage array.
-v = executes in verbose mode. Has no effect when used with the -u option; verbose is the default.

# metadevadm -v -u <disk>
Updates the device information.

# metadevadm -v -u c0t11d0s4

# metareplace -e d103 c0t10d0s3
To replace in the same location
1. Now the hot spare will be available
2. The status of the spare disk will change from 'in-use' to 'available'


Outputs:

bash-3.00# metahs -a hsp001 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4
hsp001: Hotspares are added
bash-3.00# metahs -i
hsp001: 4 hot spares
Device Status Length Reloc
c0t9d0s0 Available 1027216 blocks Yes
c0t9d0s1 Available 1027216 blocks Yes
c0t9d0s3 Available 1027216 blocks Yes
c0t9d0s4 Available 1027216 blocks Yes

Device Relocation Information:
Device Reloc Device ID
c0t9d0 Yes id1,sd@SFUJITSU_MAG3182L_SUN18G_01534930____
bash-3.00#
bash-3.00# metastat -p
d5 -m d0 d10 1
d0 1 1 c0t8d0s0
d10 1 1 c0t10d0s0
d15 1 1 c0t12d0s0
hsp001 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4
hsp001: 4 hot spares
Device Status Length Reloc
c0t9d0s0 Available 1027216 blocks Yes
c0t9d0s1 Available 1027216 blocks Yes
c0t9d0s3 Available 1027216 blocks Yes
c0t9d0s4 Available 1027216 blocks Yes
bash-3.00# metahs -a hsp001 c0t9d0s5
hsp001: Hotspare is added
bash-3.00# metastat -p
d5 -m d0 d10 1
d0 1 1 c0t8d0s0
d10 1 1 c0t10d0s0
d15 1 1 c0t12d0s0
hsp001 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4 c0t9d0s5
bash-3.00# metahs -d hsp001 c0t9d0s5
hsp001: Hotspare is deleted
bash-3.00# metahs -i
hsp001: 4 hot spares
Device Status Length Reloc
c0t9d0s0 Available 1027216 blocks Yes
c0t9d0s1 Available 1027216 blocks Yes
c0t9d0s3 Available 1027216 blocks Yes
c0t9d0s4 Available 1027216 blocks Yes

Device Relocation Information:
Device Reloc Device ID
c0t9d0 Yes id1,sd@SFUJITSU_MAG3182L_SUN18G_01534930_
bash-3.00# metahs -r hsp001 c0t9d0s3 c0t9d0s5
hsp001: Hotspare c0t9d0s3 is replaced with c0t9d0s5
bash-3.00# metahs -i
hsp001: 4 hot spares
Device Status Length Reloc
c0t9d0s0 Available 1027216 blocks Yes
c0t9d0s1 Available 1027216 blocks Yes
c0t9d0s5 Available 1027216 blocks Yes
c0t9d0s4 Available 1027216 blocks Yes

Device Relocation Information:
Device Reloc Device ID
c0t9d0 Yes id1,sd@SFUJITSU_MAG3182L_SUN18G_01534930____

bash-3.00# metahs -d hsp001
metahs: ent250: hsp001: hotspare pool is busy

bash-3.00# metahs -d hsp001 c0t9d0s0 c0t9d0s1 c0t9d0s5 c0t9d0s4
hsp001: Hotspares are deleted
bash-3.00# metahs -d hsp001
hsp001: Hotspare pool is cleared
bash-3.00# metahs -i
metahs: ent250: no hotspare pools found

bash-3.00# metaparam -h hsp005 d0
bash-3.00# metaparam -h hsp005 d10
bash-3.00# metastat -p
d5 -m d0 d10 1
d0 1 1 c0t8d0s0 -h hsp005
d10 1 1 c0t10d0s0 -h hsp005
d15 1 1 c0t12d0s0
hsp005 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4
bash-3.00# metainit d100 -r c0t8d0s1 c0t10d0s1 c0t12d0s1
d100: RAID is setup
bash-3.00# metaparam -h hsp005 d100
bash-3.00# metastat -p
d5 -m d0 d10 1
d0 1 1 c0t8d0s0 -h hsp005
d10 1 1 c0t10d0s0 -h hsp005
d100 -r c0t8d0s1 c0t10d0s1 c0t12d0s1 -k -i 32b -h hsp005
d15 1 1 c0t12d0s0
hsp005 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4


bash-3.00# metastat | more
d5: Mirror
Submirror 0: d0
State: Okay
Submirror 1: d10
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 1015808 blocks (496 MB)

d0: Submirror of d5
State: Okay
Hot spare pool: hsp005
Size: 1015808 blocks (496 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t8d0s0 0 No Okay Yes


d10: Submirror of d5
State: Okay
Hot spare pool: hsp005
Size: 1015808 blocks (496 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t10d0s0 0 No Okay Yes


d100: RAID
State: Okay
Hot spare pool: hsp005
Interlace: 32 blocks
Size: 2031616 blocks (992 MB)
(Output truncated)


bash-3.00# metaparam -h none d100
bash-3.00# metastat -p
d5 -m d0 d10 1
d0 1 1 c0t8d0s0 -h hsp005
d10 1 1 c0t10d0s0 -h hsp005
d100 -r c0t8d0s1 c0t10d0s1 c0t12d0s1 -k -i 32b
d15 1 1 c0t12d0s0
hsp005 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4


Output - truncated:
# metastat
d0: Submirror of d5
State: Resyncing
Hot spare pool: hsp005
Size: 1015808 blocks (496 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t8d0s0 0 No Resyncing Yes c0t9d0s1


d10: Submirror of d5
State: Okay
Hot spare pool: hsp005
Size: 1015808 blocks (496 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t10d0s0 0 No Okay Yes

Comparison of Solaris Volume Manager Software & Veritas Volume Manager


Sun Microsystems discourages the use of Veritas on the system disk (root disk/boot disk). Veritas volumes do not, by default, correspond to partitions. In the situation, irrespective of the cause, where the system no longer boots, the system administrator must be able to gain access to the file systems on the system disk without the aid of the volume management software. This is guaranteed to be possible when each volume corresponds to a partition in the volume table of contents (VTOC) of the system disk.

Solaris Volume Manager volumes can be accessed even when booted from CD-ROM. This in turn eliminates the need to break off a mirror during upgrades, thus reducing the downtime and complexity of such an operation.

SVM software preserves the correspondence between the volumes defined in its state database and the disk partitions defined in the disk label (VTOC) at all times; disaster recovery is always possible by a standard method, without extra complications.

It’s easy to grow /var using the VxVM graphical tool. This can be done by anyone at any time to solve a disk space problem. However, this breaks the volume-partition relation, as the /var volume is now a concatenation of two (not necessarily contiguous) subdisks.

When a disk breaks, the replacement disk is initialized. Slices 3 and 4 become the VxVM private and public regions, and subdisks are allocated to be mirrored with the surviving disk. Partitions may be created by the VxVM software for these subdisks.

There are 2 drawbacks to using SVM software in combination with VxVM software:

1. Cost
2. SVM software requires that a majority of the state databases be found at boot time (the quorum rule). When all data disks are under VxVM software, only two disks may be left under SVM software. If one of these disks breaks, there is no state database quorum and the system will not boot without manual intervention.

NOTE:
The intervention consists of removing the inaccessible state database copies (using the metadb -d command) and rebooting, as sketched below.
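
A sketch of that manual intervention (the slice holding the dead replicas is hypothetical):

# metadb -i (the flags column marks replicas with errors)
# metadb -d c1t1d0s7 (delete the inaccessible replicas)
# reboot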

Monday, February 1, 2010

Veritas Volume Manager

Comparing CDS & sliced disks:

CDS:
1. The private region (metadata) and public region (user data) are created on a single partition.
2. Suitable for moving between different operating systems.
3. Not suitable for boot partitions.

SLICED DISKS:
1. The private region & public region are created on separate partitions, for example slices 3 and 4.
2. Not suitable for moving between different operating systems.
3. Suitable for boot partitions.