
This guide focuses on setting up ZFS for use with FreeBSD. It assumes you have separate disks dedicated to ZFS. It also assumes you followed the previous two guides and are using the AHCI driver for the disks you want to use for ZFS.

Identifying your disks
We begin by identifying your disks:
dmesg | grep ad
atacontrol list
camcontrol devlist
Write down the device names together with the HDD brand/type/capacity so you know how your disks are identified by FreeBSD. I recommend you disconnect any disks that may have important data on them, so you avoid the mistake of writing to such a drive instead of the one you intended. If there is nothing important on your disks, then there's little that can go wrong.

Cleaning your disks
If a disk was used before, it may still contain partitions and other leftover metadata; you can clear those with a command like this one:
dd if=/dev/zero of=/dev/ada0 bs=1m count=1
That writes zeroes over the first megabyte of the hard drive /dev/ada0. Please, please, please: do NOT make a mistake with the device name. This is dangerous stuff, so make sure any disk with valuable data is disconnected, just in case. Disks that are mounted, such as your system disk, cannot be written to with this command; you would get an 'Operation not permitted' error or something similar.
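If you want to get comfortable with the dd flags before touching real hardware, you can rehearse on a scratch file first; a typo then cannot hit a real disk. The file /tmp/fakedisk below is a stand-in for /dev/ada0 (bs=1048576 is one megabyte spelled out in bytes, which works with every dd version):

```shell
# Create a 10 MiB scratch "disk" and fill it with random data:
truncate -s 10485760 /tmp/fakedisk
dd if=/dev/urandom of=/tmp/fakedisk bs=1048576 count=10 conv=notrunc 2>/dev/null

# The same wipe as on a real disk: zero the first megabyte only.
dd if=/dev/zero of=/tmp/fakedisk bs=1048576 count=1 conv=notrunc 2>/dev/null

# Count non-zero bytes in the first MiB -- should print 0 after the wipe:
head -c 1048576 /tmp/fakedisk | tr -d '\0' | wc -c
```

Once the dry run behaves as expected, substitute the real device name, after triple-checking it.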

Make sure each disk you want to use for ZFS has no partitions left on it. If you run ls -l /dev/ada*, you should only see /dev/ada0, for example, and not /dev/ada0s1 - the s1 suffix indicating a partition.

Optional: create partitions
There may be reasons to use partitions on your disks. One is that Windows is quick to write to disks that have no partition table: it asks the user to 'initialize' the disk, and doing so would corrupt your ZFS disks if you ever connected them to a Windows host. Using partitions prevents that; Windows will at least see that the partition was created by a non-Windows OS and leave it alone.

A second reason to use partitions is when your disks are attached to "FakeRAID" controllers that cannot disable their RAID BIOS. These controllers are often found on cheap add-on PCI or PCI-Express SATA controllers. You can use them for ZFS, but make sure the last sector (512 bytes) is not used by FreeBSD or ZFS, because that "metasector" is where the controller's RAID BIOS stores its RAID configuration. Partitions can help you avoid it.

If these two concerns do not apply to you, you can skip this section and use raw disks instead. Note that the default partitioning scheme creates misaligned partitions on 4K-sector, aka "Advanced Format", hard drives. These drives need aligned partitions, so either avoid partitions on such disks entirely or manually correct the starting offset from sector 63 to 64. Having said that, here is how you create partitions on your disks:
fdisk -I /dev/ada0
Repeat for all your disks for use with ZFS.
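The alignment remark above can be checked with a bit of shell arithmetic: a partition start is 4K-aligned when its byte offset is a multiple of 4096. The traditional start sector 63 fails that test, while 64 passes:

```shell
# Why offset 63 is a problem on 4K-sector drives: the partition start
# in bytes (sector * 512) must be a multiple of 4096.
for off in 63 64; do
    bytes=$(( off * 512 ))
    if [ $(( bytes % 4096 )) -eq 0 ]; then
        echo "start sector $off: 4K-aligned"
    else
        echo "start sector $off: NOT 4K-aligned"
    fi
done
```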

Writing GEOM labels to your disks
GEOM labels are very useful to identify your harddrives by a familiar name. GEOM labels will identify your drive by a unique name you choose. So it does not matter how you connect your disk, it will always be identified by its GEOM label to avoid confusion and allow for flexibility without headaches. GEOM labels are written to the last sector (512 bytes) of each drive, or partition, and should only be used on new (empty) drives as it overwrites a small part that may otherwise be in use.

If you chose to skip the previous step and use raw devices (/dev/ada0), execute this:
glabel label disk1 /dev/ada0
glabel label disk2 /dev/ada1
glabel label disk3 /dev/ada2
glabel label disk4 /dev/ada3
Assuming your disks are named /dev/ada0 through /dev/ada3, these commands write labels to them. Each disk is now accessible via its /dev/label/diskX entry, while the original /dev/adaX entry remains available as well.

If you chose to use partitions on your disks, you should write the GEOM label to the partition instead. Otherwise glabel would write its 512 bytes exactly where an add-on RAID BIOS stores its RAID configuration data. So perform everything on the 's1' partition instead:
glabel label disk1 /dev/ada0s1
glabel label disk2 /dev/ada1s1
glabel label disk3 /dev/ada2s1
glabel label disk4 /dev/ada3s1

Creating the ZFS pool
Now the moment is finally there. Go ahead and create your ZFS pool:
zpool create tank raidz label/disk{1..4}
This will create a 4-disk RAID-Z array (comparable to RAID5) called "tank". Notice the {1..4} at the end; this is shorthand for writing:
zpool create tank raidz label/disk1 label/disk2 label/disk3 label/disk4
These two create commands are equivalent; execute only one of them. The {1..4} saves you a lot of typing, especially with many disks.
Now check the status of your array with:
zpool status tank
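One caveat on the {1..4} shorthand: it is brace expansion, a feature of shells like bash and zsh. A plain POSIX /bin/sh passes the braces through literally, which would hand zpool a nonsense device name. You can preview the expansion before running the real command:

```shell
# Preview what the shell actually hands to zpool. Under bash this
# expands to the four label names; under plain sh it would not.
bash -c 'echo label/disk{1..4}'
```

If your shell does not expand it, simply spell out the four names as in the long form above.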

Mountpoints and Compression
It might be useful to move some parts of your system to the ZFS array, so you can use compression for example. The /usr/ports and /usr/src directories are the best examples of that, as they are a large collection of text files that compress well. But how do we move our /usr/ports and /usr/src to ZFS? This example shows how:
mv /usr/ports /usr/TEMPports
mv /usr/src /usr/TEMPsrc
zfs create tank/usr
zfs create tank/usr/ports
zfs create tank/usr/src
zfs set mountpoint=/usr/ports tank/usr/ports
zfs set mountpoint=/usr/src tank/usr/src
zfs set compression=gzip-9 tank/usr
mv /usr/TEMPports/* /usr/ports/
mv /usr/TEMPsrc/* /usr/src/
rmdir /usr/TEMPports /usr/TEMPsrc
Okay, let's see what we did here. We first renamed the existing /usr/src and /usr/ports directories, then created new filesystems on the ZFS array, mounted those filesystems at /usr/src and /usr/ports, enabled the highest level of compression on tank/usr (which the ports and src filesystems inherit), moved all data from the UFS filesystem to the ZFS filesystems, and finally removed the temporary directories. Now that /usr/src and /usr/ports are on ZFS, you can check the compression ratio with:
zfs get compressratio tank/usr
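To get a feel for why source trees compress so well, here is a deliberately toy illustration using the gzip command at level 9 - the same algorithm and level as compression=gzip-9. The sample file is just one C-like line repeated, so the ratio is far better than a real ports tree would achieve; it only demonstrates the principle:

```shell
# Build ~100 KB of highly repetitive C-like text:
yes 'int main(void) { return 0; }' | head -c 102400 > /tmp/sample.txt

# Compare original size to gzip -9 output size:
orig=$(wc -c < /tmp/sample.txt)
comp=$(gzip -9 -c /tmp/sample.txt | wc -c)
echo "original: $orig bytes, compressed: $comp bytes"
```

Binary data such as media files, by contrast, barely compresses at all, which is why /usr/ports and /usr/src are the usual candidates for gzip compression.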

ZVOLs are virtual block devices backed by your ZFS pool; they behave like virtual hard drives. You can, for example, create an 8GB ZVOL and use it as if it were a separate hard drive in the system. This is useful when you want to serve iSCSI from ZFS-backed storage, since you can snapshot the iSCSI images and use other ZFS features on them. To illustrate this:
zfs create -V 10g tank/image0
diskinfo -v /dev/zvol/tank/image0
This creates a 10GB virtual hard drive called image0, which we can configure as a target for the istgt daemon that handles iSCSI requests.
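A minimal istgt logical-unit stanza pointing at that ZVOL might look roughly like the following sketch. The target, portal-group, and initiator-group names here are assumptions for illustration; consult the sample istgt.conf shipped with the port for the full file and the surrounding [PortalGroup]/[InitiatorGroup] sections it requires:

```
# Hypothetical fragment of /usr/local/etc/istgt/istgt.conf
[LogicalUnit1]
  TargetName image0
  Mapping PortalGroup1 InitiatorGroup1
  UnitType Disk
  LUN0 Storage /dev/zvol/tank/image0 Auto
```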

To use NFS, you really need to have zfs_enable="YES" in the /etc/rc.conf file; just loading the ZFS kernel module is not enough! You also need the relevant NFS server entries in that file to make everything work with ZFS. Sharing via NFS is then very easy:
zfs create tank/public
zfs set sharenfs="on" tank/public
zfs create tank/protected
zfs set sharenfs="-network -mask" tank/protected
zfs create tank/private
zfs set sharenfs="-network -mask" tank/private
This example creates three filesystems and shares them using NFS. The first one, 'public', is available to everyone on the network (or even the internet, if it can reach your box). The second, 'protected', and the third, 'private', use the -network and -mask options to restrict access to the network range you specify there, such as your LAN. Note that these addresses can still be spoofed; real security can be enforced by a firewall like pf, the famous packet filter ported from OpenBSD.
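For reference, the /etc/rc.conf entries mentioned above would look something like the fragment below. The exact set of NFS daemons you enable is a judgment call for your setup; see rc.conf(5) for the full list:

```
# ZFS itself:
zfs_enable="YES"
# NFS server and its helpers:
nfs_server_enable="YES"
rpcbind_enable="YES"
mountd_enable="YES"
```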

To enable SMB/CIFS access to the ZFS filesystem, so Windows clients can access it, you need to install Samba first:
cd /usr/ports/net/samba34
make install clean
After installation make sure samba_enable="YES" is present in /etc/rc.conf. If that's true, continue with the configuration file:
ee /usr/local/etc/smb.conf

I prefer to delete everything in this file and start over fresh:
#============================ Global Settings ================================

workgroup = WORKGROUP
server string = Samba Server
security = user
; hosts allow = 192.168.1. 192.168.2. 127.
; interfaces =
load printers = no
guest account = nobody
log file = /var/log/samba/log.%m
log level = 2
max log size = 50
hide dot files = yes
; passdb backend = tdbsam
socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=65536
use sendfile = yes
strict locking = no
follow symlinks = yes
wide symlinks = yes
unix extensions = no

# Charset settings
; display charset = koi8-r
; unix charset = koi8-r
; dos charset = cp866

# Use extended attributes to store file modes
; store dos attributes = yes
; map hidden = no
; map system = no
; map archive = no

#============================ Share Definitions ==============================

[tank]
path = /tank
browseable = yes
writable = yes
valid users = nfs
create mask = 0660
directory mask = 0770
Now start the Samba daemon (smbd and nmbd) by hand:
/usr/local/etc/rc.d/samba start

Check if it is started:
ps auxw | grep smbd | grep -v grep

The last issue we have to resolve is permissions. If you want both NFS and Samba/CIFS access to the same filesystem, you had better let them use the same usernames. NFS user IDs are determined client side: if an Ubuntu client connects over NFS, it will use its own username with uid 1000 to read and write files. So on the FreeBSD server you should create a new user with uid 1000, called "nfs", which is used for both NFS and Samba access.

Go ahead and create the user:
pw useradd nfs
Now edit the password file and change the user id of our newly created 'nfs' user to 1000:
export EDITOR=ee
vipw
Now edit the group file and change the group id of the 'nfs' group to 1000:
ee /etc/group
Now reset all permissions:
chown -R nfs:nfs /tank
This should allow proper access without any 'access denied' errors on the client side.

While Samba is running, add the newly created user to the Samba password database and give it a password:
smbpasswd -a nfs

Note that you can add more users this way, but aside from Samba permissions, the user listed in smb.conf must also have filesystem permission to the actual files. Look at the logs in /var/log/samba34 if it's not working!

Last updated: July-30-2010
