ZFS as a volume manager

Note : This page may contain outdated information and/or broken links; some of the formatting may be mangled due to the many different code-bases this site has been through in over 20 years; my opinions may have changed etc. etc.

While browsing the ZFS man page recently, I made an interesting discovery: ZFS can export block devices from a zpool, which means you can separate "ZFS the volume manager" from "ZFS the filesystem". This may well be old news to many; however, I haven’t seen many references to it on the web, so I thought I’d post a quick blog update.

The example used in this post is the creation of a mirrored zpool which is then used to create a block device, on top of which I’ll create a UFS filesystem. The reasons for doing this are many and varied: you may have an application that needs UFS (particularly forcedirectio); you may need to create a block device for some reason but all your storage is currently tied up in zpools; or you may just need a quick block device to use for testing. Using ZFS as a volume manager also has its advantages over something like SVM (formerly "DiskSuite"). The management features are much improved (along with a browser-based GUI, if that’s your thing) and you also gain access to ZFS features which operate at the volume manager layer and aren’t dependent on the filesystem parts of ZFS. This includes features such as end-to-end error checking and recovery, along with snapshots. Read on for the full update…

First, as I don’t have any spare disks available, I’ll create some files to use as pseudo-disks. One of the many cool things about ZFS is that you can use it on anything from files like this for testing, to USB keyfobs, to honking great storage arrays. As a proof of concept, all I need is a couple of 100MB files, which I’ll then use to create a ZFS mirrored pool. I don’t actually gain any redundancy here, as both files live on the same underlying disk, but you can see how this could be used in a real-life scenario:

[mark@solaris:~] # mkfile 100m disk1
[mark@solaris:~] # mkfile 100m disk2
[mark@solaris:~] # sudo zpool create test mirror $PWD/disk1 $PWD/disk2

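On real hardware the command is identical; you’d just hand zpool whole disks instead of files. A quick sketch, with hypothetical device names:

[mark@solaris:~] # sudo zpool create test mirror c1t0d0 c1t1d0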

Now we can see the "test" pool is online and view its status:

[mark@solaris:~] # zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
test   95.5M  52.5K  95.4M   0%  ONLINE  -
[mark@solaris:~] # zpool status test
  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME                         STATE     READ WRITE CKSUM
        test                         ONLINE       0     0     0
          mirror                     ONLINE       0     0     0
            /export/home/mark/disk1  ONLINE       0     0     0
            /export/home/mark/disk2  ONLINE       0     0     0

errors: No known data errors
[mark@solaris:~] # zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
test   75.5K  63.4M  24.5K  /test

So I have around 63MB to play with. Right now, I could just proceed as normal and create a few ZFS filesystems, but instead I’ll create a block volume. This is done by using the "-V" flag with "zfs create", and specifying a size for the block device:

[mark@solaris:~] # sudo zfs create -V 60m test/testvol

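As a quick sanity check, "zfs get" will confirm the volume was created with the right size via its volsize property (volblocksize shows the block size it was created with):

[mark@solaris:~] # zfs get volsize,volblocksize test/testvol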
The "zfs create -V" command creates an entry under /dev/zvol, much like Veritas creates device nodes under /dev/vx/. We can then format the volume and mount it under /mnt/testvol:

[mark@solaris:~] # sudo newfs /dev/zvol/rdsk/test/testvol 
newfs: construct a new file system /dev/zvol/rdsk/test/testvol: (y/n)? y
/dev/zvol/rdsk/test/testvol: 122880 sectors in 20 cylinders of 48 tracks, 128 sectors
60.0MB in 2 cyl groups (14 c/g, 42.00MB/g, 20160 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 86176,

[mark@solaris:~] # sudo mkdir /mnt/testvol
[mark@solaris:~] # sudo mount -F ufs /dev/zvol/dsk/test/testvol /mnt/testvol

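If you wanted the mount to persist across reboots, the volume can go in /etc/vfstab like any other UFS device; a sketch of the entry, using the paths from above:

/dev/zvol/dsk/test/testvol /dev/zvol/rdsk/test/testvol /mnt/testvol ufs 2 yes -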
Let’s check it’s really there:

[mark@solaris:~] # df -h /mnt/testvol 
Filesystem                  size   used  avail capacity  Mounted on
/dev/zvol/dsk/test/testvol
                             55M   1.0M    49M     3%    /mnt/testvol

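Before moving on, "fstyp" against the raw device also confirms Solaris sees it as UFS:

[mark@solaris:~] # fstyp /dev/zvol/rdsk/test/testvol
ufs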
Now let’s try snapshotting it. I’ll create a file, snapshot the filesystem, and then delete the file to prove we could get it back.

[mark@solaris:~] # sudo touch /mnt/testvol/testfile
[mark@solaris:~] # ls -lh /mnt/testvol/testfile
-rw-r--r-- 1 root root 0 2007-07-06 22:57 /mnt/testvol/testfile
[mark@solaris:~] # sudo zfs snapshot test/testvol@1

Now that we’ve got our snapshot, we can delete "testfile", as it’s safely preserved in the snapshot. Taking the snapshot created another block device under /dev/zvol, which can be mounted read-only:

[mark@solaris:~] # sudo rm /mnt/testvol/testfile 
[mark@solaris:~] # sudo mkdir /mnt/testsnapshot
[mark@solaris:~] # sudo mount -F ufs -o ro /dev/zvol/dsk/test/testvol@1 /mnt/testsnapshot
[mark@solaris:~] # ls /mnt/testsnapshot/
lost+found/ testfile

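From here, restoring the deleted file is just a copy out of the read-only snapshot mount:

[mark@solaris:~] # sudo cp /mnt/testsnapshot/testfile /mnt/testvol/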
And just to confirm, "zfs list" now shows our snapshotted volume:

[mark@solaris:~] # zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
test            60.1M  3.39M  24.5K  /test
test/testvol    5.14M  58.2M  5.06M  -
test/testvol@1  74.5K      -  5.06M  -

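If we’d wanted to revert the whole volume rather than fish out individual files, "zfs rollback" would do it; the UFS filesystem needs to be unmounted first, since the volume changes underneath it:

[mark@solaris:~] # sudo umount /mnt/testvol
[mark@solaris:~] # sudo zfs rollback test/testvol@1
[mark@solaris:~] # sudo mount -F ufs /dev/zvol/dsk/test/testvol /mnt/testvol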
How cool is that?
