Published by Jim Salter // July 27th, 2017
One of my pet peeves is people talking about ZFS “striping” writes across a pool. It doesn’t help any that ZFS core developers use this terminology too; it’s sloppy and not really correct.
ZFS distributes writes among all the vdevs in a pool. If your vdevs all have the same amount of free space available, this will resemble a simple striping action closely enough. But if you have different amounts of free space on different vdevs – either due to disks of different sizes, or vdevs which have been added to an existing pool – you’ll get more blocks written to the drives which have more free space available.
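A minimal sketch of that free-space bias, as a toy model (this is not ZFS's actual metaslab allocator, just an illustration of distribution weighted by free space):

```python
# Toy model: each block lands on a vdev with probability proportional
# to that vdev's remaining free space. Illustrative only -- the real
# ZFS metaslab allocator is considerably more involved.
import random

def distribute(blocks, free):
    """free: dict of vdev name -> free units; returns blocks written per vdev."""
    written = {name: 0 for name in free}
    free = dict(free)  # copy so we can decrement as blocks land
    for _ in range(blocks):
        total = sum(free.values())
        r = random.uniform(0, total)
        for name, f in free.items():
            r -= f
            if r <= 0:
                written[name] += 1
                free[name] -= 1
                break
    return written

random.seed(0)
print(distribute(1000, {"512M-vdev": 512, "2G-vdev": 2048}))
```

With vdevs of 512M and 2G worth of free space, the 2G vdev ends up with roughly four times as many blocks, which is exactly the behavior the real pool shows below.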
This came up on Reddit recently, when a senior sysadmin stated that a zpool queues the next write to the disk which responds with the least latency. That didn’t match my experience, which is that a zpool binds on the performance of its slowest vdev, period. So I tested it, creating a test pool with sparse images of mismatched sizes, stored side-by-side on the same backing SSD (which largely eliminates questions of latency).
root@banshee:/tmp# qemu-img create -f qcow2 512M.qcow2 512M
root@banshee:/tmp# qemu-img create -f qcow2 2G.qcow2 2G
root@banshee:/tmp# qemu-nbd -c /dev/nbd0 /tmp/512M.qcow2
root@banshee:/tmp# qemu-nbd -c /dev/nbd1 /tmp/2G.qcow2
root@banshee:/tmp# zpool create -oashift=13 test nbd0 nbd1
OK, we’ve now got a 2.5 GB pool, with vdevs of 512M and 2G, and pretty much guaranteed equal latency between the two of them. What happens when we write some data to it?
root@banshee:/tmp# dd if=/dev/zero bs=4M count=128 status=none | pv -s 512M > /test/512M.zero
 512MiB 0:00:12 [41.4MiB/s] [================================>] 100%
root@banshee:/tmp# zpool export test
root@banshee:/tmp# ls -lh *qcow2
-rw-r--r-- 1 root root 406M Jul 27 15:25 2G.qcow2
-rw-r--r-- 1 root root 118M Jul 27 15:25 512M.qcow2
There you have it – writes distributed with a ratio of roughly 4:1, matching the mismatched vdev sizes. (I also tested with a 512M image and a 1G image, and got the expected roughly 2:1 ratio afterward.)
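The arithmetic behind that expectation is simple enough to check. Under the free-space-proportional model, each empty vdev should receive a share of the 512 MiB write in proportion to its size (this ignores metadata and qcow2 overhead, so the observed files won't match exactly):

```python
# Expected split of a 512 MiB write across empty vdevs of 512 MiB and
# 2 GiB, assuming distribution proportional to free space. Overhead
# (metadata, qcow2 allocation granularity) is ignored in this model.
vdevs = {"512M": 512, "2G": 2048}  # free space per vdev, in MiB
total_free = sum(vdevs.values())   # 2560 MiB
write = 512                        # MiB written to the pool

for name, free in vdevs.items():
    share = write * free / total_free
    print(f"{name} vdev: {share:.1f} MiB expected")
```

That works out to about 102 MiB on the 512M vdev and about 410 MiB on the 2G vdev, in the same ballpark as the 118M and 406M qcow2 files seen above.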
OK. What if we put one 512M image on SSD, and one 512M image on much slower rust? Will the pool distribute more of the writes to the much faster SSD?
root@banshee:/tmp# qemu-img create -f qcow2 /tmp/512M.qcow2 512M
root@banshee:/tmp# qemu-img create -f qcow2 /data/512M.qcow2 512M
root@banshee:/tmp# qemu-nbd -c /dev/nbd0 /tmp/512M.qcow2
root@banshee:/tmp# qemu-nbd -c /dev/nbd1 /data/512M.qcow2
root@banshee:/tmp# zpool create test -oashift=13 nbd0 nbd1
root@banshee:/tmp# dd if=/dev/zero bs=4M count=128 | pv -s 512M > /test/512M.zero
 512MiB 0:00:48 [10.5MiB/s][================================>] 100%
root@banshee:/tmp# zpool export test
root@banshee:/tmp# ls -lh /tmp/512M.qcow2 ; ls -lh /data/512M.qcow2
-rw-r--r-- 1 root root 266M Jul 27 15:07 /tmp/512M.qcow2
-rw-r--r-- 1 root root 269M Jul 27 15:07 /data/512M.qcow2
Nope. Once again, zfs distributes the writes according to the amount of free space available – even when this causes performance to bind *severely* on the slowest vdev in the pool.
You should expect to see this happen with failing hardware as well: if any one disk is throwing massive latency rather than simply returning errors, your entire pool will throw massive latency too, until the deranged disk has been removed. You can usually spot this sort of problem using iotop: all of the disks in your pool will show roughly the same throughput in MB/sec (assuming they’ve got equivalent amounts of free space left!), but your problem disk will show a much higher %UTIL than the rest. Fault that slow disk, and your pool performance returns to normal.