July 10, 2016
By: gotwf

Steppin' Into the Void

No shit. There I was…​. Searchin'. Searchin' here…​ searchin' there…​ Searchin', searchin' everywhere…​ for a systemd free Linux platform that checked all my boxes. I’d all but settled on Gentoo when this musl’d up halfwit happened to mention Voidlinux. Whisky, Tango, Foxtrot?! Yet another new Linux distro? Nope. Turns out Voidlinux has been around since circa 2008. Hmm…​. worth a look, says I!

Well, well, well…​ the front door looks promising, indeed! Voidlinux is not a fork, meaning it’s not just another niche respin of one of the big name distros, and hence offers a chance for something fresh. Perhaps innovative, even. Rolling release. Cool. Voidlinux uses runit. Rock on! [1] And LibreSSL. Rock steady, baby! Very promising, indeed. Devs can think for themselves and don’t just follow the herd. By now I’ve taken the bait, hook, line, and sinker. Reel me in.

But wait! There’s more! Behind door #4, Voidlinux utilizes a native package management system that supports both binary and source builds. Coded in C, so it’s likely fast. A quick look under the hood reveals it checks my secure package manager boxes as well. xbps [2] and xbps-src also sport a 2-clause BSD License. Say what? Heresy. The GPL fascists will have their breeder bits for this! Might these devs come from BSD* backgrounds? Yep. Shore 'nuff. Project lead Juan Romero Pardines is a former NetBSD dev head. Hmm…​ starting to look like the folks behind Voidlinux are smarter than the average bear, Boo-Boo!

Hot damn. I just peed myself.

ZOL on Voidlinux

Moving on, let’s see how well ZOL takes to Voidlinux. I am targeting a workstation setup to replace my aging Archlinux mdraid/lvm daily driver that’s become a sketchy proposition since Archlinux adopted systemd.

As of 2.02-beta3, Grub2 sports full ZFS feature flag support for ZOL 0.6.5. Opinions are still divided regarding the use of separate /boot and / partitions, but I’m going to live dangerously and do a single ZFS pool for root/boot for this setup.

Voidlive ISO

Voidlinux is not exactly mainstream and I erroneously presumed I would have to roll my own ZOL dkms package from source. Turns out it’s been available in xbps since August 2014. Void takes a minimalist approach, however, and there isn’t enough free space provided by the stock live image to install ZFS. Boo Hiss! What’s a body to do? Well, you can either spin up your own using void-mklive or save yourself some work and grab the ZOL enhanced image I spun up for this project. [3]

Make a bootable usb key. Substitute for sdX as appropriate.

# dd bs=512K if=./void-live-zfs-dkms-x86_64-20160626.iso \
of=/dev/sdX; sync
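
If you’re the cautious type, you can verify the burn by comparing the device against the image, reading back only as many bytes as the iso actually occupies. A quick sketch:

# cmp -n $(stat -c %s ./void-live-zfs-dkms-x86_64-20160626.iso) \
./void-live-zfs-dkms-x86_64-20160626.iso /dev/sdX && echo "good burn"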

Let’s Get Soaking Wet!

Now git yerself booted into that puppy. I like to ssh in from my main squeeze so I have ready access to my favorite tools. Void enables ssh out of the box. No additional muss nor fuss. Me likey. [4]

$ ssh-copy-id -i ~/.ssh/id_ed25519.pub your-target-host
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/XXXXXXXX/.ssh/id_ed25519.pub"
The authenticity of host 'aaa.bbb.ccc.ddd (555.444.333.222)' can't be established.
ECDSA key fingerprint is SHA256:/ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ.
Are you sure you want to continue connecting (yes/no)? yes

/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.27.98's password: voidlinux

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'your-target-host'" and check to make sure that only the key(s) you wanted were added.

$ ssh your-target-host
Last login: Fri Jul  1 18:22:01 2016 from 111.222.333.444

That was easy ;D Now that we’re jacked into our target install box, let’s verify our hard disk info:

# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 698.7G  0 disk
sdb      8:16   0 698.7G  0 disk
sdc      8:32   1   3.8G  0 disk
├─sdc1   8:33   1   283M  0 part /run/initramfs/live
└─sdc2   8:34   1    16M  0 part
sr0     11:0    1  1024M  0 rom
loop0    7:0    0 233.5M  1 loop
loop1    7:1    0   718M  1 loop
loop2    7:2    0   512M  0 loop

Time to partition our drives. I am going to set this rig up with a raid1 mirror using sda and sdb. This box is capable of BIOS booting from GPT partitioned disks. Lucky me (may the Saints preserve us from UEFI hell). I prefer to use GPT over MSDOS style partitions for the following reasons:

  1. GPT provides an on-disk backup partition table. Yes, I know it’s best practice to keep an offline backup copy, and I do, but I still like having the on-disk redundancy should I ever have to recover a corrupt GPT partition table.

  2. GPT adds CRC32 checksums to its data structures. Yep. These also get stored twice.

  3. I don’t trust GRUB and favor using a dedicated BIOS boot partition (type ef02) over embedding Grub in the "MBR gap".

As such, the iso I built includes gptfdisk. Sweet ;D Let’s Git-R-Done!

# gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries.

Command (? for help): n
Partition number (1-128, default 1): 2
First sector (34-1465149134, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-1465149134, default = 1465149134) or {+-}size{KMGTP}: +2M
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): ef02
Changed type of partition to 'BIOS boot partition'

Command (? for help): n
Partition number (1-128, default 1):
First sector (34-1465149134, default = 6144) or {+-}size{KMGTP}: +128M
Last sector (268288-1465149134, default = 1465149134) or {+-}size{KMGTP}: -128M
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): bf00
Changed type of partition to 'Solaris root'

Command (? for help): p
Disk /dev/sda: 1465149168 sectors, 698.6 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): AC45081A-160B-4DFC-BA93-58953FD6D80D
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1465149134
Partitions will be aligned on 2048-sector boundaries
Total free space is 526302 sectors (257.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1          268288      1464886990   698.4 GiB   BF00  Solaris root
   2            2048            6143   2.0 MiB     EF02  BIOS boot partition

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/sda.
The operation has completed successfully.

Okay, what was that all about, eh? Lemme 'splain…​.

The first partition, partition #2, is the BIOS boot partition. If grub-install finds a partition of type ef02, it installs itself there. While this partition only requires on the order of 31KiB, it’s commonly made 1MiB for rounding and to allow for future growth. As UEFI uptake by hardware manufacturers "progresses" I suspect GRUB’s future is less than bright, but I doubled it to 2MiB nonetheless as my way of making a snarky comment about GRUB bloatware.

The second partition, partition #1, will host our ZFS pool. Various how-to’s around the 'net use bf00, bf01, bf05, or whatever. ZFS doesn’t actually care what type this partition is. Labeling it something 'Solaris-y' is just a crutch for us humanoids to help us remember it’s being used for a ZFS pool. I’ve chosen Solaris root here to help remind me that this is a root zpool.

Okay, what’s up with the +/- 128M thangs? I doubt I’ll ever use these disks with Mac OSX but predicting the future is tricky at best. Reportedly, some tools can also hork things up if they don’t have sufficient breathing room. [5] Hence, I suggest following Apple’s recommendation of leaving 128MiB of free space after each partition. Leaving 128MiB free space at the end also provides a bit of insurance against manufacturer variance should you ever need to replace one of your drives. Moreover, 256MiB is insignificant on modern drives.

Finally, what is up with my crazy numbering? Well, again, it doesn’t really matter what number you give your partitions. We could have just as easily used 3 and 7. Indeed, ZOL on a whole disk defaults to using 1 and 9. Designating the partition we can boot from as partition #1 is just another memory crutch in the event something goes awry and we need to invoke GRUB command line commando mode.
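
For the impatient, I believe the whole interactive session above collapses into a single sgdisk invocation, sketched below. Beware: sgdisk writes immediately without asking, so aim carefully and verify with -p afterwards.

# sgdisk -n 2:2048:+2M -t 2:ef02 -n 1:+128M:-128M -t 1:bf00 /dev/sda
# sgdisk -p /dev/sda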

With all that 'splainin out of the way, let’s clone that table onto our second device, sdb.

# sgdisk -b sda-part.table /dev/sda
The operation has completed successfully.
# sgdisk -l sda-part.table /dev/sdb
Creating new GPT entries.
The operation has completed successfully.

Being an exact copy, we now have two drives with identical GUIDs. Not good. So let’s give sdb a new GUID and confirm that sda and sdb are indeed unique:

# sgdisk -G /dev/sdb
The operation has completed successfully.
# sgdisk -p /dev/sdb | grep -i guid
Disk identifier (GUID): 0FCC8550-ED78-407D-BEEA-D43FB73C7257
# sgdisk -p /dev/sda | grep -i guid
Disk identifier (GUID): 169BB909-2184-4BFF-9A41-107740B0F043

Nice. We have at least one 7 in each GUID (I changed one of them behind the scenes ;D)

We are almost ready to create our ZFS pool. ZFS performs best when its block size matches your drives' sector size. ZFS automagically detects this and sets an alignment shift value as appropriate. Nevertheless, I prefer setting ashift explicitly. Read on, dear reader, for enlightenment as to why.

Modern Advanced Format (AF) disks use 4K sectors. Some disks, particularly early AF disks, lied about this and reported 512 byte logical sector sizes in order to maintain compatibility with Windows XP. Legacy operating systems notwithstanding, does anybody trust hardware manufacturers to tell the truth? Nope, didn’t think so. They’re in the marketing business, fer' chrissakes! So let’s double check it. [6]

# smartctl -i /dev/sda | grep -i 'sector size'
Sector Size:      512 bytes logical/physical
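
If smartctl isn’t handy on your install medium, lsblk can report the same information (assuming a reasonably recent util-linux), printing something like:

# lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sda
NAME LOG-SEC PHY-SEC
sda      512     512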

I like to use good quality drives. These are older WD RE2’s, which employ 512 bytes for both physical and logical sector size. Best practice suggests setting an ashift of 12 (2^12 = 4096 = 4K) regardless, since if a drive fails the replacement will almost certainly use AF. Well, if either of these drives fails, both will be replaced and I’ll just clone the data over to a new pool. In the meantime, using 4K blocks on 512 byte drives wastes space, so I will be creating my pools using ashift=9 (2^9 = 512).

ZFS on Linux supports several options for selecting /dev names when creating a pool. Top level dev names, e.g. /dev/sda1, are the default for consistency with other ZFS implementations but are best reserved for testing. I’m going to use 'by-id' for this set up. This decision will cause us some pain due to a GRUB bug that’s been outstanding since the last POTUS election, but the pain will be minor enough that the benefits justify the costs. Your mileage may vary. [7]

# ll -h /dev/disk/by-id/ | grep -i wdc | grep part1
lrwxrwxrwx 1 root root 10 Jul  1 18:57 ata-WDC_WD7500AYYS-01RCA0_WD-WCAPT0562110-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jul  1 18:54 ata-WDC_WD7500AYYS-01RCA0_WD-WCAPT0577337-part1 -> ../../sdb1

We’re almost ready to create our pool but not quite. Commercial software frequently utilizes "hostid" to deter license/subscription abuse. ZFS pools use "hostid" as a precaution to keep pools from inadvertently being imported by the wrong system. This is more of a potential issue in datacenter environments than at home, but I still like to follow best practices. Voidlinux, being a true FOSS project, has no need to set a hostid and falls back to a default of 00000000.

# ll /etc/hostid
ls: cannot access '/etc/hostid': No such file or directory
# hostid
00000000

So we need to give it one. You can choose whatever you like for this but I will be basing mine on the MAC address of the first ethernet device. [8]

# ip link sh | grep ether
    link/ether 04:4b:80:80:80:03 brd ff:ff:ff:ff:ff:ff
    link/ether 00:e0:81:5d:fc:8a brd ff:ff:ff:ff:ff:ff

Now let’s write that to /etc/hostid. If you’re doing the copy pasta polka, don’t forget to strip out the colons.

# echo "044b80808003" > /etc/hostid
# hostid
62343430
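
If that output looks unrelated to what we just wrote, fear not. As best I can tell, gethostid() reads only the first four bytes of /etc/hostid as a raw little-endian word, so the ASCII characters "044b" (hex 30 34 34 62) come back out as 62343430. You can eyeball the bytes yourself:

# od -An -tx1 -N4 /etc/hostid
 30 34 34 62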

Alright then. Let’s create that pool. While the name of the pool can be anything (within the limits of the naming restrictions), many guides around the net use "rpool" as their de facto default top level pool name. Some platforms automatically import anything so named. Not my cup of tea. Instead, I prefer to base my top level names on hostname. [9]

# zpool create -f -o ashift=9 -o cachefile= -O compression=lz4 \
> -m none -R /mnt/void rogue mirror \
> /dev/disk/by-id/ata-WDC_WD7500AYYS-01RCA0_WD-WCAPT0562110-part1 \
> /dev/disk/by-id/ata-WDC_WD7500AYYS-01RCA0_WD-WCAPT0577337-part1

Let’s check out our handiwork.

# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rogue   696G   286K   696G         -     0%     0%  1.00x  ONLINE  /mnt/void
# zpool status
  pool: rogue
 state: ONLINE
  scan: none requested
config:

	NAME                                                 STATE     READ WRITE CKSUM
	rogue                                                ONLINE       0     0     0
	  mirror-0                                           ONLINE       0     0     0
	    ata-WDC_WD7500AYYS-01RCA0_WD-WCAPT0562110-part1  ONLINE       0     0     0
	    ata-WDC_WD7500AYYS-01RCA0_WD-WCAPT0577337-part1  ONLINE       0     0     0

errors: No known data errors
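
Trust, but verify. zdb can read the ashift straight off a vdev label, no cachefile required (output trimmed; zdb prints it once per label):

# zdb -l /dev/disk/by-id/ata-WDC_WD7500AYYS-01RCA0_WD-WCAPT0562110-part1 | grep ashift
        ashift: 9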

Break It on Down!

Lord have mercy! As yet, our pool doesn’t have any filesystems/datasets.

# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
rogue   244K   674G    19K  none

Let’s make some. But what?

I like using high level "containers" to separate out stuff that changes often from that which does not. Ditto essential platform bits from non-essential stuff like package distfiles. This sets me up for being able to snapshot the important things without wasting space on bits that are readily replaceable.

Opinions are varied on how to handle logs. I prefer mine to be comprehensive. Should I roll back a snapshot, I don’t want my logs to go back in time as well and lose information. Additionally, archived logs are usually compressed by log rotating daemons. ZFS features built-in compression at the filesystem level. Kind of silly to be compressing twice, eh? I may decide to let ZFS handle the compression at the dataset level and not have my logging daemon compress logs at all. No zgrep’ing. I haven’t decided yet but breaking out logs to a dedicated dataset provides the flexibility to season to taste.

I also like to separate out stuff that is not necessarily tied to a particular host or operating system, e.g. user directories and application data directories. Other candidates include things you may want to share out to other hosts, e.g. a directory of flac audio.

One can spend many hours of analysis paralysis scratching their noggin over such considerations. You may want to consult the Linux Filesystem Hierarchy Standard for insights. The good news is that ZFS makes it damn easy to change your mind later, so getting it perfect out of the gate isn’t do or die. Give it your best shot and get on with it. [10]

I will use "ROOT" as a container for essential platform system bits that I want to snapshot and possibly clone. [11]

# zfs create rogue/ROOT
# zfs create -o mountpoint=/ rogue/ROOT/void

Let’s give my users some love.

# zfs create rogue/USERS
# zfs create -o mountpoint=/root rogue/USERS/root
# zfs create -o mountpoint=/home/mortalme rogue/USERS/mortalme

"VOID" will be a container for stuff that I don’t want/need to keep synchronized with my platform bits, e.g. logs, and/or can be readily replaced, e.g. package manager and source distfiles. I will also use this for locally installed software, i.e. things I build from source that are not under the Voidlinux umbrella.

# zfs create rogue/VOID
# zfs create -o mountpoint=/var/log rogue/VOID/logs
# zfs create -o mountpoint=/var/cache/xbps rogue/VOID/distfiles
# zfs create -o mountpoint=/usr/local rogue/VOID/local
# zfs create -o mountpoint=/usr/src rogue/VOID/src

All together now:

# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rogue                  382K   674G    19K  none
rogue/ROOT              39K   674G    19K  none
rogue/ROOT/void         20K   674G    20K  /mnt/void
rogue/USERS             57K   674G    19K  none
rogue/USERS/mortalme    19K   674G    19K  /mnt/void/home/mortalme
rogue/USERS/root        19K   674G    19K  /mnt/void/root
rogue/VOID              95K   674G    19K  none
rogue/VOID/distfiles    19K   674G    19K  /mnt/void/var/cache/xbps
rogue/VOID/local        19K   674G    19K  /mnt/void/usr/local
rogue/VOID/logs         19K   674G    19K  /mnt/void/var/log
rogue/VOID/src          19K   674G    19K  /mnt/void/usr/src

ZFS automatically mounts filesystems, creating any directories as needed. Of course ZFS also supports legacy mounts via /etc/fstab. This is not an either/or, and you are free to mix and match should you have need to embrace your inner lunatic. Yep. You guessed it. With the exception of swap space, I do not use legacy mounts. [12]

# mount | grep void
rogue/ROOT/void on /mnt/void type zfs (rw,relatime,xattr,noacl)
rogue/USERS/root on /mnt/void/root type zfs (rw,relatime,xattr,noacl)
rogue/USERS/mortalme on /mnt/void/home/mortalme type zfs (rw,relatime,xattr,noacl)
rogue/VOID/logs on /mnt/void/var/log type zfs (rw,relatime,xattr,noacl)
rogue/VOID/distfiles on /mnt/void/var/cache/xbps type zfs (rw,relatime,xattr,noacl)
rogue/VOID/local on /mnt/void/usr/local type zfs (rw,relatime,xattr,noacl)
rogue/VOID/src on /mnt/void/usr/src type zfs (rw,relatime,xattr,noacl)
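
For the record, should you ever want to hand a dataset over to legacy management, it’s a two-step dance. A hypothetical example (I’m not actually doing this):

# zfs set mountpoint=legacy rogue/VOID/src
# echo "rogue/VOID/src /usr/src zfs defaults 0 0" >> /etc/fstab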

Properties set at pool creation time are inherited by child datasets.

# zfs get compression rogue/VOID/logs
NAME             PROPERTY     VALUE     SOURCE
rogue/VOID/logs  compression  lz4       inherited from rogue

As noted above I have yet to decide how I am going to handle my logs on Void. Should I so desire, turning off compression is a simple matter, e.g.

# zfs set compression=off rogue/VOID/logs

Just a small taste of the awesome sauce that is ZFS. See man zfs(8) for the full smorgasbord.

Time to Install Void!

Void’s official repositories are signed with RSA keys. I’ll be using the glibc repo. Other repositories, e.g. musl, will have a different signature. You should confirm signatures as appropriate prior to proceeding. [13]

# xbps-install -S -R https://repo.voidlinux.eu/current \
> -r /mnt/void base-system grub zfs

[*] Updating `https://repo.voidlinux.eu/current/x86_64-repodata' ...
x86_64-repodata: 1074KB [avg rate: 112KB/s]
`https://repo.voidlinux.eu/current' repository has been RSA signed by "Void Linux"
Fingerprint: 60:ae:0c:d6:f0:95:17:80:bc:93:46:7a:89:af:a3:2d
Do you want to import this public key? [Y/n] Y
110 packages will be downloaded:

  xbps-triggers-0.102_2
  base-files-0.139_1
  ncurses-base-6.0_2
  glibc-2.23_1

.... Snip bunches 'o output before we're prompted to proceed....

  dracut-044_1
  linux-4.6_1
  base-system-0.112_1
  os-prober-1.71_1
  device-mapper-2.02.158_1
  grub-2.02~beta3_2

Size to download:              119MB
Size required on disk:         459MB
Free space on disk:            674GB

Do you want to continue? [Y/n]
[*] Downloading binary packages
xbps-triggers-0.102_2.noarch.xbps: 8108B [avg rate: -- stalled --]
xbps-triggers-0.102_2.noarch.xbps.sig: 512B [avg rate: 10MB/s]
base-files-0.139_1.x86_64.xbps: 50KB [avg rate: 148KB/s]
base-files-0.139_1.x86_64.xbps.sig: 512B [avg rate: 9434KB/s]

.... Snip yet more bunches 'o stuff....

dracut-044_1: configuring ...
dracut-044_1: installed successfully.
linux-4.6_1: configuring ...
linux-4.6_1: installed successfully.
base-system-0.112_1: configuring ...
base-system-0.112_1: installed successfully.
os-prober-1.71_1: configuring ...
os-prober-1.71_1: installed successfully.
device-mapper-2.02.158_1: configuring ...
device-mapper-2.02.158_1: installed successfully.
grub-2.02~beta3_2: configuring ...
grub-2.02~beta3_2: installed successfully.

110 downloaded, 110 installed, 0 updated, 110 configured, 0 removed.

After doing its thing, xbps-install exits with something similar to the above.

Riding Into the Void

Next stop on our journey is to get chroot’d into our fresh Void bits and do some final configuration.

We need to copy the hostid file we created above into our new world.

# cp /etc/hostid /mnt/void/etc

Mount up! We ride!

# mount -t proc proc /mnt/void/proc
# mount -t sysfs sys /mnt/void/sys
# mount -B /dev /mnt/void/dev
# mount -t devpts pts /mnt/void/dev/pts

Ready to chroot.

# pwd
/root
# env -i HOME=/root TERM=$TERM chroot /mnt/void bash -l
# pwd
/

Yep. We’re in like flint. Check yer' saddle and confirm your mounts if you like.

# mount
rogue/ROOT/void on / type zfs (rw,relatime,xattr,noacl)
rogue/USERS/root on /root type zfs (rw,relatime,xattr,noacl)
rogue/USERS/mortalme on /home/mortalme type zfs (rw,relatime,xattr,noacl)
rogue/VOID/logs on /var/log type zfs (rw,relatime,xattr,noacl)
rogue/VOID/distfiles on /var/cache/xbps type zfs (rw,relatime,xattr,noacl)
rogue/VOID/local on /usr/local type zfs (rw,relatime,xattr,noacl)
rogue/VOID/src on /usr/src type zfs (rw,relatime,xattr,noacl)
proc on /proc type proc (rw,relatime)
sys on /sys type sysfs (rw,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,noexec,size=1978504k,nr_inodes=494626,mode=755)
pts on /dev/pts type devpts (rw,relatime,mode=600,ptmxmode=000)

Everything looks just dandy. Give your box a hostname.

# echo YOUR_HOSTNAME > /etc/hostname

Void’s BSD roots are evidenced in its use of rc.conf. I’ll set some basic stuff here. You can peruse the commented file for more suggestions.

# echo "TIMEZONE=\"America/Denver\"" >> /etc/rc.conf
# echo "KEYMAP=\"us\"" >> /etc/rc.conf
# echo "HARDWARECLOCK=\"UTC\"" >> /etc/rc.conf

Presuming you’re using glibc, configure your locale. If you’re a musl user, omit this step.

# cp /etc/default/libc-locales /etc/default/libc-locales.dist
# echo "en_US.UTF-8 UTF-8" > /etc/default/libc-locales
# echo "en_US ISO-8859-1" >> /etc/default/libc-locales
# xbps-reconfigure -f glibc-locales
glibc-locales: configuring ...
Generating GNU libc locales...
  en_US.UTF-8... done.
  en_US.ISO-8859-1... done.
glibc-locales: configured successfully.

Confirm that the ZFS modules are indeed loaded.

# lsmod | grep zfs
zfs                  2629632  7
zunicode              331776  1 zfs
zavl                   16384  1 zfs
zcommon                36864  1 zfs
znvpair                57344  2 zfs,zcommon
spl                    73728  3 zfs,zcommon,znvpair

Yippie yi yo kiyah!

Once upon a time there was talk of ditching ZOL’s use of a cachefile and you may have seen dated how-to’s that omit this. That plan was scrapped, however, so we need to set our zpool.cache.

# zpool set cachefile=/etc/zfs/zpool.cache rogue

# ll /etc/zfs
total 13
-rw-r--r-- 1 root root  165 May 15 13:20 vdev_id.conf.alias.example
-rw-r--r-- 1 root root  166 May 15 13:20 vdev_id.conf.multipath.example
-rw-r--r-- 1 root root  520 May 15 13:20 vdev_id.conf.sas_direct.example
-rw-r--r-- 1 root root  152 May 15 13:20 vdev_id.conf.sas_switch.example
drwxr-xr-x 2 root root   12 Jul  1 20:30 zed.d
-rwxr-xr-x 1 root root 9519 May 15 13:20 zfs-functions
-rw-r--r-- 1 root root 1712 Jul  1 23:14 zpool.cache

Set up some swap. The zvol block size should match your system’s page size. This will most likely be 4096 but it’s always wise to confirm.

# getconf PAGESIZE
4096
# zfs create -o sync=always -o primarycache=metadata -o secondarycache=none \
> -b 4k -V 8G -o logbias=throughput rogue/swap
# mkswap -f /dev/zvol/rogue/swap
Setting up swapspace version 1, size = 8 GiB (8589930496 bytes)
no label, UUID=1a9fa9cb-6ad3-4f22-b5e2-44b1c355cfc1

Edit your /etc/fstab using your editor of choice to end up with something that looks like this:

# See fstab(5).
#
# <file system>	<dir>	<type>	<options>		<dump>	<pass>
tmpfs		/tmp	tmpfs	defaults,nosuid,nodev   0       0
# zol swap vol
/dev/zvol/rogue/swap	none	swap	sw		0	0
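
If you want to sanity check the swap zvol before rebooting, you can light it up right from the chroot since swap lives kernel-side; the grep should show the zvol (as /dev/zd*) active. Just remember to swapoff again, else the zpool export we do later will refuse to cooperate.

# swapon /dev/zvol/rogue/swap
# grep zd /proc/swaps
# swapoff /dev/zvol/rogue/swap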

Rustlin' Up Some Grub

GRUB needs to know which dataset to boot from:

# zpool set bootfs=rogue/ROOT/void rogue

Let’s confirm this:

# zpool get bootfs rogue
NAME   PROPERTY  VALUE            SOURCE
rogue  bootfs    rogue/ROOT/void  local

We also need to tune our kernel command line for ZOL. Nowadays this is done via knobs in /etc/default/grub.

The Linux kernel attempts to optimize disk I/O via the use of various "elevator" schemes. We want to let ZFS handle its own I/O optimizations so we’ll use the noop elevator, which is basically a FIFO queue sans any extra logic.
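
Curious which elevator your disks are riding right now? The kernel will tell you; the bracketed entry is the active scheduler (your defaults may differ):

# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]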

Linux also presumes that the presence of a swap file means we’re super psyched about using power management features like suspend. Unlike traditional dedicated swap partitions, however, our ZOL swap device isn’t actually available for such. Hence, during bootup our kernel hangs on in quiet desperation for 120 seconds, hoping said missing swap dev will magically appear and it can resume being politically, if not technically, correct. This is a drag. So we’ll just give it a little spanking and tell it to get on with it by passing "noresume".

I prefer text to graphical terminals on boot. This is also a dedicated Linux workstation and I don’t want to screw around probing for other operating systems, so I’ll tweak those knobs as well. Fire up your editor of choice and caress your keyboard lovingly until you have something similar to what I’ve concocted below:

# cat /etc/default/grub
#
# Configuration file for GRUB.
#
GRUB_DEFAULT=0
#GRUB_HIDDEN_TIMEOUT=0
#GRUB_HIDDEN_TIMEOUT_QUIET=false
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Void"
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=4 elevator=noop noresume"
# Uncomment to use basic console
#GRUB_TERMINAL_INPUT="console"
# Uncomment to disable graphical terminal
GRUB_TERMINAL_OUTPUT=console
#GRUB_BACKGROUND=/usr/share/void-artwork/splash.png
#GRUB_GFXMODE=1920x1080x32
#GRUB_DISABLE_LINUX_UUID=true
#GRUB_DISABLE_RECOVERY=true
GRUB_DISABLE_OS_PROBER=true

Remember those minor pain points about using disk by-id I alluded to when creating our pool? Well, they’ve just come home to roost. Turns out grub-probe/grub-install is too brain dead to actually recognize any of the /dev/disk/by-* options. Instead, it probes zpool status, prepends "/dev" to the device names returned thereupon, and pukes up the infamous "failed to get canonical path of /dev/ata-driveid-partX" error message.

So we’ll outfool the hapless fool by grabbing our vdevs:

# zpool status
  pool: rogue
 state: ONLINE
  scan: none requested
config:

	NAME                                                 STATE     READ WRITE CKSUM
	rogue                                                ONLINE       0     0     0
	  mirror-0                                           ONLINE       0     0     0
	    ata-WDC_WD7500AYYS-01RCA0_WD-WCAPT0562110-part1  ONLINE       0     0     0
	    ata-WDC_WD7500AYYS-01RCA0_WD-WCAPT0577337-part1  ONLINE       0     0     0

errors: No known data errors

And creating the requisite symlinks: [14]

# ln -s /dev/sda1 /dev/ata-WDC_WD7500AYYS-01RCA0_WD-WCAPT0562110-part1
# ln -s /dev/sdb1 /dev/ata-WDC_WD7500AYYS-01RCA0_WD-WCAPT0577337-part1
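
As footnote [14] mentions, this chore recurs on every kernel or GRUB update, so you may prefer to script it. A rough sketch that derives the links from zpool status, assuming the pool was built from by-id names as above:

# for dev in $(zpool status rogue | awk '/-part[0-9]/ {print $1}'); do
>     ln -sf "$(readlink -f /dev/disk/by-id/$dev)" "/dev/$dev"
> done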

grub-probe should now be able to deduce we’re using ZFS:

# grub-probe /
zfs

We’re finally ready to install GRUB:

# grub-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.
# grub-install /dev/sdb
Installing for i386-pc platform.
Installation finished. No error reported.

Turnin' Dracut Loose

Voidlinux uses dracut. We need to tune a couple things for ZOL. I am new to dracut and there may well be things I missed. See man dracut(8) and/or man dracut.conf(5) for enlightenment.

# echo "hostonly=\"yes\"" > /etc/dracut.conf.d/zol.conf
# echo "nofsck=\"yes\"" >> /etc/dracut.conf.d/zol.conf
# echo "add_dracutmodules+=\" zfs \"" >> /etc/dracut.conf.d/zol.conf
# echo "omit_dracutmodules+=\" btrfs resume \"" >> /etc/dracut.conf.d/zol.conf

Setting "hostonly" tells dracut to skip all the extraneous needful things bundled into a generic kernel and generate an initramfs trimmed for booting our host’s particular hardware potpourri.

ZOL doesn’t use fsck, so that bit should be self-explanatory.

I don’t think the add_dracutmodules line is required in light of our "hostonly" but it doesn’t hurt to explicitly ensure inclusion of the zfs and spl modules, especially since on this rig they will be essential for booting. As this is my first tour with dracut I’ll err on the side of caution.

Alas, for reasons that elude me, Void devs have included btrfs (shudder…​) in their default kernel. Oh well, nobody’s perfect. We shall endeavor to reform their evil ways. In the meantime, I am unsure whether including "btrfs" in the "omit_dracutmodules" line has any effect after having specified "hostonly", but explicitly excluding the mess that is btrfs makes me feel better and certainly doesn’t hurt. [15]

Recall that 120s resume hang I mentioned? Omitting the resume module from our configuration is another solution to this issue and provides an additional benefit of shaving a few more KiB from our initramfs.

Sometimes I have fat fingers so I like to confirm I done did what I thought I did…​

# cat /etc/dracut.conf.d/zol.conf
hostonly="yes"
nofsck="yes"
add_dracutmodules+=" zfs "
omit_dracutmodules+=" btrfs resume "

We’re ready to regenerate our kernel and initramfs. As of this writing, Voidlinux is on a 4.6 kernel. As of your reading, you may not be. Beware of mindless copy pasta. Let 'er buck! [16]

# xbps-reconfigure -f linux4.6
linux4.6: configuring ...
Executing post-install kernel hook: 10-dracut ...
Executing post-install kernel hook: 20-dkms ...
Available DKMS module: spl-0.6.5.7.
Available DKMS module: zfs-0.6.5.7.
Building DKMS module: spl-0.6.5.7... done.
Building DKMS module: zfs-0.6.5.7... done.
Executing post-install kernel hook: 50-grub ...
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.6.3_1
Found initrd image: /boot/initramfs-4.6.3_1.img
done
linux4.6: configured successfully.

Awesome ;D

Note the hook that just reconfigured GRUB and generated your grub.cfg. Interested parties can review my grub.cfg for troubleshooting purposes. Heck, feel free to go for it anyway just because it’s so darn mesmerizing ;D

We’re almost ready to reboot into our shiny new ZOL workstation, but let’s do a couple final sanity checks first.

# lsinitrd -m
Image: /boot/initramfs-4.6.3_1.img: 7.6M
================================================
Version:

dracut modules:
bash
dash
caps
i18n
drm
kernel-modules
zfs
rootfs-block
terminfo
udev-rules
usrmount
base
fs-lib
shutdown
================================================

Cool. Our initramfs does indeed include the requisite zfs dkms module. Let’s double check our zpool.cache.

# lsinitrd | grep zpool.cache
-rw-r--r--   1 root     root         1712 Jul  1 23:14 etc/zfs/zpool.cache

We need to set a root password before bailing out of our chroot environment.

# passwd root
New password:
Retype new password:
passwd: password updated successfully
# exit
# pwd
/root

And umount some mounts just to keep things copacetic…​

# umount -n /mnt/void/dev/pts
# umount -n /mnt/void/dev
# umount -n /mnt/void/sys
# umount -n /mnt/void/proc

# mount | grep void
rogue/ROOT/void on /mnt/void type zfs (rw,relatime,xattr,noacl)
rogue/USERS/root on /mnt/void/root type zfs (rw,relatime,xattr,noacl)
rogue/USERS/mortalme on /mnt/void/home/mortalme type zfs (rw,relatime,xattr,noacl)
rogue/VOID/logs on /mnt/void/var/log type zfs (rw,relatime,xattr,noacl)
rogue/VOID/distfiles on /mnt/void/var/cache/xbps type zfs (rw,relatime,xattr,noacl)
rogue/VOID/local on /mnt/void/usr/local type zfs (rw,relatime,xattr,noacl)
rogue/VOID/src on /mnt/void/usr/src type zfs (rw,relatime,xattr,noacl)

Export our pool.

# zpool export rogue

And reboot into our new rig!

# shutdown -r now

Broadcast message from root@void-live (pts/0) (Fri Jul  1 23:19:39 2016):

system is going down NOW

After rebooting into our shiny new Voidlinux box, we need to enable some basic networking services. I’ll be using DHCP. I also like being able to ssh into my boxes. Season runit to taste and create symlinks as appropriate.

# ln -s /etc/sv/dhcpcd /var/service/
# ln -s /etc/sv/sshd /var/service/
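
runit should notice the new links within a few seconds. Confirm with sv; you want to see a "run:" line for each service:

# sv status dhcpcd sshd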

Now the real journey begins. Before venturing too far afield, you may want to take a snapshot of your fresh tableau noir.

# zfs snapshot rogue/ROOT/void@2016070100-freshie
# zfs list -t snapshot
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
rogue/ROOT/void@2016070100-freshie      0      -   316M  -

Awesome sauce, indeed!
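
Should some future experiment go sideways, returning to this pristine state is a one-liner, though for the root dataset it’s best done from the live media:

# zfs rollback rogue/ROOT/void@2016070100-freshie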

Congratulations! Welcome to the Void!

Stay tuned for exciting adventures in fleshing this puppy out ;D

Terminal Recordings

I captured two terminal recordings that closely follow the steps outlined above. Exceptions are noted below.

  1. Soaking Wet takes you from GPT partitioning your drives through installing Voidlinux. This recording creates a pool using a single drive. A second drive can easily be added later to create a mirror. The ef02 partition was kept to 1M.

  2. Riding Into takes up from there through being ready to reboot into your freshie Voidlinux install. The swap volume creation step was omitted as it had been done previously. Swap can be set up whenever, so I wouldn’t sweat it.

Colophon

Is Voidlinux for you? If you read my "The Challenge To Be Free" article you no doubt realize that Voidlinux ticks a lot of my boxes. So what’s not to like? Well, candidly speaking, documentation is minimal, at best. Be prepared to do your homework, be self reliant, and spend some effort troubleshooting. In addition to the wiki, Void sports a forum and a small but active irc channel on freenode. In the weeks I have been hanging out on #voidlinux, I have noted an influx of new users, many of whom seem to be looking to escape the tyranny of systemd. The other main incoming cohort tends to be from the embedded device camp. Some of these folks are quite frankly on the bleeding edge of newbie and have invested minimal to zero effort in reading documentation. I’m truly impressed by the patience of Void’s elder statesmen and I’d like to take this opportunity to give them a big shout out! You know who you are. I have also noted some of these new users giving back and the wiki getting some love. Kudos to you as well! I think there is opportunity for a healthy community here. Don’t be a taker and exploit it, lest we burn helpers out. [17]

As for Gentoo? It’s still on my short, short list and I may end up there yet. I’m going to spend some time exploring Voidlinux before I make that call.


1. Void devs do not share my disdain for systemd and once upon a time even used systemd until it started borking up musl builds.
2. It’s speculated that xbps stands for "xtraeme binary package system", after the lead developer, xtraeme, a.k.a. Juan RP. I have no clue how to pronounce xtraeme. I associate Spain with Ibiza, however, and wonder if it might be "eXTRA E ME". I just call it the "eXtra Bitchin' Package System".
3. Subsequent to publishing this article, I was pointed to a ZFS install medium, heretofore unknown to me, maintained by one of the Void developers. I’ve not used it but suspect it’s much more likely to be maintained over the long term with Void’s latest and greatest bits. You may well want to grab it instead.
4. I am ssh’d into my target install box via an Emacs shell. Code listings may not be formatted exactly the same as what you see if you’re using a terminal, but should be close enough for you to follow along.
5. See Rod’s Books for more than you ever wanted to know about GPT and UEFI.
6. Smartmontools is included on the iso I built. Otherwise you may need to install it.
7. The ONE time I got bit by ZFS was on an early OpenSolaris setup using 4 drives and multiple pools. Something went wonky with an IPS update/clone operation, resulting in my drives being renumbered and my pools horked to the point I could not even boot. I probably could have recovered from this but it occurred during my early days with both ZFS and OpenSolaris and I lacked the requisite skill. Moral of this story is that shit does happen.
8. In theory this should ensure unique hostids but I have heard reports of duplicate MAC addresses. I don’t view this as likely but it’s something to be aware of if you’re running a huge server farm.
9. Other guides you may encounter around the net are fond of "zroot" to denote a ZFS root pool, mirroring the common convention amongst btrfs (shudder…​) aficionados of naming their root pools "broot". This then sets up the use of "zswap" for swap space, which I consider ill-advised, since zswap is something else entirely.
10. Prior to transitioning to runit, Void was infested by the systemd virus and, in a break from BSD* tradition, was apparently seduced into using symlinks pointing /bin and /sbin to /usr/bin. Hence, you most likely do NOT want to break /usr/bin out onto a separate dataset unless you have good reason, know what you’re doing, and are prepared for some pain points. Praise the gods the Void Dev Heads came to their senses sooner rather than later and we have this otherwise sweet distro. Party on, Wayne.
11. I use a convention of upcasing names of top level containers that will themselves never be mounted. This is a matter of personal taste. Use whatever trips your trigger. ZFS doesn’t care.
12. See Systemd/journald and Legacy Mounts for further elaboration.
13. Had I been slightly quicker on the draw I might have included "zfs" in my xbps-install command and saved myself the additional step of doing so in the "Configuring Voidlinux" section. Being new to Voidlinux, however, I was unsure what hooks xbps might run and played it conservatively. I’ve since been advised that it would have been fine to have included it here.
14. GRUB is like the gift that keeps on giving and we’ll have to recreate these every time we do a kernel or grub update. Else maybe create some eudev rules or script it. Reportedly you can also make symlinks from /dev/disk/by-id/$ID to /dev/$ID. I seem to recall trying this somewhere along the way and it not working for me. Maybe I fat fingered something. In any case, for the time being at least, I’m okay with creating them as needed as a safety check in the event I fail to scrutinize xbps updates as closely as I should.
15. This doesn’t exclude btrfs from Void’s vmlinuz but we’ll save that remedy for another day.
16. Son, you just gotta Let 'er Buck and git 'er done!
17. Which, in my estimation, has unfortunately happened to #archlinux. More often than not, it’s pretty hostile and toxic towards those not part of the "in club". This is not to say that there are not some pretty cool people there. No offense but I calls 'em like I see 'em.
Tags: voidlinux zfsonlinux guides