Trying btrfs... and giving up for now
I recently tried to use btrfs to manage some data. My main use case was incremental backups relying on btrfs's snapshot feature. Indeed, btrfs allows you to:
- create snapshots of your filesystem at various points in time: the snapshots essentially take no additional space, except that files of that FS will not really be deleted as long as they survive in a snapshot;
- send snapshots to remote hosts, even computing efficiently the diff between each snapshot and the previous one, to minimize IO, bandwidth, and backup storage costs;
- browse old snapshots seamlessly as if they were actual folders, and restore files easily.
This is much better than my current solution, which is to use rsync. Indeed, by contrast, rsync has the following drawbacks:
- rsync only synchronizes the current version (overwriting the previous one), and if you want to keep multiple versions they are not compressed relative to each other;
- each rsync must rescan the entire filesystem, even if almost nothing has changed;
- rsync is not always intelligent about transfers: it tries to avoid re-sending files that haven't changed, but it receives no help from the FS to understand what went on. For instance, if you move a large directory on the master, in most cases rsync will fail to notice and will re-transfer the whole directory to the backup.
This post is a terse documentation of what I have learnt about btrfs in the process of exploring it. Sadly, the main outcome of my investigations is that btrfs does not seem sufficiently mature for me to use it yet. I am sorry about the negative conclusion: I think that btrfs is a great project and I imagine that the remaining rough edges will eventually be fixed. Further, the good news is that (as far as I can tell) I only encountered crashes, not any data loss.
General considerations about btrfs
So here are some general things about btrfs that I discovered when playing around:
- btrfs supports transparent file compression with zlib and lzo. This is done by passing an option to mount. I am not too sure about what happens if you forget to pass this option, or pass the wrong value for it. It seems to work fine, though.
- btrfs supports deduplication, but it turns out that this did not mean what I thought it would. Unlike, e.g., git repositories, if you write data to the disk which happens to already exist someplace else, btrfs will not notice it and use it to share space. What it means is that btrfs supports copy-on-write, i.e., when you write data on the FS that comes from another file of the FS, then btrfs will only save a pointer to the old data, and will not create two different copies until one piece of data is modified. This implies that, if you want to deduplicate data which has not been created using copies, you need to do it offline with specific tools: btrfs does not support it out of the box. I tried bedup, which was quite slow; its savings amounted to 110 GB out of 2.6 TB of data when I tested it on a partition. (Of course, your mileage may vary.) It is quite worrying that the deduplication tools (in particular, bedup) do not seem very sure of what they are doing, so this does not give at all the impression of being robust.
- btrfs supports many nice features that I didn't need: splitting a FS across multiple devices (with replication or not), adding/removing devices on the fly, performing resizes online, etc. I did not try these features out.
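Transparent compression (mentioned above) is enabled at mount time; a possible /etc/fstab line could look like the following, where the UUID and mount point are placeholders for the example:

```
# /etc/fstab -- mount a btrfs volume with transparent lzo compression
# (UUID and mount point are placeholders; compress= accepts zlib or lzo)
UUID=0123abcd-0000-0000-0000-000000000000  /data  btrfs  defaults,compress=lzo  0  0
```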
Here are things you will need to know when trying out btrfs, and traps in which you may fall:
- The btrfs utilities are not shipped with Debian by default; you need to apt-get install btrfs-tools.
- If you want to start playing with btrfs, you will probably want to convert data from an ext3 or ext4 partition. There is a tool designed to do that, btrfs-convert, but closer inspection reveals that it is now reported to be unreliable. As I didn't want to build the FS on shaky foundations, I created a partition from scratch and moved my terabytes of data around.
- When creating test filesystems, note that you cannot create btrfs filesystems that are too small (apparently, less than 100 MB), and you will get a confusing error message if you try.
- btrfs exposes quite a lot of its internals which you apparently may need to be aware of. In particular, you may have to defragment it [1]. It seems that you may also need to balance the filesystem (amongst other things) to avoid problems when it is full.
- btrfs makes it possible to have subvolumes which you can mount independently. In other words, if your disk contains games and music, you could imagine having a subvolume games/ and a subvolume music/, and mounting only one of the two (or mounting them at different mount points). In this case, if you mount the root of the filesystem, games/ and music/ will appear as folders (which are actually different filesystems). This means that you should be careful when starting to organize your filesystem: the root of the filesystem doesn't play the same role as in other filesystems, and you should probably always be mounting a subvolume of it instead. If you miss this point initially and want to change your mind later, it's not so simple.
- While btrfs supports copy-on-write, cp will not use it by default. You need to pass the option --reflink=always to cp, as explained in this FAQ entry. This is a bit unpleasant because it means that scripts must use cp properly to take advantage of copy-on-write, and that other programs will not necessarily support it. In particular, rsync does not, for now.
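To illustrate the cp behaviour described above, here is a minimal sketch (paths are made up). Note that --reflink=always fails loudly on filesystems that cannot share extents, which is why the sketch uses --reflink=auto so it also runs outside btrfs; on btrfs, both produce an instant clone that shares the data on disk.

```shell
# Copy-on-write copies with GNU cp (paths are examples).
# --reflink=always errors out on filesystems without CoW support;
# --reflink=auto silently falls back to a regular copy instead.
dir=$(mktemp -d)
head -c 1048576 /dev/urandom > "$dir/big.bin"
cp --reflink=auto "$dir/big.bin" "$dir/clone.bin"  # instant CoW clone on btrfs
cmp -s "$dir/big.bin" "$dir/clone.bin" && echo "contents identical"
rm -r "$dir"
```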
Incremental backups: snapshotting, sending, receiving
Now, here is more about my specific experience with subvolumes, snapshots, and btrfs send and btrfs receive, which were the main features I was interested in. In summary, here are the basic principles:
- You can run btrfs subvolume snapshot foo/ snap/ to create a snapshot of foo/ as snap/. This creates snap/ as a folder (but it's actually a different subvolume), which contains the contents of foo/ (using copy-on-write, so without duplicating the actual contents on disk). For backups, you want to create read-only snapshots (btrfs subvolume snapshot -r). If you create snapshots at different points in time, you do not need (and cannot) tell btrfs subvolume snapshot which ones are the most recent; however, for your own purposes, you should probably indicate it in the volume name. You can be quite trigger-happy with snapshots: I created one every hour for weeks without any problem.
- You can run btrfs send snap/ to produce on standard output a representation of the (read-only) snapshot snap/. Alternatively, you can run btrfs send -p old_snap/ snap/ to prepare efficiently a compressed representation of snap/ that relies on old_snap/. I tested that, indeed, when the difference from old_snap/ to snap/ is that a very large folder was moved, btrfs send is able to create a very concise representation in comparatively little time.
- You can run btrfs receive snapshots/, where snapshots/ is in the backup FS, to read on standard input a dump produced by btrfs send, and create in snapshots/ the corresponding snapshot (here, snap/: the name depends on what btrfs send is sending). Of course, the backup FS can be on a different machine: you can pipe the stream across ssh, or simply store it to a file and move that from one machine to the other.
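Putting these principles together, a minimal sketch of one backup cycle could look as follows. All paths, snapshot names, and the host name "backup" are invented for the example, everything runs as root, and the sketch is wrapped in a function so it can be read and sourced without actually invoking btrfs:

```shell
# Sketch of an incremental backup cycle with btrfs send/receive.
# /data is assumed to be a btrfs subvolume; "backup" is a hypothetical host.
backup_cycle() {
  parent="$1"   # previous snapshot path, empty on the very first run
  snap="$2"     # new read-only snapshot to create and send
  btrfs subvolume snapshot -r /data "$snap"
  sync          # make sure the snapshot is committed to disk before sending
  if [ -z "$parent" ]; then
    # first run: full send
    btrfs send "$snap" | ssh backup btrfs receive /backup/snapshots
  else
    # later runs: send only the diff against the parent snapshot
    btrfs send -p "$parent" "$snap" | ssh backup btrfs receive /backup/snapshots
  fi
}
# Example usage:
#   backup_cycle "" /data/.snapshots/2016-05-01
#   backup_cycle /data/.snapshots/2016-05-01 /data/.snapshots/2016-05-02
```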
That's the theory. Now, details and traps. First, about snapshot creation:
- When creating snapshots periodically, it is quite easy to end up with filesystems with a very large number of files (which are very similar copies of the same hierarchy). This is very undesirable, e.g., for locate: I had updatedb wasting lots of CPU and disk space indexing a large number of these snapshots and polluting my locate results. You'll want to tell updatedb not to explore the snapshot folder, using the setting PRUNEPATHS in /etc/updatedb.conf.
- In terms of access rights, you do not need to be root to create a snapshot (or subvolume). Indeed, if you couldn't read some files in the source, you will still be unable to read them from the snapshot. However, deleting subvolumes is not possible as an unprivileged user unless you pass a specific mount option: I am not sure of the implications of this, and in particular I do not know why it is not the default. Further, deleting subvolumes that were created to be read-only requires a specific step to make them writable. Another thing to understand is that removing a subvolume, whether as root or otherwise, using rm will fail with Operation not permitted; this is a different error than the usual Permission denied, but a possible source of confusion. You should use btrfs subvolume delete instead.
- Having snapshots also makes it quite complicated to understand where your disk space is going. Is it used by files currently in your FS? Or by files deleted in the FS but retained because of some snapshot? If so, which snapshot(s)? How much space would you reclaim by removing a given snapshot, or, say, all snapshots older than one month? To answer such questions, you need to use (and in particular enable) btrfs's quota support. But even then it is not very obvious to figure all of this out.
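For the updatedb problem above, the fix is a single line in /etc/updatedb.conf; the snapshot path below is an example, and you should keep your distribution's existing entries:

```
# /etc/updatedb.conf -- stop updatedb from indexing snapshots
# (add your own snapshot directory to the distribution's list)
PRUNEPATHS="/tmp /var/spool /media /data/.snapshots"
```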
About sending and receiving snapshots:
- btrfs send requires root, even for snapshots that you created. This is unsurprising: remember that you can snapshot files that you cannot read, and of course you shouldn't be able to read them from the output of btrfs send.
- You should not interrupt btrfs send or btrfs receive, either with SIGSTOP or by putting the computer in hibernation mode. If you do so, the operation will fail. In particular, an incomplete copy of the subvolume will stay around on the receiving end, which can easily mislead you and make you believe that the operation succeeded. Apparently, btrfs is smart enough to notice that the copy is incomplete (in particular, fortunately, refusing to use it as a parent to reconstruct another snapshot), but it is not sufficiently intelligent to delete the leftover files or (preferably) to resume the operation from where it left off, like rsync does. This means that, in practice, you probably want to snapshot often and have relatively small diffs between snapshots. Also note that btrfs send and btrfs receive give no progress information when they run.
- Once you have created snapshots and you want to transfer them to the backup host, the problem is figuring out which backup depends on which, and what to send. You can only choose this at the level of btrfs send: snapshot creation does not need a parent, and btrfs receive is apparently able to use some ID specified in the btrfs send invocation to identify which volume it should use (or fail if a suitable volume does not exist, although I don't know whether this check is bulletproof or not).
- Hence, when sending snapshots, btrfs leaves you free to choose the right set of send operations with the right parents to minimize IO and network cost. A program called buttersink attempts to do this, i.e., to choose an intelligent sequence of transfers. For my use case, sadly, it did not work. This is pretty surprising, as my case is quite simple: a series of chronological snapshots, each of which should be sent based on the previous one. Maybe the reason is that buttersink does not know in which order the snapshots were made, and relies on a size estimation of the diff between two btrfs snapshots, which apparently is both slow to compute and wildly inaccurate.
So I wrote instead a much simpler script which orders the snapshots by date (as indicated in their names) and sends them in that order. There probably exist more elaborate tools for this purpose, like btrbk, which I did not test.
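Such a script can be sketched as follows. The paths, the host name "backup", and the snapshot naming scheme are assumptions for the example; the key point is that names like 2016-05-01_13-00 sort chronologically, so a plain lexicographic sort gives the transfer order. The sketch is wrapped in a function so it can be sourced without btrfs installed:

```shell
# Send snapshots in chronological order, each with the previous one as parent.
# Assumes snapshot names sort chronologically (e.g. 2016-05-01_13-00).
send_all_snapshots() {
  snapdir="$1"   # e.g. /data/.snapshots
  prev=""
  for snap in $(ls "$snapdir" | sort); do  # lexicographic = chronological here
    if [ -z "$prev" ]; then
      # no parent known yet: full send of the oldest snapshot
      btrfs send "$snapdir/$snap" | ssh backup btrfs receive /backup/snapshots
    else
      # incremental send against the previous snapshot
      btrfs send -p "$snapdir/$prev" "$snapdir/$snap" \
        | ssh backup btrfs receive /backup/snapshots
    fi
    prev="$snap"
  done
}
```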
Messy problems
And finally, here are the nasty problems I ran into. When running my script to perform the transfers, and disconnecting hard drives at random points to simulate messy hardware failures, I observed the following:
- Backtraces in syslog suggesting a problem with btrfs (even during normal operation, I think):
kernel: [52053.405416] ------------[ cut here ]------------
kernel: [52053.405456] WARNING: CPU: 0 PID: 12046 at /build/linux-HoPide/linux-4.5.1/fs/btrfs/qgroup.c:2650 btrfs_qgroup_free_meta+0x88/ 0x90 [btrfs]()
kernel: [52053.405459] Modules linked in: ufs(E) qnx4(E) hfsplus(E) hfs(E) minix(E) ntfs(E) vfat(E) msdos(E) fat(E) jfs(E) xfs(E) libcrc32c(E) crc32c_generic(E) vboxpci(OE) vboxnetadp(OE) vboxnetflt(OE) vboxdrv(OE) veth(E) ebtable_filter(E) ebtables(E) xt_conntrack(E) ipt_MASQUERADE(E) nf_nat_masquerade_ipv4(E) xt_addrtype(E) br_netfilter(E) bridge(E) stp(E) llc(E) overlay(E) pci_stub(E) nfsd(E) auth_rpcgss(E) nfs_acl(E) lockd(E) grace(E) sunrpc(E) fuse(E) ip6t_REJECT(E) nf_reject_ipv6(E) ip6table_filter(E) ip6_tables(E) iptable_nat(E) nf_conntrack_ipv4(E) nf_defrag_ipv4(E) nf_nat_ipv4(E) nf_nat(E) nf_conntrack(E) ipt_REJECT(E) nf_reject_ipv4(E) xt_tcpudp(E) xt_owner(E) xt_multiport(E) iptable_filter(E) ip_tables(E) x_tables(E) binfmt_misc(E) quota_v2(E) quota_tree(E) dm_crypt(E) algif_skcipher(E) af_alg(E) snd_hda_codec_hdmi(E) uas(E) usb_storage(E) iTCO_wdt(E) iTCO_vendor_support(E) ppdev(E) intel_rapl(E) x86_pkg_temp_thermal(E) intel_powerclamp(E) kvm_intel(E) kvm(E) irqbypass(E) crct10dif_pclmul(E) crc32_pclmul(E) ghash_clmulni_intel(E) hmac(E) dm_mod(E) drbg(E) ansi_cprng(E) aesni_intel(E) aes_x86_64(E) lrw(E) gf128mul(E) glue_helper(E) ablk_helper(E) cryptd(E) uvcvideo(E) pcspkr(E) snd_hda_codec_realtek(E) sg(E) serio_raw(E) snd_hda_codec_generic(E) videobuf2_vmalloc(E) i2c_i801(E) i915(E) videobuf2_memops(E) videobuf2_v4l2(E) videobuf2_core(E) drm_kms_helper(E) snd_usb_audio(E) videodev(E) snd_hda_intel(E) media(E) snd_hda_codec(E) snd_usbmidi_lib(E) snd_rawmidi(E) snd_hda_core(E) snd_seq_device(E) snd_hwdep(E) snd_pcm_oss(E) drm(E) snd_mixer_oss(E) snd_pcm(E) snd_timer(E) evdev(E) joydev(E) lpc_ich(E) snd(E) mei_me(E) cdc_acm(E) mfd_core(E) i2c_algo_bit(E) soundcore(E) mei(E) shpchp(E) 8250_fintek(E) battery(E) parport_pc(E) parport(E) video(E) soc_button_array(E) tpm_infineon(E) tpm_tis(E) button(E) tpm(E) processor(E) it87(E) hwmon_vid(E) coretemp(E) loop(E) autofs4(E) ext4(E) crc16(E) mbcache(E) jbd2(E) btrfs(E) xor(E) raid6_pq(E) sr_mod(E) cdrom(E) 
sd_mod(E) hid_generic(E) usbhid(E) hid(E) crc32c_intel(E) ahci(E) libahci(E) r8169(E) psmouse(E) libata(E) xhci_pci(E) ehci_pci(E) xhci_hcd(E) ehci_hcd(E) scsi_mod(E) mii(E) usbcore(E) usb_common(E) fan(E) thermal(E) fjes(E) [last unloaded: vboxdrv]
kernel: [52053.405581] CPU: 0 PID: 12046 Comm: rsync Tainted: G W OE 4.5.0-1-amd64 #1 Debian 4.5.1-1
kernel: [52053.405583] Hardware name: Gigabyte Technology Co., Ltd. H87M-HD3/H87M-HD3, BIOS F3 05/09/2013
kernel: [52053.405585] 0000000000000286 000000008e92a6d5 ffffffff81307b65 0000000000000000
kernel: [52053.405589] ffffffffc02f15b0 ffffffff8107905d ffff8800a959e800 0000000000004000
kernel: [52053.405592] ffff8800a959e800 0000000000004000 0000000000000002 ffffffffc02cfaf8
kernel: [52053.405595] Call Trace:
kernel: [52053.405605] [<ffffffff81307b65>] ? dump_stack+0x5c/0x77
kernel: [52053.405610] [<ffffffff8107905d>] ? warn_slowpath_common+0x7d/0xb0
kernel: [52053.405630] [<ffffffffc02cfaf8>] ? btrfs_qgroup_free_meta+0x88/0x90 [btrfs]
kernel: [52053.405650] [<ffffffffc0268702>] ? start_transaction+0x3e2/0x4a0 [btrfs]
kernel: [52053.405668] [<ffffffffc026e507>] ? btrfs_dirty_inode+0x97/0xc0 [btrfs]
kernel: [52053.405672] [<ffffffff81205538>] ? touch_atime+0xa8/0xd0
kernel: [52053.405676] [<ffffffff8116d7bd>] ? generic_file_read_iter+0x63d/0x790
kernel: [52053.405681] [<ffffffff811ee2b1>] ? cp_new_stat+0x151/0x180
kernel: [52053.405683] [<ffffffff811e8913>] ? new_sync_read+0xa3/0xe0
kernel: [52053.405686] [<ffffffff811e9101>] ? vfs_read+0x81/0x120
kernel: [52053.405689] [<ffffffff811ea072>] ? SyS_read+0x52/0xc0
kernel: [52053.405693] [<ffffffff815b6ab2>] ? system_call_fast_compare_end+0xc/0x67
kernel: [52053.405695] ---[ end trace 6c76a866f1f3e28c ]---
kernel: [52053.790081] ------------[ cut here ]------------
- At one point, when I disconnected a hard drive that contained a mounted btrfs system, an instant hard reset of the machine (!).
- Messed-up filesystems where some operations would apparently take forever (e.g., subvolume delete, on the target of the transfer), during which mysterious processes like btrfs-cleaner and btrfs-transaction were performing varying levels of CPU/IO, and the lagging operation could not be aborted with SIGINT. I saw no way to find out what these processes were trying to do.
- Even weirder filesystems with which the entire machine started being unresponsive and OOM-ing for unclear reasons, around 2 hours after they had been mounted. I eventually had the idea of checking slabtop, which showed that the kernel was filling the RAM (8 GB) with its own structures, presumably because of some sisyphean operations that btrfs was currently undertaking on them.
- While the above happened, in some cases, syslog got flooded with messages about btrfs, filling up my root partition.
This is where I give up: even though I would very much like to have incremental backups at the FS level, for now, I do not feel comfortable handing over my data to a FS that suffers from this kind of problem. I know that, in principle, I should try to report the bugs to developers and help fix these issues, but sadly I do not feel I can invest the time and effort to help debug an FS before I can use it. Note that I did not even do very ambitious things: essentially just snapshots, send, and receive, randomly disconnecting the devices at various stages of the process. So maybe it could be straightforward to reproduce the problems I ran into.
So I'm back to rsync for now, and I'll have to investigate incremental backup programs that are smarter than rsync but do not rely on collaboration from the FS, e.g., Borg. Or maybe I could try ZFS...
[1] It's very funny to hear that btrfs must be defragmented, when you have heard for years the propaganda "only Microsoft file systems must be defragmented, because they are inferior"...