Recovering from btrfs errors
- 27 Jun 2017: Post was created
After vacuuming under my desk, my SSD went into read-only mode while the system was running. I have an LVM on LUKS and btrfs setup, and after rebooting, it went straight into btrfs emergency rescue mode.
Before continuing, I mirrored the drive to a backup drive using a 2048-byte block size. I chose that block size because it is what I determined to be the sector size of the SSD; whether that assumption is correct, I can't guarantee.
# dd if=/dev/sda of=/dev/sdb bs=2048
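If you'd rather check than guess, the logical and physical sector sizes can be read out before copying; for example, assuming the SSD is still /dev/sda:
# blockdev --getss /dev/sda
# blockdev --getpbsz /dev/sda
The first reports the logical sector size and the second the physical one. The dd block size mostly affects copy speed anyway, so this is more of a sanity check than anything critical.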
I created a btrfs filesystem on a USB pen drive, as the btrfs recovery shell supported neither vfat nor ext4. I inserted the pen drive into the recovering computer and mounted it. I had two other disks connected, so the USB drive became sdc.
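Creating that filesystem on the stick is just a plain mkfs; roughly, assuming the stick shows up as /dev/sdX1 on whatever machine you prepare it on (check with lsblk first):
# mkfs.btrfs -f /dev/sdX1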
# mkdir /usbmnt
# mount /dev/sdc1 /usbmnt
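For /dev/mapper/lvmvg-rootvol to exist in the rescue environment, the LUKS container has to be opened and the LVM volume group activated. If that hasn't happened yet, it's roughly this (the partition and the mapping name are assumptions based on my setup):
# cryptsetup open /dev/sda2 cryptlvm
# vgchange -ay lvmvg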
Then I attempted to recover my git repositories, which I, of course, hadn't pushed back to origin:
# btrfs restore -c --path-regex '^/(|home/(|/sshow(|/repos/(|/.*))))$' \
    /dev/mapper/lvmvg-rootvol /usbmnt
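If you're unsure whether a path-regex matches what you expect, btrfs restore can do a dry run that only lists what would be restored, which is a lot cheaper than copying the wrong thing:
# btrfs restore -D -v --path-regex '^/(|home/(|/sshow(|/repos/(|/.*))))$' \
    /dev/mapper/lvmvg-rootvol /usbmnt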
And my dotfiles (this also copies over all directories starting with a dot, but not their contents, which is okay for me):
# btrfs restore -c --path-regex '^/(|home/(|/sshow(|/\.\w*)))$' \
    /dev/mapper/lvmvg-rootvol /usbmnt
And I want my .config directory too:
# btrfs restore -c --path-regex '^/(|home/(|/sshow(|/.config(|/.*))))$' \
    /dev/mapper/lvmvg-rootvol /usbmnt
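Before touching the original disk any further, a quick sanity check of the restored repositories doesn't hurt. A rough sketch, assuming they landed under /usbmnt/home/sshow/repos:
# for repo in /usbmnt/home/sshow/repos/*/; do (cd "$repo" && git fsck); done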
IMPORTANT: At this point I have the backup I want. From here on I'm figuring btrfs out by blindly attempting recovery options, with no fear of permanently damaging my data or the drive, even though that might actually happen.
Now that I have the backup, I'll try to run the repair:
# btrfs check --repair /dev/mapper/lvmvg-rootvol
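For comparison, the plain read-only check is the same command without --repair; it reports errors without writing anything to the disk:
# btrfs check /dev/mapper/lvmvg-rootvol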
The --repair run outputted the same errors as the plain check without --repair, so I stepped up my faith game and attempted --init-csum-tree and --init-extent-tree as well. I haven't even read up on them, but according to the errors from the plain check, this is exactly what the disk was having problems with.
# btrfs check --repair --init-csum-tree --init-extent-tree /dev/mapper/lvmvg-rootvol
This run ended with success, reporting 0 errors. Time for a reboot.
It went pretty well, and the system is booting properly again.
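With the filesystem mounting again, a scrub is a cheap way to have btrfs re-verify checksums across the whole filesystem and report anything that is still broken. A quick sketch, assuming the root filesystem is mounted at /:
# btrfs scrub start /
# btrfs scrub status /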
The system seems okay. After boot I had a corrupted journal file, which got rotated away, but other than that everything looks fine. Some questions to ask myself, though:
- Should I continue to use this disk, even though it might have hardware issues?
- Should I continue to use btrfs + luks + lvm, even though btrfs is a bit experimental?
Yes to both. Let’s live on in certain uncertainty <3