I’ve been working on mounting a remote system with SSHFS via an entry in /etc/fstab. I’d already gotten this working on other systems for different use cases, so I thought I’d try it on the one computer that has always seemed to give me trouble.
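For reference, the kind of entry I mean looks roughly like this (the user, host, and paths here are placeholders, not my actual setup):

    # /etc/fstab -- example SSHFS entry; user, host, and paths are hypothetical
    user@remotehost:/srv/data  /mnt/remote  fuse.sshfs  _netdev,IdentityFile=/root/.ssh/id_ed25519,allow_other  0  0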
This computer has a ZFS pool configured as RAIDZ2 across 8 drives. It had been working fine through many reboots over the past couple of months, but today, after a reboot, it stopped mounting the ZFS filesystem. Watching the boot process, I noticed a message indicating that the ZFS filesystem had failed to mount. The computer still came up, since the boot device is a smallish SSD that isn’t part of the ZFS pool, so I could log in and look around. Where the filesystem was supposed to mount, I found only one folder where there should have been many.
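If you run into something similar, a few commands along these lines (tank is a placeholder pool name) will show whether the pool imported and which datasets actually mounted:

    zpool status tank                            # pool imported and healthy?
    zfs list -r -o name,mounted,mountpoint tank  # which datasets are mounted where
    ls -la /tank                                 # what is actually sitting at the mount point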
I examined that folder. Its contents pertained to the SSHFS mount I was trying to get working. Apparently, when an SSHFS mount is done through fstab and its mount point is missing at boot, the necessary folders get created. This is the heart of the problem: when the computer boots and tries to mount the ZFS filesystem, the mount fails if the target folder already exists and contains anything.
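You can reproduce the failure by hand. With anything sitting in the mount point, ZFS refuses to mount and prints something like this (tank/data is a hypothetical dataset):

    # mkdir -p /tank/data/leftover     # simulate stale contents at the mount point
    # zfs mount tank/data
    cannot mount '/tank/data': directory is not empty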
This behavior is well documented. One way around it is the -O (overlay) option on the zfs mount command, when mounting manually. Using that workaround isn’t a solid solution, IMHO.
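For the record, the workaround looks like this when mounting by hand (same hypothetical tank/data dataset). The -O flag does an overlay mount on top of the existing contents, which is exactly why I don’t consider it solid: whatever was in the folder is just hidden underneath rather than dealt with. On OpenZFS there is also, as far as I know, a per-dataset overlay property that makes this persistent:

    zfs mount -O tank/data        # overlay-mount despite the non-empty directory
    zfs set overlay=on tank/data  # persistent equivalent, if your ZFS supports it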
To resolve it, I commented out the SSHFS mount line in /etc/fstab, deleted the (empty) folders and sub-folders it had created, and rebooted. After the reboot the system was back up and normal.
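For completeness, the cleanup amounted to roughly this (the path is a placeholder for whatever the SSHFS entry created under the ZFS mount point):

    # 1. Comment out the SSHFS line in /etc/fstab (just edit the file).
    # 2. Remove the directories it created. rmdir -p removes the folder and its
    #    empty parents, and refuses to touch anything that isn't empty.
    rmdir -p /tank/mnt/remote
    # 3. Reboot, or remount the datasets without rebooting:
    zfs mount -a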