sshfs (SSH FileSystem) is a FUSE-based file system that allows you to mount remote directories over SSH (or, more specifically, over the SFTP subsystem).
All of my Linux machines mount remote directories from a central NAS on my home network. Some machines connect over 100Mb LAN (though effectively slower, as I actually run PowerLine adapters between the machines and the NAS), but the most frequently used machines (laptops/netbooks) connect via wireless: one at 802.11n, another at 802.11g speeds.
Recently, NFS performance has been poor on the Acer Aspire One (A110) netbook running UNR, which is the most used device in the home. To improve this without resorting to an 802.11n USB dongle, I recently started to look at sshfs. SSH is stable – I’ve never had issues scping files between the NAS and the netbook. NFS hooks into the kernel at a lower level, and when NFS hangs, the netbook becomes unstable enough to warrant a reboot. “Turn it off and on again” just isn’t an option for something that should just work.
sshfs uses SFTP to present the remote filesystem on the local machine, so make sure the sftp subsystem is enabled in your sshd_config on the server:
Subsystem sftp /usr/libexec/sftp-server
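Before mounting, it’s worth confirming that line is actually present. A minimal sketch of the check, using a temporary sample file in place of the real config (on most systems the real file is /etc/ssh/sshd_config):

```shell
# Write a sample sshd_config fragment standing in for the server's real one.
sample=$(mktemp)
cat > "$sample" <<'EOF'
Subsystem sftp /usr/libexec/sftp-server
EOF

# Check whether the sftp subsystem is enabled.
status="sftp subsystem missing"
grep -Eq '^Subsystem[[:space:]]+sftp' "$sample" && status="sftp subsystem enabled"
echo "$status"
```

On a real server, point the grep at /etc/ssh/sshd_config instead, and restart sshd after any change.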
The great thing about sshfs being built on SFTP and FUSE is that (as shown here initially) you can run it as a regular, unprivileged user.
On my Qnap NAS I have a number of NFS exports. To test the performance I used an NFS export containing lots of photos.
The NFS export is mounted as follows:
nas:/share/NFS/test /media/test nfs rw,bg,tcp,wsize=32768,rsize=32768,vers=3
The general syntax for sshfs is
sshfs user@host:/directory /mountpoint
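A minimal session looks like this (user, host, and paths are placeholders; fusermount ships with the FUSE userspace tools, and a plain umount as root also works):

```shell
# Create a mount point and mount the remote directory as a regular user.
mkdir -p ~/mnt/nas
sshfs user@host:/directory ~/mnt/nas

# ...work with the files as if they were local...

# Unmount as the same user – no root required.
fusermount -u ~/mnt/nas
```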
After a number of tests, the following options gave me the best balance of performance and reliability:
sshfs -o idmap=user -o uid=1000 -o gid=1000 -o workaround=nodelaysrv:buflimit -o no_check_root -o kernel_cache -o auto_cache admin@nas:/share/MD0_DATA/test /media/sshfs
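If the mount proves itself, it can be made persistent. A hedged sketch of an /etc/fstab entry using the fuse.sshfs type (the options mirror the command line above; key-based authentication is assumed, since fstab mounts cannot prompt for a password):

```
admin@nas:/share/MD0_DATA/test /media/sshfs fuse.sshfs idmap=user,uid=1000,gid=1000,workaround=nodelaysrv:buflimit,no_check_root,kernel_cache,auto_cache,noauto,user 0 0
```

The noauto,user pair lets a regular user mount it on demand with `mount /media/sshfs` rather than mounting at boot.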
Note that when you execute this command it will prompt for the password of the specified user. Normal SSH rules apply here – SSH keys are the way to provide secure, seamless, unprompted access [or prompted, if you protect your key with a passphrase, of course].
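Setting up key-based access is the standard ssh-keygen/ssh-copy-id dance (the hostname is a placeholder; both tools ship with OpenSSH):

```shell
# Generate a key pair if you don't already have one.
# Press Enter for an empty passphrase if you want unprompted mounts,
# at the cost of some security.
ssh-keygen -t rsa

# Copy the public key to the NAS – you'll be asked for the password once.
ssh-copy-id admin@nas

# Subsequent sshfs mounts should no longer prompt.
```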
So this gives me two areas on my wireless laptop:
/media/test is the NFS mount point of /share/MD0_DATA/test on ‘nas’
/media/sshfs is the sshfs mount point of /share/MD0_DATA/test on ‘nas’
It is crucial that you add the following to your SSH client config – per user this lives in ~/.ssh/config (the system-wide file is /etc/ssh/ssh_config):
ServerAliveInterval 15
ServerAliveCountMax 3
This avoids the sshfs mount hanging after a timeout – which is quite messy to clean up.
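For reference, when a mount does wedge, something along these lines usually recovers it without a reboot (a sketch; the lazy variants detach the mount even while the endpoint is unreachable):

```shell
# Try a normal user-level unmount first.
fusermount -u /media/sshfs

# If that hangs because the server is unreachable, lazily detach it.
fusermount -uz /media/sshfs

# Or, as root, the equivalent lazy unmount:
umount -l /media/sshfs
```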
Performance: sshfs vs NFS
The performance tests were rough and ready, but I needed something representative of real-world use. I timed directory listings/finds and also judged things visually in GNOME, since the whole point is to fix performance issues on a netbook.
The test area had 4,501 photos of various formats and file sizes.
time find /media/sshfs
time find /media/test
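The same walk can be scripted to repeat runs and count entries. Here is a minimal, self-contained sketch of the approach, using a throwaway directory in place of the real mount points (an assumption for illustration):

```shell
# Build a small throwaway tree standing in for a mount point.
dir=$(mktemp -d)
mkdir -p "$dir/a" "$dir/b"
touch "$dir/a/1" "$dir/b/2"

# Walk it and report how many entries were visited.
# On the real mounts, wrap the find in `time` as above.
count=$(find "$dir" | wc -l)
echo "walked $count entries"

rm -rf "$dir"
```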
I can repeat those tests over and over, and NFS is consistently quicker than sshfs. Visually, GNOME creates thumbnails more slowly under sshfs than under NFS – but it is still acceptable. The reason I’m looking at improving the performance of the remote filesystems currently mounted under NFS is the instability witnessed using NFS – although there is a caveat to this stability…
So, NFS is quicker than sshfs, but is that enough of a reason not to use sshfs? I think sshfs is a great idea and will certainly be used for some parts of my home network. It could easily work its way into the enterprise too, as a replacement for the age-old habit of using scp – especially when combined with an automount setup, and for that niggling question of “do I really want to run NFS on that server just to access some files?”.
Will I use this at home completely in place of NFS? I’m not so sure – the jury’s still out. I’m currently running UNR on the netbook and I’ve tracked down another issue with NFS over wireless: the latest kernels seem to be the cause of the instability with my ath5k wireless driver. The Ubuntu 2.6.32-23 kernels and later appear to trigger the issue. I’m currently running the older 2.6.32-21 kernel and all is well… for now.