Auto mount (autofs) sshfs access

1. Install autofs

Ubuntu/Debian: sudo apt-get install autofs
Red Hat/Fedora based: sudo yum install autofs

2. Edit /etc/auto.master and add a line:

/media/sshfs   /etc/auto.sshfs uid=1000,gid=1000,--timeout=30,--ghost

3. Edit /etc/auto.sshfs

mountpoint   -fstype=fuse,rw,nodev,nonempty,allow_other,reconnect,uid=1000,gid=1000,max_read=65536,compression=yes,auto_cache,no_check_root,kernel_cache :sshfs\#user@server\:/remotedir
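
As a concrete illustration using the Qnap NAS that appears later in this post (the "photos" key is just a hypothetical name for the mount point; substitute your own user, host and path):

photos   -fstype=fuse,rw,nodev,nonempty,allow_other,reconnect,uid=1000,gid=1000,max_read=65536,compression=yes,auto_cache,no_check_root,kernel_cache :sshfs\#admin@nas\:/share/MD0_DATA/test

With the auto.master entry above, that share would then appear at /media/sshfs/photos.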

4. Make the autofs mount point

mkdir -p /media/sshfs

5. SSH Access using Keys – for root

To make efficient use of sshfs – and as a prerequisite for autofs – you need to set up key-based SSH authentication. Specifically, you must be able to ssh from the root user to the target user on the remote filesystem using keys, since autofs performs the mount as root.

# As root on the local machine: generate a key pair
ssh-keygen -t rsa
# Copy the public key to the remote server
scp .ssh/id_rsa.pub user@server:
# Log in to the remote server and install the key
ssh user@server
mkdir --mode=0700 -p .ssh
cat id_rsa.pub >> .ssh/authorized_keys
chmod 0600 .ssh/authorized_keys

Now test that you can log in to user@server from the root user without being prompted for a password.
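
A quick way to verify this non-interactively – BatchMode makes ssh fail outright rather than prompt, so a key problem shows up immediately:

# Run as root; BatchMode=yes forces a failure instead of a password prompt
sudo ssh -o BatchMode=yes user@server echo key-auth-ok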

6. Start Autofs

Ubuntu/Debian: sudo /etc/init.d/autofs start
RedHat/Fedora: sudo service autofs start
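
If you edit /etc/auto.master or the map file later, tell autofs to re-read them (assuming the SysV-style init scripts of the time):

sudo /etc/init.d/autofs reload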

7. Access your remote filesystem by going to /media/sshfs/mountpoint

cd /media/sshfs/mountpoint

You should now be able to access the remote machine as if it were part of your local filesystem.
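
Any access triggers the automount, so a simple df against the path (using the generic mountpoint key from step 3) is a handy first check:

# Touching the path makes autofs perform the sshfs mount on demand
df -h /media/sshfs/mountpoint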

sshfs – SSH FileSystem, a replacement for NFS over wireless?

sshfs (SSH FileSystem) is a FUSE based file system that allows mounting of remote directories using SSH (or more specifically, sftp).
I have all my Linux machines mounting remote directories from a central NAS on my home network. Some machines connect over 100Mb/s wired LAN (though the effective rate is lower, as I actually run PowerLine adapters between those machines and the NAS), but the more frequently used machines (laptops/netbooks) connect via wireless: one at 802.11n, another at 802.11g speeds.

Recently, NFS performance has been poor on the UNR Acer Aspire One (A110) netbook, which is the most used device in the home.  To improve this without resorting to adding an 802.11n USB dongle, I recently started to look at sshfs.  SSH is stable – I’ve never had issues scping files between the NAS and the netbook.  NFS hooks into the kernel at a lower level, and when NFS hangs, the netbook becomes unstable enough to warrant a reboot. Turning it off and on again just isn’t an option for something that “should just work”.

sshfs uses SFTP to present the remote filesystem on the local machine, so make sure the sftp subsystem is enabled in the sshd_config on the server:

Subsystem       sftp    /usr/libexec/sftp-server
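
Since sshfs speaks SFTP under the hood, a failing sftp session is the first thing to check if an sshfs mount refuses to work:

# If this connects, the sftp subsystem is working and sshfs should too
sftp user@server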

Now the great thing about sshfs being based on SFTP and implemented with FUSE is that you can run it as a regular user – no root required for the mount itself.
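
As a minimal sketch (the mount point under the home directory is just an example name):

# Mount as an ordinary user – no root required
mkdir -p ~/remote
sshfs user@server:/remotedir ~/remote
# ... and unmount again when finished
fusermount -u ~/remote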

On my Qnap NAS I have a number of NFS exports. To test the performance I used an NFS export containing lots of photos.

NFS

The NFS export is the following

"/share/NFS/test" *(rw,nohide,async,no_root_squash)
This is mounted with the following options in fstab
nas:/share/NFS/test /media/test nfs rw,bg,tcp,wsize=32768,rsize=32768,vers=3
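
With that fstab entry in place, the share can be mounted by its mount point and the options in effect checked – a generic sanity check, nothing specific to the Qnap:

sudo mount /media/test
# Confirm the mount and the options actually negotiated
mount | grep /media/test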

sshfs

The general syntax for sshfs is

sshfs user@host:/directory /mountpoint

After a number of tests, the following options gave me the best balance of performance and reliability:

sshfs -o idmap=user -o uid=1000 -o gid=1000 -o workaround=nodelaysrv:buflimit -o no_check_root -o kernel_cache -o auto_cache admin@nas:/share/MD0_DATA/test /media/sshfs

Note that when you execute this command it will ask for the password of the user specified. Normal SSH rules apply here – keys are the way to provide secure, seamless, unprompted access (or prompted, if you protect your key with a passphrase, of course).
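
When you’re done with a manually mounted sshfs filesystem, release it with FUSE’s unmount helper rather than plain umount (unless you are root):

fusermount -u /media/sshfs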

So this gives me two areas on my wireless laptop:

/media/test is the NFS mount point of /share/MD0_DATA/test on ‘nas’

/media/sshfs is the sshfs mount point of /share/MD0_DATA/test on ‘nas’

SSH Timeout

It is crucial that you add the following to your SSH client config, which lives at ~/.ssh/config for a single user (or /etc/ssh/ssh_config system-wide):

ServerAliveInterval 15
ServerAliveCountMax 3

This is to avoid the SSHFS mount hanging after a timeout – which is quite messy to clean up.
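
If you’d rather not apply the keepalives globally, a per-host stanza does the same job – for example, for the NAS used in this post:

Host nas
    ServerAliveInterval 15
    ServerAliveCountMax 3

And if a mount does hang despite this, a lazy unmount is the least messy way to clean it up (standard FUSE usage, not specific to my setup):

fusermount -uz /media/sshfs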

Performance sshfs vs NFS

Performance tests were rough and ready, but I needed them to represent the real world.  I timed directory listings/finds and also judged performance visually in Gnome, since the point of the exercise is to fix perceived performance on the netbook – that is the crux of the issue.

The test area had 4,501 photos of various formats and file sizes.

sshfs

time find /media/sshfs

real 0m6.254s
user 0m0.010s
sys 0m0.110s

NFS

time find /media/test

real 0m3.738s
user 0m0.020s
sys 0m0.110s
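
If you want to reproduce timings like these, bear in mind the kernel’s page cache: repeated runs will be served from cache and look artificially fast. Dropping the caches between runs (a control my rough-and-ready tests didn’t apply) keeps the comparison honest:

# Flush dirty pages, then drop page/dentry/inode caches before each run
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches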

Conclusion

I can repeat those tests over and over, and NFS is consistently quicker than sshfs. Visually, Gnome creates thumbnails more slowly under sshfs than under NFS – but it is still acceptable.  The reason I’m looking at improving the performance of the remote filesystems currently mounted under NFS is the instability I’ve witnessed using NFS – although there is a caveat to this stability…

So, NFS is quicker than sshfs, but is that enough of a reason not to use sshfs? I think sshfs is a great idea and will certainly be used for some parts of my home network.  It will easily work its way into the enterprise too as a replacement for the age-old habit of using scp – especially when combined with an automount setup, and given that niggling question of “do I really want to run NFS on that server just to access some files?”.

Will I use this completely at home in place of NFS? I’m not so sure – the jury’s still out.  I’m currently using UNR on the netbook and I’ve tracked down another issue with NFS over wireless – the latest kernels seem to be the cause of the instability with my ath5k wireless driver.  The Ubuntu 2.6.32-23 kernels and later appear to cause my issue.  I’m currently running an older 2.6.32-21 kernel and all is well… for now.