Securely Sharing Storage with NFS and Tailscale

In this post, I will explain how to securely share storage with NFS and Tailscale.

I have two VPSes: a production VPS (with NVMe) and a storage VPS (HDD with SSD cache), so it's convenient for me to back up some of my website data from the production VPS to the storage VPS over NFS.

NFS is unencrypted by default, so it's not secure at all if you use it over the public internet. However, you can encrypt the traffic with WireGuard, as on Daniel's blog, or with Tailscale, as in this tutorial 😁.

If you have installed Tailscale and your machines are already joined to your tailnet, each machine gets a private Tailscale IP (you can print it with tailscale ip -4). Replace my IPs throughout this post with yours.


This tutorial will use the following machines, so make sure you replace their Tailscale IPs with your own:

  • redhawk (primary server, the NFS client)
  • storagevm (storage server, the NFS server)

How Does NFS Work?


What is NFS

NFS (Network File System) is an Internet-standard client/server protocol developed in 1984 by Sun Microsystems to support shared, originally stateless, file access to LAN-attached network storage. NFS enables a client to view, store, and update files on a remote computer as if they were stored locally, and it lets us share files between all major operating systems. The main versions in deployment these days (client and server) are NFSv3, NFSv4, and NFSv4.1.

Why Use NFS

  • NFS lets us access remote files on the local system.
  • We can set up centralized storage using NFS.
  • We can back up our data to a remote server with the help of NFS.
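As a tiny sketch of the backup use case: once the share is mounted, backing up into it is just an ordinary copy. Below, two temporary directories stand in for the real paths (the website directory and the /mnt/data mount point used later are my assumptions, not fixed by this tutorial):

```shell
# "site" stands in for the website data on the production VPS,
# "mount_pt" stands in for the NFS mount (e.g. /mnt/data).
site=$(mktemp -d)
mount_pt=$(mktemp -d)
echo "backup me" > "$site/db-dump.sql"
cp -a "$site/." "$mount_pt/"   # on a real setup this copy lands on the storage VPS
ls "$mount_pt"                 # db-dump.sql
```

In practice you would point the copy (or a tool like rsync) at the actual NFS mount point instead of a temp directory.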

NFS Server

On the NFS server (here: storagevm, the storage server), let's first update the system packages using this command:

sudo apt update

next, install nfs-kernel-server package :

sudo apt install nfs-kernel-server

The next step is to create a directory that will be shared with client systems. This is also referred to as the export directory, and it is the directory NFS clients will access.

Run the command below by specifying the NFS mount directory name.

sudo mkdir -p /data/hello-world

Edit the /etc/exports file, which lists the server's filesystems to export over NFS to client machines.

The following example adds a line that exports the path /data/hello-world and grants access to the NFS client's Tailscale IP (here: redhawk; the <client-tailscale-ip> placeholder below stands in for it):

echo "/data/hello-world <client-tailscale-ip>(rw,sync,no_root_squash,no_subtree_check)" >> /etc/exports

explanation :

  • rw: Stands for Read/Write.
  • sync: Requires changes to be written to the disk before they are applied.
  • no_root_squash: since both machines in this tutorial run as root, you'll need this option for things to work properly.
  • no_subtree_check: Eliminates subtree checking.
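Putting these together, each /etc/exports line follows the shape export_path client(options). As a sketch, here is the line from above assembled in a temporary file first, with <client-tailscale-ip> as a placeholder for redhawk's Tailscale address:

```shell
# Assemble the exports line; <client-tailscale-ip> is a placeholder, not a real host.
export_path='/data/hello-world'
client='<client-tailscale-ip>'
opts='rw,sync,no_root_squash,no_subtree_check'
tmpfile=$(mktemp)
printf '%s %s(%s)\n' "$export_path" "$client" "$opts" > "$tmpfile"
cat "$tmpfile"   # /data/hello-world <client-tailscale-ip>(rw,sync,no_root_squash,no_subtree_check)
```

Building the line in a scratch file like this lets you eyeball it before appending it to the real /etc/exports.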

next, create the NFS table :

exportfs -a

don't forget to restart nfs-kernel-server

systemctl restart nfs-kernel-server

Optional: NFSv4 Only

Skip this step if you need NFSv3 as well.

A best practice these days is to enable only NFSv4 unless you really need NFSv3. If you use Windows or macOS machines as NFS clients, enable NFSv3 and NFSv4 simultaneously, but don't forget about the security risks of NFS with clients that cannot be trusted.

To enable only NFSv4, set the following variables in /etc/default/nfs-common (the standard values for an NFSv4-only setup):

NEED_STATD="no"
NEED_IDMAPD="yes"
Next, add the following variables in /etc/default/nfs-kernel-server. Note that RPCNFSDOPTS is not present by default and needs to be added. The -H flag makes nfsd listen only on the storage server's Tailscale IP, so replace the placeholder with your storage VM's address:

RPCNFSDOPTS="-N 2 -N 3 -H <storagevm-tailscale-ip>"
RPCMOUNTDOPTS="--manage-gids -N 2 -N 3"
Additionally, rpcbind is not needed by NFSv4 but will be started as a prerequisite by nfs-server.service. This can be prevented by masking rpcbind.service and rpcbind.socket:

systemctl mask rpcbind.service
systemctl mask rpcbind.socket

and restart nfs-kernel-server

systemctl restart nfs-kernel-server

NFS Client

On the NFS client (here: redhawk, the primary server), you need to install the nfs-common package. As usual, don't forget to update your system packages first! 😅

apt update
apt install nfs-common

After that, let's create a mount point on which you will mount the nfs share from the NFS server.

mkdir -p /mnt/data

Now, you can use the mount command to mount the directory over NFS:

mount -t nfs <storagevm-tailscale-ip>:/data/hello-world /mnt/data

Replace <storagevm-tailscale-ip> with your storage VM's Tailscale IP.
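If you want the share mounted automatically at boot instead of mounting by hand, you can add an entry to /etc/fstab. A sketch (the nfs4 filesystem type and the _netdev option, which delays mounting until the network is up, are my suggestions; the IP placeholder must be replaced with your storage VM's Tailscale IP):

```
<storagevm-tailscale-ip>:/data/hello-world  /mnt/data  nfs4  defaults,_netdev  0  0
```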

Verify NFS Server and NFS Client

After redhawk (the NFS client) has successfully mounted storagevm:/data/hello-world to the local /mnt/data directory, we can verify by creating a few files:

cd /mnt/data

touch a b c z && echo "okemantap" > z

Now check /data/hello-world on storagevm (the storage server):

cd /data/hello-world
ls

You should see the files a, b, c, and z that were just created from the client.