NFS storage management in a Kubernetes cluster

Raman Pandey
May 8, 2019

There are many ways to store data for Kubernetes pods. Here we will learn how to set up NFS storage in a Kubernetes cluster for persistent storage. This storage is tied to the lifetime of the cluster rather than the lifetime of a pod, so data survives pod restarts and rescheduling. Network File System (NFS) storage is a good way to keep pod-specific data around for a long time rather than relying on ephemeral pod volumes.

Kubernetes manages storage through two resources: PersistentVolume and PersistentVolumeClaim. There is a third mechanism, StorageClass, which provides dynamic provisioning of storage; we will cover it in a later article.

A PersistentVolume is a Kubernetes resource that defines a piece of storage in the cluster, and a PersistentVolumeClaim is a request for that storage, which is then consumed by the deployments/pods of the cluster.

First we need to set up the NFS server. Choose a node that is reachable from every node in the Kubernetes cluster; as a good practice, the NFS server should be a machine outside the cluster. To set up the NFS server, follow these steps:
1. Install the NFS server packages on the node
$ sudo apt update && sudo apt upgrade -y
$ sudo apt-get install nfs-kernel-server nfs-common -y

2. Create a directory on the NFS server that will back the pod volumes, and open up its permissions
$ sudo mkdir /kubedata
$ sudo chmod 777 /kubedata

3. Now export this directory: open the /etc/exports file and add the following line
/kubedata *(rw,sync,no_subtree_check,insecure)
This means the /kubedata directory accepts connections from all IPs (you can replace * with a specific IP or subnet to restrict access), clients can read and write, and sync forces NFS to write changes to disk before acknowledging them.

4. Apply the export by executing sudo exportfs -rav
Your NFS server is now ready to serve data from the /kubedata directory.
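
As an optional sanity check (assuming an Ubuntu/Debian host, where the service is named nfs-kernel-server), you can confirm that the NFS service is running and that the directory is really exported:
$ sudo systemctl status nfs-kernel-server
$ sudo exportfs -v
$ showmount -e localhost
The last command should list /kubedata among the exports.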

One important thing to note is that the cluster nodes will access this server, so the NFS client must be present on each of them. Install it by installing nfs-common (sudo apt install nfs-common) on every cluster node, and make sure each node can reach the NFS server (a successful ping is a quick check).
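
Before involving Kubernetes at all, it can also be useful to do a quick manual mount test from one of the worker nodes (the IP below is a placeholder for your NFS server's address, and /mnt is just a convenient temporary mount point):
$ sudo mount -t nfs <nfs_server_ip>:/kubedata /mnt
$ df -h /mnt
$ sudo umount /mnt
If the mount succeeds, Kubernetes will be able to mount the same export for the pods.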

Now go to the master node and create the persistent volume and persistent volume claim. Below are example manifests for the persistent volume (nfs-pv.yaml) and the persistent volume claim (nfs-pvc.yaml):
root@master:# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  storageClassName: slow
  persistentVolumeReclaimPolicy: Retain
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 161.X.X.X   # replace with your NFS server IP
    path: "/kubedata"
    readOnly: false

root@master:# cat nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: slow
  resources:
    requests:
      storage: 500Mi

Now create these resources with:
kubectl apply -f nfs-pv.yaml
kubectl apply -f nfs-pvc.yaml

When Kubernetes sees that a persistent volume claim can be satisfied by an existing persistent volume (matching storageClassName, a compatible access mode, and enough capacity), it binds the claim to that volume, and the result looks something like this:
root@master:~# kubectl get pv,pvc
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
persistentvolume/nfs-pv   1Gi        RWO            Retain           Bound    default/nfs-pvc   slow                    3h11m

NAME                            STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nfs-pvc   Bound    nfs-pv   1Gi        RWO            slow           3h8m
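
If the claim shows Pending instead of Bound, describing it usually explains why (for example, no persistent volume with a matching storageClassName, access mode, or enough capacity):
$ kubectl describe pvc nfs-pvc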

Now this claim can be used in any deployment/pod to mount the NFS-backed volume, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx   # placeholder image, replace with your application's image
        volumeMounts:
        - name: myapp-storage
          mountPath: /path/to/data
          subPath: myapp
      volumes:
      - name: myapp-storage
        persistentVolumeClaim:
          claimName: nfs-pvc

One important detail here: in the deployment, the name under volumeMounts must match the name in the volumes spec, which is myapp-storage in this case.
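
To verify the whole chain end to end, a rough sketch (myapp-deploy.yaml is whatever file you saved the deployment above in, and the pod name will be different on your cluster):
$ kubectl apply -f myapp-deploy.yaml
$ kubectl get pods -l app=myapp
$ kubectl exec -it <pod-name> -- sh -c 'echo hello > /path/to/data/test.txt'
Because the volume uses subPath: myapp, the file should then be visible on the NFS server at /kubedata/myapp/test.txt.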

I hope this article helps you set up and understand storage management in a Kubernetes cluster.
Thanks!!!
