8

We are migrating legacy Java and .NET applications from on-premises VMs to an on-premises Kubernetes cluster.

Many of these applications use Windows file shares to transfer files to and from other existing systems. Deploying to Kubernetes has a much higher priority than re-engineering all the solutions to avoid using Samba shares, so if we want to migrate we will have to find a way of keeping many things as they are.

We have set up a 3-node cluster on three CentOS 7 machines using kubeadm and Canal.

I could not find any actively maintained plugin or library to mount SMB shares, except for Azure volumes.

What I came up with was to mount the SMB shares on each CentOS node using the same mountpoint on all nodes, i.e. "/data/share1".
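Each node carries the mount itself; as a sketch, the /etc/fstab entry on every node looks something like this (the //fileserver/share1 path and the credentials file are placeholders, not our real values):

# /etc/fstab -- same entry on every node; _netdev defers the mount until networking is up
//fileserver/share1  /data/share1  cifs  credentials=/etc/samba/share1.cred,_netdev  0  0

Then I created a local PersistentVolume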

kind: PersistentVolume
apiVersion: v1
metadata:
  name: samba-share-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/data/share1"  # must match the mountpoint prepared on every node

and a claim:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: samba-share-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

and assigned the claim to the application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: samba-share-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: samba-share-deployment
  template:
    metadata:
      labels:
        app: samba-share-deployment
        tier: backend
    spec:
      containers:
      - name: samba-share-deployment
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: samba-share-volume
      volumes:
      - name: samba-share-volume
        persistentVolumeClaim:
          claimName: samba-share-claim

It works from each replica, yet there are big warnings about using local volumes in production, and I do not know any other way to do this or what the actual caveats of this configuration are.

Can I do it another way? Can this be OK if I monitor the mountpoints and cordon the node in Kubernetes when a mount fails?

  • Hmm, yes, local volumes solve a different use case. It sounds like the Samba shares already exist on a central file server. If so, the Linux containers should be able to mount them directly as SMB/CIFS volumes, without using claims; see: https://stackoverflow.com/questions/27989751/mount-smb-cifs-share-within-a-docker-container?noredirect=1&lq=1 – Jonah Benton Feb 12 '18 at 21:15
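In Kubernetes terms, that suggestion amounts to mounting the share from inside the pod itself. A minimal sketch, assuming an image that has cifs-utils installed and a cluster that permits the extra capability (the server, share, credentials file, and image name are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: cifs-direct-mount
spec:
  containers:
  - name: app
    image: my-app-with-cifs-utils   # hypothetical image bundling mount.cifs
    securityContext:
      capabilities:
        add: ["SYS_ADMIN"]          # mount(2) inside the container needs this
    command: ["/bin/sh", "-c"]
    args:
      - |
        mkdir -p /mnt/share1
        mount -t cifs //fileserver/share1 /mnt/share1 -o credentials=/etc/cifs.cred
        exec my-app

The trade-off is that every pod needs elevated privileges and the mount logic leaks into each container image, which is what a volume driver avoids.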

1 Answer

5

I asked the same question on r/kubernetes and a user replied with this. We are trying it now and it seems OK.

https://www.reddit.com/r/kubernetes/comments/7wcwmt/accessing_windows_smbcifs_shares_from_pods/duzx0rs/

We had to deal with a similar situation, and I ended up developing a custom FlexVolume driver to mount CIFS shares into pods, based on examples I found online.

I have published a repo with the solution that works for my use case.

https://github.com/juliohm1978/kubernetes-cifs-volumedriver

You still need to install cifs-utils and jq on each Kubernetes host as a prerequisite, but it does allow you to create PersistentVolumes that mount CIFS shares and use them in your pods.
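Roughly, a PersistentVolume using the driver looks like the sketch below. The driver name and option keys follow my reading of the repo's README, so double-check there; the server, share, and Secret name are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cifs-share1-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  flexVolume:
    driver: juliohm/cifs          # driver name as published in the repo
    options:
      server: fileserver          # placeholder SMB host
      share: /share1              # placeholder share path
    secretRef:
      name: cifs-credentials      # Secret with the share's username/password

A claim against it then plugs into a Deployment the same way as in the question.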

I hope it helps.