This time round we’ll be installing the Dell CSI driver for the Unity storage platform into a Rancher RKE2 environment. To do this we use the “charts” feature embedded in Rancher and deploy the CSI driver from the GUI after some simple prep work in the CLI. Finally, we’ll deploy a small persistent app to show that the CSI driver is functional. Of course there is a demo video as well 🙂
Prepping the RKE2 environment
In the current lab setup I have a small RKE2 cluster that was already prepped for persistent storage (for example it already has the required iSCSI driver installed). In order to add the Dell Unity CSI driver, we need to do some light prep work.
Let’s first create a YAML file (secret.yaml) so that we can create a secret containing the Unity array details:
storageArrayList:
  - arrayId: "arrayIDnr"
    username: "user"
    password: "password"
    endpoint: "https://10.0.0.1/"
    skipCertificateValidation: true
    isDefault: true
Next, we create the namespace, fetch the certificate from the Unity array and create the two secrets:
kubectl create ns unity
openssl s_client -showcerts -connect 10.0.0.1:443 </dev/null 2>/dev/null | openssl x509 \
-outform PEM > ca_cert_0.pem
kubectl create secret generic unity-certs-0 --from-file=cert-0=ca_cert_0.pem -n unity
kubectl create secret generic unity-creds -n unity --from-file=config=secret.yaml
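Before feeding the fetched PEM to kubectl, it can be worth a quick sanity check that the file actually contains a valid certificate. A minimal sketch using openssl (the self-signed demo_cert.pem generated below is just a throwaway stand-in so the snippet is self-contained; in the real workflow you would point openssl at the ca_cert_0.pem fetched from the array above):

```shell
# Stand-in only: create a throwaway self-signed cert so this snippet runs
# without the array being reachable. In practice, skip this step and use
# the ca_cert_0.pem fetched from the Unity array instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=unity-demo" -keyout /tmp/demo_key.pem -out /tmp/demo_cert.pem 2>/dev/null

# The actual check: print the subject and expiry of the PEM.
# An invalid or empty file makes openssl exit non-zero here.
openssl x509 -in /tmp/demo_cert.pem -noout -subject -enddate
```

If this prints a sensible subject and a notAfter date in the future, the PEM is good to go into the unity-certs-0 secret.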
Finally, we create two StorageClasses for the Unity, one for iSCSI and one for NFS, using the example StorageClasses on GitHub:
kubectl create -f unity-iscsi.yaml
kubectl create -f unity-nfs.yaml
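For reference, the iSCSI StorageClass looks roughly like this. This is a sketch based on the example manifests in the csi-unity repository; the arrayId and storagePool values are placeholders you’d replace with your own array details:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: unity-vsa01-iscsi
provisioner: csi-unity.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
parameters:
  protocol: "iSCSI"
  arrayId: "arrayIDnr"      # placeholder: same array ID as in secret.yaml
  storagePool: "pool_1"     # placeholder: a storage pool CLI ID on your Unity
```

The NFS variant follows the same pattern with protocol set to "NFS" plus the NFS-specific parameters from the example manifest.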
With this, all the prep work is done and we can now proceed into Rancher to do the actual install!
Installing the Dell Unity CSI driver using the Rancher GUI
In order to install the Dell CSI driver for Unity, simply go into the Rancher GUI, click on Charts and filter on “Dell”:
From here, we simply click on “Dell CSI Unity”, select the correct namespace (in our case that is “unity”), leave the rest at default and initiate the install. The installation should complete in a matter of seconds, after which the driver is ready to use:
Time to test: Deploying a persistent workload
In order to show that this works, all that remains is to deploy a persistent workload. For this I’ll be using a small microservice app called Yelb. All I need to do is change the StorageClass of the two PVCs that get created in Yelb:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: redis-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: unity-vsa01-iscsi
Just deploy Yelb and look at the resulting PVCs:
rancher20:~/unity # kubectl get pvc
NAME       STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS        AGE
pg-sql-1   Bound    csivol-b2818e88c4   16Gi       RWO            unity-vsa01-iscsi   92s
redis-1    Bound    csivol-f81bc959e4   8Gi        RWO            unity-vsa01-iscsi   92s
That’s it! As usual, there is a demo video recorded of this as well: