How To Install Portworx On A Kubernetes Cluster
Introduction
Portworx is a software-defined persistent storage solution designed and purpose-built for applications deployed as containers via container orchestrators such as Kubernetes, Nomad, Marathon, and Docker Swarm. It is a clustered block storage solution that provides a cloud-native storage layer from which containerized stateful applications can programmatically consume block, file, and object storage services directly through the scheduler. In this tutorial, you will learn how to install Portworx on a Kubernetes cluster consisting of 3 worker nodes.
Portworx complements the Minio object storage server by providing a flexible, scalable, container-granular data services foundation beneath it, while Minio complements Portworx by providing a simple object storage service layer on top of Portworx data services. Both products share a striking simplicity.
In the steps below, you will deploy a 3-node Portworx cluster and then run 3 Nginx instances that use Portworx-backed storage.
Requirements
- A running Kubernetes cluster with 1 master node and 3 worker nodes.
- Root access to all machines.
- If you want to learn how to deploy a Kubernetes Cluster, refer to this tutorial.
Installing Portworx On A Kubernetes Cluster
Prepare hosts with storage
Portworx requires at least some of the nodes in the cluster to have dedicated storage for Portworx to use. Portworx (PX) will then create virtual volumes from these storage pools. In this example, we use a 20GB block device that exists on each node.
Step 1: Verifying Cluster Readiness
Run the command below to verify that all nodes have joined the cluster and are in the Ready state:
$ kubectl get nodes
You should get a similar result:
NAME STATUS ROLES AGE VERSION
master Ready master 5h37m v1.19.3
node01 Ready <none> 5h36m v1.19.3
node02 Ready <none> 5h34m v1.19.3
node03 Ready <none> 5h31m v1.19.3
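If you are scripting this check, kubectl can also block until every node reports Ready. This optional helper is a minimal sketch (the 300-second timeout is an arbitrary choice):
$ kubectl wait --for=condition=Ready node --all --timeout=300s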
Step 2: List block devices on all nodes
This is as easy as running ssh root@node01 lsblk to list the available devices on node01. Note the storage device vdb, which will be used by PX as one of its raw block disks. All the nodes in this setup have the vdb device.
$ ssh root@node01 lsblk
Warning: Permanently added 'node01,172.17.0.11' (ECDSA) to the list of known hosts.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sr0 11:0 1 1024M 0 rom
vda 253:0 0 97.7G 0 disk
├─vda1 253:1 0 93.7G 0 part /
├─vda2 253:2 0 1K 0 part
└─vda5 253:5 0 3.9G 0 part
vdb 253:16 0 20G 0 disk
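If you have more than a couple of nodes, you can check them all in one pass. A simple sketch, assuming the same root SSH access used above:
$ for n in node01 node02 node03; do echo "== $n =="; ssh root@$n lsblk; done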
Step 3: Fetching the Portworx Spec
We will use query parameters in the curl command to customize the spec for our Kubernetes version. To obtain that version, we pipe the output of kubectl version --short into awk and extract the server version number:
$ K8S_VERSION=`kubectl version --short | awk -Fv '/Server Version: /{print $3}'`
Then fetch the installation spec, passing the Kubernetes version and the /dev/vdb device as query parameters:
$ curl -L -s -o px-spec.yaml "https://install.portworx.com/2.6?mc=false&kbver=${K8S_VERSION}&b=true&s=%2Fdev%2Fvdb&c=px-cluster&stork=true&st=k8s"
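Before applying the spec, it is worth sanity-checking that the version variable resolved and that the file downloaded correctly. An optional check (the exact image line in the output will vary with the Portworx release):
$ echo $K8S_VERSION
$ grep -m1 'image:' px-spec.yaml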
Step 4: Apply the Portworx Spec
$ kubectl apply -f px-spec.yaml
Allow up to 5 minutes for the containers to be created. Then, by running the command
$ watch kubectl get pods -n kube-system -l name=portworx -o wide
you should see something like this:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
portworx-7cqpf 1/1 Running 0 7m3s 172.17.0.16 node01 <none> <none>
portworx-hfm5q 1/1 Running 0 7m3s 172.17.0.33 node03 <none> <none>
portworx-qspb6 1/1 Running 0 7m3s 172.17.0.32 node02 <none> <none>
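Instead of watching manually, you can also let kubectl block until the Portworx pods report Ready. An optional alternative using the same label selector (the 10-minute timeout is arbitrary):
$ kubectl wait --for=condition=ready pod -l name=portworx -n kube-system --timeout=600s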
Step 5: Fetch Portworx cluster status
Portworx ships with a pxctl command-line tool that you can use to manage Portworx. The command below executes pxctl status via kubectl inside one of the Portworx pods to fetch the overall cluster status.
First, set up the PX_POD environment variable:
$ PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
Next, use kubectl to execute pxctl status on the cluster.
$ kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status
The output of this command should be similar to this:
Status: PX is operational
License: Trial (expires in 31 days)
Node ID: 5f1e4785-7d91-4e5a-8578-a1127f6f5f80
IP: 172.17.0.16
Local Storage Pool: 1 pool
POOL IO_PRIORITY RAID_LEVEL USABLE USED STATUS ZONE REGION
0 HIGH raid0 20 GiB 3.1 GiB Online default default
Local Storage Devices: 1 device
Device Path Media Type Size Last-Scan
0:1 /dev/vdb STORAGE_MEDIUM_MAGNETIC 20 GiB 18 Mar 21 17:16 UTC
* Internal kvdb on this node is sharing this storage device /dev/vdb to store its data.
total - 20 GiB
Cache Devices:
* No cache devices
Cluster Summary
Cluster ID: px-demo
Cluster UUID: e0a19108-a8f4-4604-98c4-aafd5ccc062c
Scheduler: kubernetes
Nodes: 3 node(s) with storage (3 online)
IP ID SchedulerNodeName StorageNode Used Capacity Status StorageStatus Version Kernel OS
172.17.0.32 a63a8730-6a84-4710-afab-6e92f0393b1d node02 Yes 3.1 GiB 20 GiB Online Up 2.6.3.0-4419aa4 4.4.0-62-generic Ubuntu 16.04.2 LTS
172.17.0.33 8097468c-022f-4bd4-8962-827c5ccc789c node03 Yes 3.1 GiB 20 GiB Online Up 2.6.3.0-4419aa4 4.4.0-62-generic Ubuntu 16.04.2 LTS
172.17.0.16 5f1e4785-7d91-4e5a-8578-a1127f6f5f80 node01 Yes 3.1 GiB 20 GiB Online Up (This node) 2.6.3.0-4419aa4 4.4.0-62-generic Ubuntu 16.04.2 LTS
Warnings:
WARNING: Insufficient CPU resources. Detected: 2 cores, Minimum required: 4 cores
WARNING: Insufficient Memory (RAM) resources. Detected: 2.0 GiB, Minimum required: 4.0 GiB
WARNING: Persistent journald logging is not enabled on this node.
WARNING: Internal Kvdb is not using dedicated drive on nodes [172.17.0.33 172.17.0.32 172.17.0.16]. This configuration is not recommended for production clusters.
Global Storage Pool
Total Used : 9.4 GiB
Total Capacity : 60 GiB
Congrats, now you have a 3-node Portworx cluster up and running!
Let’s now examine the cluster status:
- All 3 nodes are online and use the Kubernetes node names as the Portworx node IDs.
- Observe that Portworx clustered the 20GB block device from each node into a 60GB storage cluster.
- Portworx detected the block device media type as magnetic and created a storage pool for it. If you have different types of devices, for example an SSD, a separate storage pool is created for the SSD type.
- Portworx printed some warnings about our CPU and memory resources, and about the fact that the internal kvdb should use a dedicated drive, which is recommended for production clusters.
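If you want to explore the cluster further, pxctl can also list Portworx volumes. A quick check reusing the PX_POD variable from above (at this point the list should be empty, since we have not created any PersistentVolumeClaims yet):
$ kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl volume list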
Step 6: Create A Storage Class For Dynamic Provisioning
To create a Portworx storage class, we first write the appropriate YAML file and then apply it. Execute the command below:
$ cat > portworx-sc.yaml <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  snap_interval: "70"
  priority_io: "high"
  repl: "1"
  shared: "true"
EOF
And then apply the file with:
$ kubectl apply -f portworx-sc.yaml
Output:
storageclass.storage.k8s.io/portworx-sc created
Now, if you execute kubectl describe storageclass portworx-sc, you will see that the storage class was successfully created:
Name: portworx-sc
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"portworx-sc"},"parameters":{"priority_io":"high","repl":"1","snap_interval":"70"},"provisioner":"kubernetes.io/portworx-volume"}
Provisioner: kubernetes.io/portworx-volume
Parameters: priority_io=high,repl=1,snap_interval=70
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
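For a scripted check, you can also read just the provisioner field with jsonpath; it should print kubernetes.io/portworx-volume, matching the describe output above:
$ kubectl get storageclass portworx-sc -o jsonpath='{.provisioner}'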
Step 7: Create a Persistent Volume Claim
Now it is time to create our PVC. Run the command below:
$ cat > portworx-volume-pvcsc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-sc-01
  annotations:
    volume.beta.kubernetes.io/storage-class: portworx-sc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
And apply the file:
$ kubectl apply -f portworx-volume-pvcsc.yaml
Output:
persistentvolumeclaim/pvc-sc-01 created
We can verify the PVC creation by executing:
$ kubectl describe pvc pvc-sc-01
And the output:
Name: pvc-sc-01
Namespace: default
StorageClass: portworx-sc
Status: Bound
Volume: pvc-aa8f59dd-8818-11eb-af8c-0242ac11000a
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"portworx-sc"},"nam...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-class: portworx-sc
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/portworx-volume
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWX
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ProvisioningSucceeded 3m24s persistentvolume-controller Successfully provisioned volume pvc-aa8f59dd-8818-11eb-af8c-0242ac11000a using kubernetes.io/portworx-volume
Mounted By: <none>
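You can also inspect the backing Portworx volume itself with pxctl, reusing the PX_POD variable from Step 5 and the volume name shown above (the volume name will differ in your cluster):
$ kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl volume inspect pvc-aa8f59dd-8818-11eb-af8c-0242ac11000a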
Step 8: Configure Nginx To Use The Portworx Persistent Volume Claim
Now that we have our persistent volume claim set up, we will create an Nginx Deployment on the Kubernetes cluster to take advantage of the clustered storage across our 3 nodes. Below is the deployment we will use. Go ahead and run the command:
$ cat > nginx.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3 # tells the deployment to run 3 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      securityContext:
        runAsUser: 2000
        runAsGroup: 2000
        fsGroup: 2000
      containers:
        - name: nginx
          image: admintuts/nginx:1.19.8-rtmp-geoip2-alpine
          ports:
            - containerPort: 3080
              name: "http-server"
            - containerPort: 3443
              name: "https-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: nginx-volume
          securityContext:
            allowPrivilegeEscalation: false
      volumes:
        - name: nginx-volume
          persistentVolumeClaim:
            claimName: pvc-sc-01
EOF
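Then apply the deployment:
$ kubectl apply -f nginx.yaml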
By executing
$ kubectl get pods -o wide
we get:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-756df8d64-9tf2v 1/1 Running 0 3m21s 10.42.0.3 node03 <none> <none>
nginx-deployment-756df8d64-tfkhh 1/1 Running 0 3m21s 10.36.0.4 node02 <none> <none>
nginx-deployment-756df8d64-tlbkn 1/1 Running 0 3m21s 10.44.0.5 node01 <none> <none>
As you can see, each pod was scheduled on a different node, and each pod’s IP address is visible. If you want to take a closer look at what has happened under the hood, execute the command:
$ kubectl describe deployment nginx-deployment
Which will output:
Name: nginx-deployment
Namespace: default
CreationTimestamp: Fri, 19 Mar 2021 02:35:28 +0000
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},"spec":{"replica...
Selector: app=nginx
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: admintuts/nginx:1.19.8-rtmp-geoip2-alpine
Ports: 3080/TCP, 3443/TCP
Host Ports: 0/TCP, 0/TCP
Environment: <none>
Mounts:
/usr/share/nginx/html from nginx-volume (rw)
Volumes:
nginx-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvc-sc-01
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-756df8d64 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 3m37s deployment-controller Scaled up replica set nginx-deployment-756df8d64 to 3
As you can see here, all Nginx replicas are using the pvc-sc-01 Persistent Volume Claim, and they are ready to accept traffic.
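To confirm that all replicas really share the same Portworx-backed volume, you can write a file from one pod and read it from another. A minimal sketch using the pod names from the output above (your pod names will differ, and the image is assumed to ship a POSIX shell):
$ kubectl exec nginx-deployment-756df8d64-9tf2v -- sh -c 'echo "Hello from Portworx" > /usr/share/nginx/html/index.html'
$ kubectl exec nginx-deployment-756df8d64-tfkhh -- cat /usr/share/nginx/html/index.html
The second command should print Hello from Portworx, confirming that the ReadWriteMany volume is shared across nodes.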