1. Volume

Test cases for Volume

# Test Case Test Instructions Expected Results
1 Check volume details Prerequisite:

* Longhorn nodes have node tags
* Node disks have disk tags
* Backup target is set to an NFS server or an S3-compatible target

1. Create a workload using Longhorn volume
2. Check volume details page
3. Create volume backup
* Volume Details
* State should be Attached
* Health should be healthy
* Frontend should be Block Device
* Attached Node & Endpoint should show the name of the node the volume is attached to and the path of the volume device file on that node.
* Size should match the volume size specified in the Create Volume step
* Actual Size should be 0 Bi (no data has been written to the volume yet)
* Engine Image should be longhornio/longhorn-engine:<LONGHORN_VERSION>
* Created should indicate the time since the volume was created.
* Node Tags should be empty (no node tags were specified during creation)
* Disk Tags should be empty (no disk tags were specified during creation)
* Last Backup should be empty (no backup has been created)
* Last Backup At should be empty (no backup has been created)
* Instance Manager should contain instance manager image name
* Namespace should match the namespace specified in the Create Volume step.
* PVC Name should be empty (no PV/PVC has been created for that volume yet)
* PV Name should be empty (no PV has been created for that volume yet)
* PV Status should be empty (no PV/PVC has been created for that volume yet)
2 Filter Volumes User should be able to filter volumes using the following filters

* Name
* Node
* Status (Healthy, In progress, Degraded, Faulted, detached)
* Namespace
* Node redundancy (Yes, Limited, No)
* PV Name
* PVC Name
* Node tag
* Disk tag

Notes:

* Limited node redundancy: at least one healthy replica is running on the same node as another replica
* Volume list should match filtering criteria applied.
3 Delete multiple volumes * Prerequisite:
* Create multiple volumes

1. Select multiple volumes and delete
* Volumes should be deleted
4 Attach multiple volumes * Prerequisite:
* Create multiple volumes

1. Select multiple volumes and Attach them to a node
* All volumes should be attached to the same node specified in the attach request.
5 Attach multiple volumes in maintenance mode * Prerequisite:
* Create multiple volumes

1. Select multiple volumes and Attach them to a node in maintenance mode
* All volumes should be attached in maintenance mode to the same node specified in the attach request.
6 Detach multiple volumes * Prerequisite:
* Multiple attached volumes
* Select multiple volumes and detach
* Volumes should be detached
7 Backup multiple Volumes * Prerequisite:
* Longhorn should be configured to point to a backupstore
* Multiple volumes exist and are attached to a node or used by a Kubernetes workload
* Write some data to multiple volumes and compute their checksums
* Select multiple volumes and create a backup
* Restore the volume backups and check their data checksums
* Volume backups should be created
* Volumes restored from backup should contain the same data as when the backup was created
8 Create PV/PVC for multiple volumes Prerequisite:

* Create multiple volumes

1. Select multiple volumes
2. Create a PV, specify filesystem
3. Check the PV in the Longhorn UI and in Kubernetes (a field-check sketch follows this table)
4. Create a PVC
5. Check the PVC in the Longhorn UI and in Kubernetes
6. Delete the PVC
7. Check the PV in the Longhorn UI and in Kubernetes
* For all selected volumes
* PV should be created
* PV/PVC status in the UI should be Available
* PV spec.csi.fsType should match the filesystem specified in the PV creation request
* PV spec.storageClassName should match the setting in Default Longhorn Static StorageClass Name
* PV spec.csi.volumeHandle should be the volume name
* After PVC creation, PV/PVC status should be Bound in the Longhorn UI
* PVC namespace should match the namespace specified in the PVC creation request
* After deleting the PVC, PV/PVC status should be Released in the Longhorn UI.
9 Volume expansion Check that the volume expansion test cases work when run against multiple volumes

Test cases are in the Volume Details page
Volume expansion should work for multiple volumes.
10 Engine Offline Upgrade For Multiple Volumes Prerequisite:

* Volume is consumed by Kubernetes deployment workload
* Volume uses an old Longhorn engine image

1. Write data to the volume, compute its checksum (checksum#1)
2. Scale down the deployment; the volume gets detached
3. Upgrade the volume's engine image to the newly deployed engine image
4. Scale up the deployment; the volume gets attached
* Volume read/write operations should work before and after the engine upgrade.
* The old engine image's Reference Count should decrease by 1
* The new engine image's Reference Count should increase by 1
12 Show System Hidden Prerequisite:

* Volume is created and attached to a pod.

1. Click the volume on the volume list page; it takes the user to the volume details page.
2. Take a snapshot and update the replica count to trigger replica rebuilding.
3. Under the snapshot section, enable the option ‘Show System Hidden’
Enabling this option shows system-created snapshots generated while replicas are rebuilding.
13 Event log Prerequisite:

* Volume is created and attached to a pod.

1. Click the event log to expand it
Verify the details appearing in the logs.
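
The PV field checks in test case 8 can be scripted. Below is a minimal sketch using the Kubernetes Python client, assuming kubeconfig access to the cluster; the PV/volume names, the expected filesystem, and the `longhorn-static` storage class name are placeholders that should match what was used in the test.

```python
# Minimal sketch for test case 8: verify the PV fields created by Longhorn for a volume.
# Assumes the `kubernetes` Python client is installed and KUBECONFIG points at the cluster.
# `pv_name`, `expected_fs`, and `expected_sc` are placeholders for the values used in the test.
from kubernetes import client, config

def check_longhorn_pv(pv_name: str, volume_name: str,
                      expected_fs: str = "ext4",
                      expected_sc: str = "longhorn-static") -> None:
    config.load_kube_config()                      # or load_incluster_config()
    pv = client.CoreV1Api().read_persistent_volume(pv_name)

    # The PV created for a Longhorn volume is CSI-backed; volumeHandle is the volume name.
    assert pv.spec.csi is not None, "expected a CSI-backed PV"
    assert pv.spec.csi.volume_handle == volume_name
    assert pv.spec.csi.fs_type == expected_fs
    assert pv.spec.storage_class_name == expected_sc
    print(pv_name, "status:", pv.status.phase)     # Available / Bound / Released

if __name__ == "__main__":
    check_longhorn_pv("test-vol-1", "test-vol-1")  # hypothetical names
```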

Replica

# Test Case Test Instructions Expected Results Automated ? / test name
1 Replica list 1. Create a volume and change the default number of replicas
2. Attach volume to a node
3. Check replica list
* Number of replicas should match the number specified in the volume creation request
* All replicas should be Running and Healthy
* Replica info should also include the node name, replica instance manager name, and replica path
test_volume_update_replica_count
2 Update volume replica count (increase) 1. Create a volume
2. Attach volume to a node
3. Increase the replica count by 1 (a replica-count patch sketch follows this table)
* New system hidden snapshot should be created
* A new replica should be created
* The new replica should be Running and Rebuilding
test_volume_update_replica_count
3 Update volume replica count (decrease) 1. Create a volume
2. Attach volume to a node
3. Decrease the replica count by 1
4. Delete a replica
* After decreasing the replica count, nothing should happen
* Deleting a replica will not trigger replica rebuild
test_volume_update_replica_count
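
The replica-count changes in test cases 2 and 3 can also be driven through the API instead of the UI. The sketch below patches `spec.numberOfReplicas` on the Longhorn Volume custom resource with the Kubernetes Python client; it assumes Longhorn runs in the `longhorn-system` namespace and serves the `longhorn.io/v1beta2` API (older releases expose `v1beta1`), and the volume name is a placeholder.

```python
# Minimal sketch for replica test cases 2/3: change the desired replica count of a
# Longhorn volume by patching its Volume custom resource (equivalent to updating the
# replica count from the UI). Assumes the `kubernetes` Python client and that Longhorn
# runs in the `longhorn-system` namespace with the longhorn.io/v1beta2 API.
from kubernetes import client, config

def set_replica_count(volume_name: str, count: int) -> None:
    config.load_kube_config()
    api = client.CustomObjectsApi()
    # JSON-patch the spec field; Longhorn then creates (or keeps) replicas to match it.
    api.patch_namespaced_custom_object(
        group="longhorn.io",
        version="v1beta2",
        namespace="longhorn-system",
        plural="volumes",
        name=volume_name,
        body=[{"op": "replace", "path": "/spec/numberOfReplicas", "value": count}],
    )

if __name__ == "__main__":
    set_replica_count("test-vol-1", 4)  # hypothetical volume name; decreasing only lowers the desired count
```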

Snapshot

# Test Case Test Instructions Expected Results
1 Create Snapshot 1. Create a workload using Longhorn volume
2. Write data to the volume, compute its checksum (checksum#1; a checksum sketch follows this table)
3. Create a snapshot (snapshot#1)
* The volume head should have snapshot#1 as its parent
* Snapshot should be created
2 Revert Snapshot 1. Create a deployment workload with nReplicas = 1 using Longhorn volume
2. Write data to the volume, compute its checksum (checksum#1)
3. Write some other data, compute its checksum (checksum#2)
4. Create a snapshot (snapshot#1)
5. Scale down deployment nReplicas = 0
6. Attach volume in maintenance mode
7. Revert to (snapshot#1)
8. Detach volume
9. Scale back deployment nReplicas = 1
10. Compute data checksum (checksum#3)
* The volume head should have snapshot#1 as its parent
* Volume state should be Detached after scaling the deployment down to nReplicas = 0
* In Volume Details, Attached Node should show the node the volume is attached to, without an Endpoint (block device path)
* The data checksum after the revert should match the checksum at snapshot time (checksum#3 == checksum#1)
3 Delete Snapshot 1. Create a workload using Longhorn volume
2. Write data to the volume, compute its checksum (checksum#1)
3. Create a snapshot (snapshot#1)
4. Repeat steps 2–3 two more times to create snapshot#2 and snapshot#3 with different data files (checksum#2, checksum#3)
5. Write data to the volume, compute its checksum (checksum#4) → live data
6. Delete (snapshot#2)
7. Revert to (snapshot#3)
8. Revert to (snapshot#1)
* Snapshot#2 should be deleted; verify this in the replica data directory /var/lib/rancher/longhorn/replicas/
* After reverting to snapshot#3, verify the data.
* After reverting to snapshot#1 data checksum should match (checksum#1)
4 Delete Snapshot while rebuilding replicas 1. Create a workload using Longhorn volume
2. Write data to the volume (1 GB+), compute its checksum (checksum#1)
3. Create a snapshot (snapshot#1)
4. Delete a replica
5. While the replica is rebuilding, try to delete snapshot#1
* A new system snapshot should be created
* It should NOT be possible to delete snapshot#1
5 Snapshot Purge 1. Create a workload using Longhorn volume
2. Write data to the volume (1 GB+), compute its checksum (checksum#1)
3. Create a snapshot (snapshot#1)
4. Delete a replica
5. After rebuild is complete, delete (snapshot#1)
* A new system snapshot should be created
* Snapshot#1 should be deleted
* The snapshot purge process should be triggered
* Only the system snapshot should be present
* It should not be possible to revert to the system snapshot
6 Create recurring snapshots 1. Create a deployment workload with nReplicas = 1 using Longhorn volume
2. Write data to the volume, compute its checksum (checksum#1)
3. Create a recurring snapshot job that runs every 5 minutes and set the retain count to 5
4. Wait for 2 recurring snapshots to be triggered (snapshot#1, snapshot#2)
5. Scale down deployment nReplicas = 0
6. Attach volume in maintenance mode
7. Revert to (snapshot#1)
8. Scale back deployment nReplicas = 1
9. Wait for another recurring snapshot to be triggered (snapshot#3)
10. Delete (snapshot#1)
* Snapshots (snapshot#1, snapshot#2) should be created
* Before deleting snapshot#1, the parent snapshot of snapshot#3 should be snapshot#1
* After deleting snapshot#1, the parent snapshot of snapshot#3 should be the starting point.
* A maximum of 5 snapshots should be retained
* The oldest snapshot should be removed when the number of snapshots created by the recurring job exceeds the retain count
* Only snapshots generated by the recurring job are affected by the retain count; manually created snapshots are not deleted automatically.
8 Disabling/Deleting recurring snapshots 1. Create a deployment workload with nReplicas = 1 using Longhorn volume
2. Write data to the volume, compute its checksum (checksum#1)
3. Create a recurring snapshot job that runs every 5 minutes and set the retain count to 5
4. Wait for 2 recurring snapshots to be triggered (snapshot#1, snapshot#2)
5. Delete the recurring snapshot job
* Recurring snapshots should stop after the job is deleted.
* Existing snapshots should be retained.
* The user should still be able to take snapshots manually.
9 Operation with volume created using Rancher UI 1. Create a PV/PVC in the Rancher UI.
2. Deploy a workload with the PVC created in the Rancher UI.
3. Write data to the volume, compute its checksum (checksum#1)
4. In the Longhorn UI, create a snapshot.
5. Write data to volume again.
6. Revert to snapshot.
7. Delete the snapshot.
* User should be able to create snapshot.
* The user should be able to revert to the created snapshot; verify this by checksum.
* The user should be able to delete the snapshot; verify this in the replica directories on the nodes.
10 Delete the last snapshot 1. Create a workload using Longhorn volume
2. Write data to the volume (more than 4K), compute its checksum (checksum#1)
3. Create a snapshot (snapshot#1)
4. Repeat steps 2–3 two more times to create snapshot#2 and snapshot#3 with different data files (checksum#2, checksum#3)
5. Write data to the volume, compute its checksum (checksum#4) → live data
6. Delete snapshot#3
* Data from the last snapshot should not be lost.
11 Multi branch snapshot 1. Create a workload using Longhorn volume
2. Write data to the volume (more than 4K), compute its checksum (checksum#1)
3. Create a snapshot (snapshot#1)
4. Repeat steps 2–3 two more times to create snapshot#2 and snapshot#3 with different data files (checksum#2, checksum#3)
5. Write data to the volume, compute its checksum (checksum#4) → live data
6. Revert to snapshot#2
7. Write data to the volume, compute its checksum.
8. Repeat steps 2–3 two more times to create snapshot#4 and snapshot#5 with different data files (checksum#5, checksum#6)
9. Revert to snapshot#3
* Verify the data in any of the replicas
12 Backup from a snapshot 1. Create a workload using Longhorn volume
2. Write data to the volume, compute its checksum (checksum#1)
3. Create a snapshot (snapshot#1)
4. Repeat steps 2–3 two more times to create snapshot#2 and snapshot#3 with different data files (checksum#2, checksum#3)
5. Write data to the volume, compute its checksum (checksum#4) → live data
6. Take a backup from snapshot#2.
7. Restore from the backup taken
Verify the data of the restored backup; it should match the data from snapshot#2
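
Many of the cases above repeat the step "write data to the volume, compute its checksum". The following is a minimal sketch of that step, shelling out to kubectl; the pod name, namespace, and mount path are placeholders, and it assumes the workload image provides dd and sha256sum.

```python
# Minimal sketch of the repeated "write data, compute its checksum" step used by the
# snapshot/backup cases. Only assumes kubectl access to the cluster; `pod` and
# `mount_path` are placeholders for the test workload.
import subprocess

def kexec(pod: str, cmd: str, namespace: str = "default") -> str:
    out = subprocess.run(
        ["kubectl", "-n", namespace, "exec", pod, "--", "sh", "-c", cmd],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

def write_and_checksum(pod: str, mount_path: str, size_mb: int = 100) -> str:
    # Write a random data file into the Longhorn-backed mount and sync it to disk.
    kexec(pod, f"dd if=/dev/urandom of={mount_path}/data.bin bs=1M count={size_mb} && sync")
    # sha256sum prints "<hash>  <file>"; keep only the hash for later comparison.
    return kexec(pod, f"sha256sum {mount_path}/data.bin").split()[0]

if __name__ == "__main__":
    checksum1 = write_and_checksum("test-pod", "/data")   # hypothetical pod and mount path
    print("checksum#1:", checksum1)
```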

Volume Expansion

# Test Case Test Instructions Expected Results
1 Volume Online expansion for attached volume

(Not Supported for now)
1. Create 4 volumes, each of size 5 GB, and attach them to nodes
2. Format each volume with one of the following filesystems (ext2/3/4 and XFS)
3. Mount the volumes to directories on the nodes.
4. Check volume size and used space using the df -h command
5. Write a 4 GB data file to each volume
6. For each volume, from the Operation menu, click Expand Volume, set the size to 10 GB, and click OK
7. Check volume size and used space using the df -h command
8. Add another 4 GB data file to each volume
9. Check volume size and used space using the df -h command
10. Check that the volume size has expanded
* Volumes should be expanded to the new size
* Volume read/write operations should work after size expansion
2 Volume Online expansion for volume consumed by Kubernetes workload

→ Kubernetes Version: 1.15
1. Create a 2GB volume used by a Kubernetes workload
2. Expand Volume size to 10 GB
3. In Kubernetes, edit PV/PVC capacity to match new volume size.
4. Check volume size using df -h command from Kubernetes workload
5. Write 8 GB data file to volume, and compute its checksum
* When resizing, a message should indicate that the capacity of the related PV and PVC will not be updated
* Volumes should be expanded to the new size
* Volume read/write operations should work after size expansion
3 Volume Online expansion for volume consumed by Kubernetes workload

→ Kubernetes Version: 1.16+
Prerequisite:

* PVC is dynamically provisioned by the StorageClass.
* Kubernetes is version 1.16+, or the feature gate for volume expansion is enabled.
* The StorageClass supports resizing (allowVolumeExpansion: true is set in the StorageClass)

1. Create a 2 GB volume used by a Kubernetes workload
2. Expand the volume size to 10 GB (a PVC patch sketch follows this table)
3. Check the volume size using the df -h command from the Kubernetes workload
4. Write an 8 GB data file to the volume, and compute its checksum
* When resizing, a message should indicate that the capacity of the related PV and PVC will not be updated
* Volumes should be expanded to the new size
* Volume read/write operations should work after size expansion
4 Volume Offline expansion

Kubernetes Version: < 1.16
1. Create and attach a volume
2. Format and mount the volume. Fill up the volume and get the checksum (checksum#1)
3. Unmount and detach the volume.
4. Expand the volume and wait for the expansion to complete.
5. Reattach and remount the volume. Check the checksum and whether the filesystem is expanded.
6. Fill up the expanded part and get the checksum. (checksum#2)
7. Unmount and detach the volume.
8. Launch a workload for it on a different node and check the data checksum.
* Volume size should be expanded
* Volume read/write operations should work after size expansion
* After expansion, data checksum should match checksum#1
* Final data checksum should match checksum#2
5 Volume expansion with revert and backup 1. Create and attach a volume.
2. Format and mount the volume. Fill up the volume and get the checksum. (checksum#1)
3. Create the 1st snapshot and backup. (snapshot#1 & backup#1)
4. Expand the volume. Fill up the expanded part and get the checksum (checksum#2)
5. Create the 2nd snapshot and backup. (snapshot#2 & backup#2)
6. Check if the backup volume size is expanded.
7. Restore backup#2 to a volume and check its data
8. Clean up then refill the volume. Get the checksum. (checksum#3)
9. Create the 3rd snapshot and backup. (snapshot#3 & backup#3)
10. Revert to the 2nd snapshot. Check the checksum.
11. Revert to the 1st snapshot. Check the checksum and whether the expanded part is still usable.
* Volume should be expanded
* backup#2 size should be expanded and match the volume's new size
* Restored volume data from backup#2 should match checksum#2
* After reverting to snapshot#1, the data checksum should match checksum#1
* After reverting to snapshot#1, the expanded size should still be usable.
* Volume read/write operations should work after expansion and revert.
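
For the Kubernetes 1.16+ online expansion case (test case 3), the resize is requested by growing the PVC. Below is a minimal sketch using the Kubernetes Python client, assuming a dynamically provisioned PVC whose StorageClass has allowVolumeExpansion: true; the PVC name, namespace, and target size are placeholders.

```python
# Minimal sketch for test case 3 (Kubernetes 1.16+ online expansion): grow the PVC by
# patching spec.resources.requests.storage, then wait for the reported capacity.
# Assumes the `kubernetes` Python client; "longhorn-pvc" is a hypothetical PVC name.
import time
from kubernetes import client, config

def expand_pvc(core: client.CoreV1Api, pvc_name: str, new_size: str,
               namespace: str = "default") -> None:
    # Growing the request asks the CSI driver (Longhorn here) to expand the volume.
    patch = {"spec": {"resources": {"requests": {"storage": new_size}}}}
    core.patch_namespaced_persistent_volume_claim(pvc_name, namespace, patch)

def wait_until_expanded(core: client.CoreV1Api, pvc_name: str, new_size: str,
                        namespace: str = "default", timeout_s: int = 300) -> None:
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        pvc = core.read_namespaced_persistent_volume_claim(pvc_name, namespace)
        # status.capacity reflects the size actually served after the resize completes.
        if pvc.status.capacity and pvc.status.capacity.get("storage") == new_size:
            return
        time.sleep(5)
    raise TimeoutError(f"{pvc_name} was not expanded to {new_size}")

if __name__ == "__main__":
    config.load_kube_config()          # or load_incluster_config()
    api = client.CoreV1Api()
    expand_pvc(api, "longhorn-pvc", "10Gi")
    wait_until_expanded(api, "longhorn-pvc", "10Gi")
```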

RWX Volume native support starting with v1.1.0

Prerequisite:

  1. Longhorn is deployed in a cluster with 4 nodes (1 etcd/control plane node and 3 worker nodes)
  2. nfs-common (NFS client packages) is installed on the nodes.

A minimal sketch of the common "attach a volume in RWX mode" step is given after the table below.
# Test Scenario Test Steps
1 Create StatefulSet/Deployment with single pod with volume attached in RWX mode 1. Create a StatefulSet/Deployment with 1 pod.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Verify that a PVC, ShareManager pod, CRD and volume in Longhorn get created.
4. Verify that a directory with the name of the PVC exists in the ShareManager mount point, i.e. the export directory
5. Write some data in the pod and verify the same data is reflected in the ShareManager.
6. Verify the Longhorn volume; it should reflect the correct size.
2 Create StatefulSet/Deployment with more than 1 pod with volume attached in RWX mode. 1. Create a StatefulSet/Deployment with multiple pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Verify that one volume per pod in Longhorn gets created.
4. Verify that a directory with the name of the PVC exists in the ShareManager mount point, i.e. the export directory
5. Verify that the Longhorn UI shows the names of all the pods attached to the volume.
6. Write some data in all the pods and verify all the data is reflected in the ShareManager.
7. Verify the Longhorn volume; it should reflect the correct size.
3 Create StatefulSet/Deployment with the existing PVC of a RWX volume. 1. Create a StatefulSet/Deployment with 1 pod.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Verify that a PVC, ShareManager pod, CRD and volume in Longhorn get created.
4. Write some data in the pod and verify the same data is reflected in the ShareManager.
5. Create another StatefulSet/Deployment using the PVC created above.
6. Write some data in the new pod; the same should be reflected in the ShareManager pod.
7. Verify the Longhorn volume; it should reflect the correct size.
4 Scale up StatefulSet/Deployment with one pod attached with volume in RWX mode. 1. Create a StatefulSet/Deployment with 1 pod.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod and verify the same data is reflected in the ShareManager.
4. Scale up the StatefulSet/Deployment.
5. Verify a new volume gets created.
6. Write some data in the new pod; the same should be reflected in the ShareManager pod.
7. Verify the Longhorn volume; it should reflect the correct size.
5 Scale down StatefulSet/Deployment attached with volume in RWX mode to zero. 1. Create a StatefulSet/Deployment with 1 pod.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod and verify the same data is reflected in the ShareManager.
4. Scale down the StatefulSet/Deployment to zero.
5. Verify the ShareManager pod gets deleted.
6. Verify the volume is in the detached state.
7. Create a new StatefulSet/Deployment with the existing PVC and a different mount point.
8. Verify the ShareManager gets created and the volume becomes attached.
9. Verify the data.
10. Delete the newly created StatefulSet/Deployment.
11. Verify the ShareManager pod gets deleted again.
12. Scale up the first StatefulSet/Deployment.
13. Verify the ShareManager gets created and the volume becomes attached.
14. Verify the data.
6 Delete the Workload StatefulSet/Deployment attached with RWX volume. 1. Create a StatefulSet/Deployment with 1 pod.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod and verify the same data is reflected in the ShareManager.
4. Delete the workload.
5. Verify the ShareManager pod gets deleted but the CRD is not deleted.
6. Verify the volume is in the detached state.
7. Create another StatefulSet with the existing PVC.
8. Verify the ShareManager gets created and the volume becomes attached.
9. Verify the data.
7 Take snapshot and backup of a RWX volume in Longhorn. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Take a snapshot and a backup.
5. Write some more data into the pod.
6. Revert to snapshot 1 and verify the data.
8 Restore a backup taken from a RWX volume in Longhorn. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Take a backup of a RWX volume.
5. Restore from the backup and attach the volume to a pod.
6. Verify the data; the restored volume should be Read Write Once.
9 Create DR volume of a RWX volume in Longhorn. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Take a backup of the volume.
5. Create a DR volume of the backup.
6. Write more data in the pods and take more backups.
7. Verify the DR volume is getting synced with the latest backup.
8. Activate the DR volume and verify the data.
10 Expand the RWX volume. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Expand the volume.
5. Verify that the user is able to write data to the expanded volume.
11 Recurring Backup/Snapshot with RWX volume. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Schedule a recurring backup/Snapshot.
5. Verify the recurring jobs get created and take backups/snapshots successfully.
12 Deletion of the replica of a Longhorn RWX volume. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Delete one of the replicas and verify that replica rebuilding works fine.
13 Parallel writing 1. Write data in multiple pods attached to the same volume at the same time.
14 Data locality with RWX volume. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Enable Data-locality
5. Disable Node soft anti-affinity.
6. Disable the node where the volume is attached for some time.
7. Wait for replica to be rebuilt on another node.
8. Enable the node scheduling and verify a replica gets rebuilt on the attached node.
15 Node eviction with RWX volume. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Do a node eviction and verify the data.
16 Auto salvage feature on an RWX volume. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Crash all the replicas and verify the auto-salvage works fine.
17 RWX volume with Allow Recurring Job While Volume Is Detached enabled. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Set a recurring backup and scale down all the pods.
5. Verify the volume gets attached at the scheduled time and the backup/snapshot gets created.
18 RWX volume with Toleration. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Set some Toleration.
5. Verify the ShareManager pods have the toleration and annotation updated.
19 Detach/Delete operation on an RWX volume. 1. The Detach action in the Longhorn UI should not work on an RWX volume.
2. On deletion of the RWX volume, the ShareManager CRDs should also get deleted.
20 Crash the instance manager of the RWX volume 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Crash the instance manager.
5. After the instance manager crashes, the ShareManager pods should be immediately redeployed.
6. Based on the setting Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly, the workload pods should get redeployed.
7. Once the workload pods are recreated, the volume should get attached successfully.
8. If Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly is disabled, the user should see I/O errors on the mount point.
21 Reboot the ShareManager and workload node 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Reboot the ShareManager node.
5. The ShareManager pod should move to another node.
6. As the instance manager is on the same node, and based on the setting Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly, the workload should be redeployed and the volume should be available to the user.
7. Reboot the workload node.
8. After the node restarts, the pods should get attached to the volume. Verify the data.
22 Power down the ShareManager and workload node. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Power down the ShareManager node.
5. The ShareManager pod should move to another node.
6. As the instance manager is on the same node, and based on the setting Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly, the workload should be redeployed and the volume should be available to the user.
7. Power down the workload node.
8. The workload pods should move to another node based on the Pod Deletion Policy When Node is Down setting.
9. Once the pods are up, they should get attached to the volume. Verify the data.
23 Kill the nfs process in the ShareManager 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Kill the NFS server in the ShareManager pod.
5. The NFS server should retry and come back up.
6. The volume should continue to be accessible.
24 Delete the ShareManager CRD. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Delete the ShareManager CRD.
5. A new ShareManager CRD should be created.
25 Delete the ShareManager pod. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Delete the ShareManager pod.
5. A new ShareManager pod should be immediately created.
26 Drain the ShareManager node. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Drain the ShareManager pod node.
5. The volume should get detached first, then the ShareManager pod should move to another node and the volume should get reattached.
27 Disk full on the ShareManager node. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod and make the disk almost full.
4. Verify the RWX volume does not fail.
5. Verify the creation of a snapshot/backup.
6. Try to write more data; it should error out with "no space left on device".
28 Scheduling failure with RWX volume. 1. Disable 1 node.
2. Create a StatefulSet/Deployment with 2 pods.
3. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
4. Verify the RWX volume gets created in a degraded state.
5. Write some data in the pod.
6. Enable the node and the volume should become healthy.
29 Add a node in the cluster. 1. Add a node in the cluster.
2. Create multiple statefulSet/deployment with RWX volume.
3. Verify that the ShareManager pod is able to be scheduled on the new node.
30 Delete a node from the cluster. 1. Create a StatefulSet/Deployment with 2 pods.
2. Attach a volume with RWX mode using longhorn class and selecting the option read write many.
3. Write some data in the pod.
4. Delete the ShareManager node from the cluster.
5. Verify the ShareManager pod moves to a new node and the volume continues to be accessible.
31 RWX with Linux/SLES OS
32 RWX with a K3s setup
33 RWX in an air-gapped setup.
34 RWX in a PSP-enabled setup.
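
Most scenarios above start by attaching a volume in RWX mode. The sketch below shows one way to do that with the Kubernetes Python client, by creating a ReadWriteMany PVC backed by the `longhorn` StorageClass; the PVC name, namespace, and size are placeholders, and a StatefulSet/Deployment must still mount the PVC for the ShareManager pod to start.

```python
# Minimal sketch of the common "attach a volume in RWX mode" step: create a
# ReadWriteMany PVC on the `longhorn` StorageClass so Longhorn provisions the volume
# (and, once a pod mounts it, the share-manager pod). Assumes the `kubernetes` client.
from kubernetes import client, config

def create_rwx_pvc(name: str = "rwx-test-pvc", size: str = "2Gi",
                   namespace: str = "default") -> None:
    config.load_kube_config()
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteMany"],          # the "read write many" option
            storage_class_name="longhorn",
            resources=client.V1ResourceRequirements(requests={"storage": size}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace, pvc)

if __name__ == "__main__":
    create_rwx_pvc()   # hypothetical name/size; reference this PVC from the workload spec
```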