Module tests.test_csi
Functions
def backupstore_test(client, core_api, csi_pv, pvc, pod_make, pod_name, vol_name, backing_image, test_data)
def create_and_verify_block_volume(client, core_api, storage_class, pvc, pod_manifest, is_rwx)
def create_and_wait_csi_pod(pod_name, client, core_api, csi_pv, pvc, pod_make, backing_image, from_backup)
def create_and_wait_csi_pod_named_pv(pv_name, pod_name, client, core_api, csi_pv, pvc, pod_make, backing_image, from_backup)
def create_pv_storage(api, cli, pv, claim, backing_image, from_backup)
-
Manually create a new PV and PVC for testing.
def csi_backup_test(client, core_api, csi_pv, pvc, pod_make, backing_image='')
def csi_io_test(client, core_api, csi_pv, pvc, pod_make, backing_image='')
def csi_mount_test(client, core_api, csi_pv, pvc, pod_make, volume_size, backing_image='')
def md5sum_thread(pod_name, destination_in_pod)
-
Used by test_csi_block_volume_online_expansion and test_csi_mount_volume_online_expansion.
Use a new API instance inside the thread: while the thread is still running, executing other Kubernetes commands with the main thread's client will fail with the error "Handshake status 200 OK".
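A minimal sketch of the pattern this helper relies on, assuming a kubeconfig-based setup; the pod name, in-pod device path, and result-passing dict are illustrative:

import threading

from kubernetes import client as k8s_client, config as k8s_config
from kubernetes.stream import stream

def md5sum_in_thread(pod_name, destination_in_pod, result):
    # Build a fresh client inside the thread; reusing the main thread's
    # websocket-backed client here is what triggers "Handshake status 200 OK".
    k8s_config.load_kube_config()
    api = k8s_client.CoreV1Api()
    cmd = ['/bin/sh', '-c', 'md5sum %s' % destination_in_pod]
    out = stream(api.connect_get_namespaced_pod_exec, pod_name, 'default',
                 command=cmd, stderr=True, stdin=False, stdout=True, tty=False)
    result['md5sum'] = out.split()[0]

result = {}
t = threading.Thread(target=md5sum_in_thread,
                     args=('expansion-test-pod', '/dev/longhorn/testblk', result))
t.start()
# ... the main thread may keep issuing Kubernetes calls with its own client ...
t.join()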
def test_allow_volume_creation_with_degraded_availability_csi(client, core_api, apps_api, make_deployment_with_pvc)
-
Test Allow Volume Creation with Degraded Availability (CSI)
Requirement:
1. Set allow-volume-creation-with-degraded-availability to true.
2. Set node-level-soft-anti-affinity to false.

Steps:
1. Disable scheduling for node 3.
2. Create a Deployment Pod with a volume and 3 replicas.
   1. After the volume is attached, a scheduling error should be seen.
3. Write data to the Pod.
4. Scale the deployment down to 0 to detach the volume.
   1. The scheduled condition should become true.
5. Scale the deployment back up to 1 and verify the data.
   1. The scheduled condition should become false.
6. Enable scheduling for node 3.
   1. The volume should start rebuilding on node 3 soon.
   2. Once the rebuilding starts, the scheduled condition should become true.
7. Once the rebuild is finished, scale the deployment down and back up to verify the data.
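A sketch of that precondition setup with the Longhorn Python client, following the setting-update pattern used across this suite; note that the second setting is exposed under the ID replica-soft-anti-affinity in current Longhorn releases (an assumption worth checking against your version):

setting = client.by_id_setting('allow-volume-creation-with-degraded-availability')
client.update(setting, value='true')   # allow attach even while degraded

setting = client.by_id_setting('replica-soft-anti-affinity')
client.update(setting, value='false')  # require replicas on distinct nodes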
def test_csi_backup(set_random_backupstore, client, core_api, csi_pv, pvc, pod_make)
-
Test that backup and restore work with volumes created by the CSI driver.
Run the test against all backupstores.
- Create PV/PVC/Pod using a dynamically provisioned volume
- Write data and create snapshot using Longhorn API
- Verify the existence of backup
- Create another Pod using restored backup
- Verify the data in the new Pod
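The Longhorn-API half of the flow can be sketched as follows, assuming the suite's common.find_backup helper and an illustrative volume name:

volume = client.by_id_volume('test-vol')
snap = volume.snapshotCreate()         # snapshot via Longhorn API
volume.snapshotBackup(name=snap.name)  # push it to the backupstore
# poll until the backup is listed, then restore it into a fresh volume
# that backs the second pod:
_, backup = find_backup(client, 'test-vol', snap.name)
client.create_volume(name='test-vol-restore', size=volume.size,
                     fromBackup=backup.url)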
def test_csi_block_volume(client, core_api, storage_class, pvc, pod_manifest)
-
Test CSI feature: raw block volume
- Create a PVC with volumeMode = Block
- Create a pod using the PVC to dynamically provision a volume
- Verify the pod creation
- Generate test_data and write it to the block volume directly in the pod
- Read the data back for validation
- Delete the pod and create pod2 to use the same volume
- Validate the data in pod2 is consistent with test_data
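A sketch of the PVC this test provisions; volumeMode = Block is the key field, and all names and sizes are illustrative:

block_pvc = {
    'apiVersion': 'v1',
    'kind': 'PersistentVolumeClaim',
    'metadata': {'name': 'longhorn-block-vol'},
    'spec': {
        'accessModes': ['ReadWriteOnce'],
        'volumeMode': 'Block',  # raw block device instead of a filesystem
        'storageClassName': 'longhorn',
        'resources': {'requests': {'storage': '1Gi'}},
    },
}
core_api.create_namespaced_persistent_volume_claim(namespace='default',
                                                   body=block_pvc)

The pod then consumes the claim through spec.containers[].volumeDevices (a devicePath) rather than volumeMounts, which is what lets the test write to the device directly.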
def test_csi_block_volume_online_expansion(client, core_api, storage_class, pvc, pod_manifest)
-
Test CSI feature: online expansion for block volume
- Create a new storage_class with allowVolumeExpansion set
- Create a PVC with access mode "block" and a Pod with the new StorageClass
- Use the dd command to copy data into the volume's block device
- During the copy, update pvc.spec.resources to expand the volume
- Verify the volume expansion is done using the Longhorn API, and check the PVC and PV sizes
- Wait for the copy to complete
- Calculate the checksum of the copied data inside the block volume asynchronously
- During the calculation, update pvc.spec.resources to expand the volume again
- Wait for the calculation to complete, then compare the checksums
- Clean up: remove the original test_data as well as the pod and PVC
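The online-expansion step amounts to patching the claim while the dd copy is still running; a sketch with illustrative names and sizes:

expand_patch = {'spec': {'resources': {'requests': {'storage': '2Gi'}}}}
core_api.patch_namespaced_persistent_volume_claim(
    name='longhorn-block-vol', namespace='default', body=expand_patch)
# completion is then observed through the Longhorn API (volume.size) and by
# re-reading the PVC/PV capacity from Kubernetes.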
def test_csi_encrypted_block_volume(client, core_api, storage_class, crypto_secret, pvc, pod_manifest)
-
Test CSI feature: encrypted block volume
- Create a PVC with encrypted volumeMode = Block
- Create a pod using the PVC to dynamically provision a volume
- Verify the pod creation
- Generate test_data and write it to the block volume directly in the pod
- Read the data back for validation
- Delete the pod and create pod2 to use the same volume
- Validate the data in pod2 is consistent with test_data
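The encryption wiring follows Longhorn's volume-encryption documentation: the crypto_secret fixture holds the passphrase, and the StorageClass points the CSI driver at it. A sketch of the parameters, with an illustrative secret name and namespace:

storage_class_params = {
    'encrypted': 'true',
    # CSI secret references tell the driver where to find the passphrase
    # for provisioning, staging, and publishing:
    'csi.storage.k8s.io/provisioner-secret-name': 'longhorn-crypto',
    'csi.storage.k8s.io/provisioner-secret-namespace': 'longhorn-system',
    'csi.storage.k8s.io/node-stage-secret-name': 'longhorn-crypto',
    'csi.storage.k8s.io/node-stage-secret-namespace': 'longhorn-system',
    'csi.storage.k8s.io/node-publish-secret-name': 'longhorn-crypto',
    'csi.storage.k8s.io/node-publish-secret-namespace': 'longhorn-system',
}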
def test_csi_encrypted_migratable_block_volume(client, core_api, storage_class, crypto_secret, pvc, pod_manifest)
-
Test CSI feature: encrypted migratable block volume
Issue: https://github.com/longhorn/longhorn/issues/7678
- Create a PVC with encrypted volumeMode = Block and migratable = true
- Create a pod using the PVC to dynamically provision a volume
- Verify the pod creation
- Generate test_data and write it to the block volume directly in the pod
- Read the data back for validation
- Delete the pod and create pod2 to use the same volume
- Validate the data in pod2 is consistent with test_data
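Relative to the encrypted case above, the migratable variant only adds the migratable parameter and widens the access mode so the volume can be attached to two nodes during a live migration; a sketch building on the previous dicts:

storage_class_params['migratable'] = 'true'
pvc['spec']['accessModes'] = ['ReadWriteMany']  # migration needs RWX
pvc['spec']['volumeMode'] = 'Block'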
def test_csi_expansion_with_replica_failure(client, core_api, storage_class, pvc, pod_manifest)
-
Test expansion success but with one replica expansion failure
- Create a new storage_class with allowVolumeExpansion set
- Create a PVC and Pod with a dynamically provisioned volume from the StorageClass
- Create an empty directory at the expansion snapshot tmp meta file path for one replica so that the replica expansion will fail
- Generate test_data and write it to the pod
- Update pvc.spec.resources to expand the volume
- Check the expansion result using the Longhorn API. There will be an expansion error caused by the failed replica, but overall the expansion should succeed
- Check if the volume will reuse the failed replica during rebuilding
- Validate the volume content, then check that data writing still works
def test_csi_io(client, core_api, csi_pv, pvc, pod_make)
-
Test that input and output on a statically defined CSI volume work as expected.
Note: Fixtures are torn down in the reverse order in which they are specified as parameters. Take caution when reordering test fixtures.
- Create PV/PVC/Pod with a statically provisioned Longhorn volume
- Generate test_data and write it to the volume using the equivalent of kubectl exec
- Delete the Pod
- Create another pod with the same PV
- Check the previously created test_data in the new Pod
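"The equivalent of kubectl exec" here is the Kubernetes exec stream; a sketch using the core_api fixture, with illustrative pod names and mount path:

from kubernetes.stream import stream

test_data = 'longhorn-csi-io-check'  # stand-in for randomly generated data
write_cmd = ['/bin/sh', '-c', 'echo -n "%s" > /data/test' % test_data]
stream(core_api.connect_get_namespaced_pod_exec, 'test-pod', 'default',
       command=write_cmd, stderr=True, stdin=False, stdout=True, tty=False)

read_cmd = ['/bin/sh', '-c', 'cat /data/test']
out = stream(core_api.connect_get_namespaced_pod_exec, 'test-pod-2', 'default',
             command=read_cmd, stderr=True, stdin=False, stdout=True, tty=False)
assert out == test_data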
def test_csi_minimal_volume_size(client, core_api, csi_pv, pvc, pod_make)
-
Test CSI Minimal Volume Size
- Create a PVC requesting a size of 5MiB. Check that the PVC's requested size is 5MiB and the provisioned capacity is 10MiB.
- Remove the PVC.
- Create a PVC requesting a size of 10MiB. Check that the PVC's requested size and provisioned capacity are both 10MiB.
- Create a pod to use this PVC.
- Write some data to the volume and read it back to compare.
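The size checks boil down to comparing what was requested against what was granted; a sketch with an illustrative PVC name:

claim = core_api.read_namespaced_persistent_volume_claim(
    name='small-pvc', namespace='default')
assert claim.spec.resources.requests['storage'] == '5Mi'  # what was asked for
assert claim.status.capacity['storage'] == '10Mi'         # rounded up to the minimum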
def test_csi_mount(client, core_api, csi_pv, pvc, pod_make)
-
Test that a statically defined CSI volume can be created, mounted, unmounted, and deleted properly on the Kubernetes cluster.
Note: Fixtures are torn down in the reverse order in which they are specified as parameters. Take caution when reordering test fixtures.
- Create a PV/PVC/Pod with a pre-created Longhorn volume, using Kubernetes manifests instead of the Longhorn PV/PVC creation API
- Make sure the pod is running
- Verify the volume status
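A sketch of the statically defined PV built for a pre-created Longhorn volume; the volume name is illustrative, while driver.longhorn.io is the Longhorn CSI driver name:

pv = {
    'apiVersion': 'v1',
    'kind': 'PersistentVolume',
    'metadata': {'name': 'existing-longhorn-vol'},
    'spec': {
        'capacity': {'storage': '1Gi'},
        'accessModes': ['ReadWriteOnce'],
        'csi': {
            'driver': 'driver.longhorn.io',
            'volumeHandle': 'existing-longhorn-vol',  # the pre-created volume
            'fsType': 'ext4',
        },
    },
}
core_api.create_persistent_volume(body=pv)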
def test_csi_mount_volume_online_expansion(client, core_api, storage_class, pvc, pod_manifest)
-
Test CSI feature: online expansion for mount volume
- Create a new storage_class with allowVolumeExpansion set
- Create a PVC and Pod with the new StorageClass
- Use the dd command to copy data into the volume mount point
- During the copy, update pvc.spec.resources to expand the volume
- Verify the volume expansion is done using the Longhorn API, and check the PVC and PV sizes
- Wait for the copy to complete
- Calculate the checksum of the copied data inside the volume mount point
- Update pvc.spec.resources to expand the volume and get the data checksum again
- Wait for the calculation to complete, then compare the checksums
- Clean up: remove the original test_data as well as the pod and PVC
def test_csi_offline_expansion(client, core_api, storage_class, pvc, pod_manifest)
-
Test CSI feature: offline expansion
- Create a new storage_class with allowVolumeExpansion set
- Create a PVC and Pod with a dynamically provisioned volume from the StorageClass
- Generate test_data and write it to the pod
- Delete the pod
- Update pvc.spec.resources to expand the volume
- Verify the volume expansion is done using the Longhorn API
- Create a new pod and validate the volume content
def test_restage_volume_if_node_stage_volume_not_called()
-
Test restage volume if NodeStageVolume not called (CSI)
- Create a PVC with spec.volumeMode == Block.
- Create a Deployment with spec.replicas == 1 that uses the PVC. Set a spec.selector on the Deployment so it can only run Pods on one node.
- Hard reboot the node running the Deployment's single Pod.
- Before the node comes back, force delete the "running" Pod.
- Before the node comes back, verify there is now one pending Pod and one terminating Pod.
- After the node comes back, verify that a Pod becomes running and remains running. It is fine if that Pod is not the pending one from above; the automatic remount mechanism may cause some churn.
- Force delete the running Pod again.
- Verify that a Pod becomes running and remains running.
def test_xfs_pv(client, core_api, pod_manifest)
-
Test create PV with new XFS filesystem
- Create a volume
- Create a PV for the existing volume, specifying xfs as the filesystem
- Create a PVC and Pod
- Make sure the Pod is running
- Write data into the pod and read it back for validation
Note: The volume will be formatted with an XFS filesystem by Kubernetes in this case.
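Relative to the static-PV sketch under test_csi_mount, the XFS case only changes the filesystem field; Kubernetes then formats the blank volume on first mount:

pv['spec']['csi']['fsType'] = 'xfs'  # pv dict as in the sketch above
core_api.create_persistent_volume(body=pv)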
def test_xfs_pv_existing_volume(client, core_api, pod_manifest)
-
Test create PV with existing XFS filesystem
- Create a volume
- Create a PV/PVC for the existing volume, specifying xfs as the filesystem
- Attach the volume to the current node
- Format it as xfs
- Create a Pod using the volume
FIXME: We should write data in step 4 and validate it in step 5 to make sure the disk won't be reformatted.
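Pre-formatting can be sketched as follows; an attached Longhorn volume surfaces a block device under /dev/longhorn/, and the command must run on the node the volume is attached to (volume name illustrative):

import subprocess

# format the attached volume before any pod mounts it
subprocess.check_call(['mkfs.xfs', '/dev/longhorn/existing-xfs-vol'])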
def update_storageclass_references(name, pv, claim)
-
Rename all references to a StorageClass to a specified name.