Module tests.test_csi

Functions

def backupstore_test(client, core_api, csi_pv, pvc, pod_make, pod_name, vol_name, backing_image, test_data)
def create_and_verify_block_volume(client, core_api, storage_class, pvc, pod_manifest, is_rwx)
def create_and_wait_csi_pod(pod_name, client, core_api, csi_pv, pvc, pod_make, backing_image, from_backup)
def create_and_wait_csi_pod_named_pv(pv_name, pod_name, client, core_api, csi_pv, pvc, pod_make, backing_image, from_backup)
def create_pv_storage(api, cli, pv, claim, backing_image, from_backup)

Manually create a new PV and PVC for testing.
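
A statically provisioned Longhorn volume is exposed to Kubernetes roughly as follows. This is a minimal sketch using the kubernetes Python client; the volume name, sizes, and namespace are hypothetical placeholders, not the exact manifests this helper builds.

    from kubernetes import client, config

    config.load_kube_config()
    core_api = client.CoreV1Api()

    vol_name = "pre-created-longhorn-volume"  # hypothetical, already exists in Longhorn

    pv = {
        "apiVersion": "v1",
        "kind": "PersistentVolume",
        "metadata": {"name": vol_name},
        "spec": {
            "capacity": {"storage": "1Gi"},
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": "longhorn",
            "csi": {
                "driver": "driver.longhorn.io",  # Longhorn CSI driver name
                "volumeHandle": vol_name,        # must match the Longhorn volume
                "fsType": "ext4",
            },
        },
    }

    pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": vol_name + "-pvc"},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": "1Gi"}},
            "storageClassName": "longhorn",
            "volumeName": vol_name,              # bind directly to the PV above
        },
    }

    core_api.create_persistent_volume(body=pv)
    core_api.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)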

def csi_backup_test(client, core_api, csi_pv, pvc, pod_make, backing_image='')
def csi_io_test(client, core_api, csi_pv, pvc, pod_make, backing_image='')
def csi_mount_test(client, core_api, csi_pv, pvc, pod_make, volume_size, backing_image='')
def md5sum_thread(pod_name, destination_in_pod)

Used by the test cases test_csi_block_volume_online_expansion and test_csi_mount_volume_online_expansion.

Use a new API instance in the thread: while this thread is still running, executing other Kubernetes commands with the main thread's client will fail with the error "Handshake status 200 OK".
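
A minimal sketch of that pattern with the kubernetes Python client; the pod name, namespace, and path are hypothetical. The key point is that the exec stream gets its own ApiClient instead of reusing the main thread's client.

    import threading

    from kubernetes import client as k8s, config
    from kubernetes.stream import stream

    def md5sum_in_thread(pod_name, path_in_pod, results):
        # Build a fresh API client inside the worker thread; sharing the main
        # thread's client while this exec stream is open can trigger the
        # "Handshake status 200 OK" error described above.
        api = k8s.CoreV1Api(api_client=k8s.ApiClient())
        results[pod_name] = stream(
            api.connect_get_namespaced_pod_exec, pod_name, "default",
            command=["/bin/sh", "-c", "md5sum %s" % path_in_pod],
            stderr=True, stdin=False, stdout=True, tty=False)

    config.load_kube_config()
    results = {}
    t = threading.Thread(target=md5sum_in_thread,
                         args=("test-pod", "/data/test_data", results))
    t.start()
    # ... the main thread can keep issuing Kubernetes calls with its own client ...
    t.join()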

def test_allow_volume_creation_with_degraded_availability_csi(client, core_api, apps_api, make_deployment_with_pvc)

Test Allow Volume Creation with Degraded Availability (CSI)

Requirement (see the settings sketch after the steps):

  1. Set allow-volume-creation-with-degraded-availability to true.
  2. Set node-level-soft-anti-affinity to false.

Steps:

  1. Disable scheduling for node 3.
  2. Create a Deployment Pod with a volume and 3 replicas.
    1. After the volume is attached, a scheduling error should be seen.
  3. Write data to the Pod.
  4. Scale the deployment down to 0 to detach the volume.
    1. The Scheduled condition should become true.
  5. Scale the deployment back up to 1 and verify the data.
    1. The Scheduled condition should become false.
  6. Enable scheduling for node 3.
    1. The volume should start rebuilding on node 3 soon.
    2. Once the rebuilding starts, the Scheduled condition should become true.
  7. Once the rebuild is finished, scale the deployment down and back up to verify the data.
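
A minimal sketch of applying the two required settings, assuming the fixture-provided Longhorn Python client and its by_id_setting/update helpers; the setting names are taken from the requirement above.

    def apply_degraded_availability_settings(client):
        # Hypothetical helper: flip the two settings this test depends on.
        for name, value in [
                ("allow-volume-creation-with-degraded-availability", "true"),
                ("node-level-soft-anti-affinity", "false")]:
            setting = client.by_id_setting(name)  # look up the setting object
            client.update(setting, value=value)   # persist the new value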

def test_csi_backup(set_random_backupstore, client, core_api, csi_pv, pvc, pod_make)

Test that backup/restore works with volumes created by the CSI driver.

Run the test for all the backupstores

  1. Create PV/PVC/Pod using a dynamically provisioned volume
  2. Write data and create a snapshot using the Longhorn API
  3. Verify the existence of the backup
  4. Create another Pod using the restored backup (see the StorageClass sketch after this list)
  5. Verify the data in the new Pod
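
Longhorn's CSI driver can provision a new volume directly from a backup through the fromBackup StorageClass parameter, which is one way to realize step 4. A minimal sketch; the class name is hypothetical and the backup URL is a placeholder normally read from the Longhorn API once the backup from step 3 exists.

    from kubernetes import client, config

    config.load_kube_config()

    restore_storage_class = {
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {"name": "longhorn-restore"},
        "provisioner": "driver.longhorn.io",
        "parameters": {
            "numberOfReplicas": "3",
            "staleReplicaTimeout": "2880",
            "fromBackup": "<backup-url>",  # URL of the backup to restore from
        },
    }
    client.StorageV1Api().create_storage_class(body=restore_storage_class)
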
def test_csi_block_volume(client, core_api, storage_class, pvc, pod_manifest)

Test CSI feature: raw block volume

  1. Create a PVC with volumeMode = Block
  2. Create a pod using the PVC to dynamically provision a volume (see the sketch after this list)
  3. Verify the pod creation
  4. Generate test_data and write to the block volume directly in the pod
  5. Read the data back for validation
  6. Delete the pod and create pod2 to use the same volume
  7. Validate the data in pod2 is consistent with test_data
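
A raw block PVC and its Pod differ from the mount case in two places: the PVC sets volumeMode: Block, and the container lists the claim under volumeDevices instead of volumeMounts. A minimal sketch with hypothetical names:

    pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": "longhorn-block-pvc"},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "volumeMode": "Block",  # raw block device, no filesystem
            "storageClassName": "longhorn",
            "resources": {"requests": {"storage": "1Gi"}},
        },
    }

    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "block-pod"},
        "spec": {
            "containers": [{
                "name": "app",
                "image": "busybox",
                "command": ["sleep", "3600"],
                # the volume appears as a device node, not a mounted filesystem
                "volumeDevices": [{"name": "data",
                                   "devicePath": "/dev/longhorn/testblk"}],
            }],
            "volumes": [{
                "name": "data",
                "persistentVolumeClaim": {"claimName": "longhorn-block-pvc"},
            }],
        },
    }
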
def test_csi_block_volume_online_expansion(client, core_api, storage_class, pvc, pod_manifest)

Test CSI feature: online expansion for block volume

  1. Create a new storage_class with allowVolumeExpansion set
  2. Create PVC with volumeMode "Block" and a Pod with the new StorageClass
  3. Use the dd command to copy data into the volume block device.
  4. During the copy, update pvc.spec.resources to expand the volume (see the sketch after this list)
  5. Verify the volume expansion is done using the Longhorn API, and check the PVC & PV sizes
  6. Wait for the copy to complete.
  7. Calculate the checksum of the copied data inside the block volume asynchronously.
  8. During the calculation, update pvc.spec.resources to expand the volume again.
  9. Wait for the calculation to complete, then compare the checksums.
  10. Do cleanup: remove the original test_data as well as the pod and PVC.
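
Steps 4 and 8 come down to patching the claim's requested size while the workload keeps writing. A minimal sketch with the kubernetes Python client; the claim name, namespace, and sizes are hypothetical.

    from kubernetes import client, config

    config.load_kube_config()
    core_api = client.CoreV1Api()

    # Online expansion: bump spec.resources.requests.storage on the live PVC.
    patch = {"spec": {"resources": {"requests": {"storage": "4Gi"}}}}
    core_api.patch_namespaced_persistent_volume_claim(
        name="longhorn-block-pvc", namespace="default", body=patch)

    # Afterwards, check that the new size shows up on the PVC and its bound PV.
    pvc = core_api.read_namespaced_persistent_volume_claim(
        name="longhorn-block-pvc", namespace="default")
    pv = core_api.read_persistent_volume(pvc.spec.volume_name)
    print(pvc.status.capacity["storage"], pv.spec.capacity["storage"])
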
def test_csi_encrypted_block_volume(client, core_api, storage_class, crypto_secret, pvc, pod_manifest)

Test CSI feature: encrypted block volume

  1. Create a PVC with encrypted volumeMode = Block (see the Secret and StorageClass sketch after this list)
  2. Create a pod using the PVC to dynamically provision a volume
  3. Verify the pod creation
  4. Generate test_data and write to the block volume directly in the pod
  5. Read the data back for validation
  6. Delete the pod and create pod2 to use the same volume
  7. Validate the data in pod2 is consistent with test_data
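
Encryption is driven by a Kubernetes Secret holding the passphrase plus a StorageClass that sets encrypted: "true" and points the CSI secret parameters at that Secret. A minimal sketch following the pattern in the Longhorn documentation; names and the passphrase are placeholders.

    crypto_secret = {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": "longhorn-crypto", "namespace": "longhorn-system"},
        "stringData": {
            "CRYPTO_KEY_VALUE": "example-passphrase",  # placeholder passphrase
            "CRYPTO_KEY_PROVIDER": "secret",
        },
    }

    encrypted_storage_class = {
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {"name": "longhorn-crypto"},
        "provisioner": "driver.longhorn.io",
        "parameters": {
            "encrypted": "true",
            # tell the CSI driver where to find the passphrase
            "csi.storage.k8s.io/provisioner-secret-name": "longhorn-crypto",
            "csi.storage.k8s.io/provisioner-secret-namespace": "longhorn-system",
            "csi.storage.k8s.io/node-publish-secret-name": "longhorn-crypto",
            "csi.storage.k8s.io/node-publish-secret-namespace": "longhorn-system",
            "csi.storage.k8s.io/node-stage-secret-name": "longhorn-crypto",
            "csi.storage.k8s.io/node-stage-secret-namespace": "longhorn-system",
        },
    }
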
def test_csi_encrypted_migratable_block_volume(client, core_api, storage_class, crypto_secret, pvc, pod_manifest)

Test CSI feature: encrypted migratable block volume

Issue: https://github.com/longhorn/longhorn/issues/7678

  1. Create a PVC with encrypted volumeMode = Block and migratable = true (see the sketch after this list)
  2. Create a pod using the PVC to dynamically provision a volume
  3. Verify the pod creation
  4. Generate test_data and write to the block volume directly in the pod
  5. Read the data back for validation
  6. Delete the pod and create pod2 to use the same volume
  7. Validate the data in pod2 is consistent with test_data
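
The migratable variant only adds the migratable: "true" StorageClass parameter and uses ReadWriteMany with volumeMode Block, since a migratable volume must be attachable to a second node during the handover. A minimal sketch of the deltas, building on the encrypted StorageClass sketched above; the claim name is hypothetical.

    # Extra StorageClass parameter for live-migratable volumes.
    encrypted_storage_class["parameters"]["migratable"] = "true"

    migratable_pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": "longhorn-migratable-block-pvc"},
        "spec": {
            "accessModes": ["ReadWriteMany"],  # required for migratable volumes
            "volumeMode": "Block",
            "storageClassName": "longhorn-crypto",
            "resources": {"requests": {"storage": "1Gi"}},
        },
    }
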
def test_csi_expansion_with_replica_failure(client, core_api, storage_class, pvc, pod_manifest)

Test that volume expansion succeeds even when one replica fails to expand

  1. Create a new storage_class with allowVolumeExpansion set
  2. Create PVC and Pod with dynamic provisioned volume from the StorageClass
  3. Create an empty directory at the expansion snapshot temporary metadata file path for one replica so that the replica expansion will fail
  4. Generate test_data and write it to the pod
  5. Update pvc.spec.resources to expand the volume
  6. Check the expansion result using the Longhorn API. There will be an expansion error caused by the failed replica, but the overall expansion should succeed.
  7. Check whether the volume reuses the failed replica during rebuilding.
  8. Validate the volume content, then check that data writing still works

def test_csi_io(client, core_api, csi_pv, pvc, pod_make)

Test that input and output on a statically defined CSI volume works as expected.

Note: Fixtures are torn down in the reverse order in which they are specified as parameters. Take caution when reordering test fixtures.

  1. Create PV/PVC/Pod with a pre-created Longhorn volume
  2. Generate test_data and write it to the volume using the equivalent of kubectl exec (sketched after this list)
  3. Delete the Pod
  4. Create another pod with the same PV
  5. Check the previously created test_data in the new Pod
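
The "equivalent of kubectl exec" in step 2 is the kubernetes stream API. A minimal sketch of writing test data into the pod and reading it back; the pod name, namespace, and path are hypothetical.

    from kubernetes import client, config
    from kubernetes.stream import stream

    config.load_kube_config()
    core_api = client.CoreV1Api()

    def pod_exec(pod_name, shell_cmd, namespace="default"):
        # Run a shell command inside the pod and return its stdout.
        return stream(core_api.connect_get_namespaced_pod_exec, pod_name, namespace,
                      command=["/bin/sh", "-c", shell_cmd],
                      stderr=True, stdin=False, stdout=True, tty=False)

    test_data = "hello-longhorn"
    pod_exec("csi-io-pod", "echo -n '%s' > /data/test_data" % test_data)
    assert pod_exec("csi-io-pod", "cat /data/test_data") == test_data
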
def test_csi_minimal_volume_size(client, core_api, csi_pv, pvc, pod_make)

Test CSI Minimal Volume Size

  1. Create a PVC requesting size 5MiB. Check that the PVC requested size is 5MiB and the reported capacity is 10MiB (see the sketch after this list).
  2. Remove the PVC.
  3. Create a PVC requesting size 10MiB. Check that the PVC requested size and the reported capacity are both 10MiB.
  4. Create a pod to use this PVC.
  5. Write some data to the volume and read it back to compare.
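
The rounding in steps 1 and 3 can be read straight off the claim: spec.resources.requests holds what was asked for, while status.capacity holds what was actually provisioned. A minimal sketch with a hypothetical claim name.

    from kubernetes import client, config

    config.load_kube_config()
    core_api = client.CoreV1Api()

    pvc = core_api.read_namespaced_persistent_volume_claim(
        name="small-pvc", namespace="default")
    print(pvc.spec.resources.requests["storage"])  # 5Mi, as requested
    print(pvc.status.capacity["storage"])          # 10Mi, the provisioned minimum
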
def test_csi_mount(client, core_api, csi_pv, pvc, pod_make)

Test that a statically defined CSI volume can be created, mounted, unmounted, and deleted properly on the Kubernetes cluster.

Note: Fixtures are torn down in the reverse order in which they are specified as parameters. Take caution when reordering test fixtures.

  1. Create a PV/PVC/Pod with a pre-created Longhorn volume
    1. Use a Kubernetes manifest instead of the Longhorn PV/PVC creation API
  2. Make sure the pod is running
  3. Verify the volume status
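
Step 3 is typically done through the Longhorn API client provided by the test fixtures; a minimal sketch, assuming the client exposes by_id_volume and that the state and robustness fields are populated once the pod is running.

    # `client` is the Longhorn API client fixture, `vol_name` the pre-created volume.
    volume = client.by_id_volume(vol_name)
    assert volume.state == "attached"      # attached and mounted by the CSI plugin
    assert volume.robustness == "healthy"  # all replicas in sync
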
def test_csi_mount_volume_online_expansion(client, core_api, storage_class, pvc, pod_manifest)

Test CSI feature: online expansion for mount volume

  1. Create a new storage_class with allowVolumeExpansion set
  2. Create PVC and Pod with the new StorageClass
  3. Use the dd command to copy data into the volume mount point.
  4. During the copy, update pvc.spec.resources to expand the volume
  5. Verify the volume expansion is done using the Longhorn API, and check the PVC & PV sizes
  6. Wait for the copy to complete.
  7. Calculate the checksum of the copied data inside the volume mount point.
  8. Update pvc.spec.resources to expand the volume and get the data checksum again.
  9. Wait for the calculation to complete, then compare the checksums.
  10. Do cleanup: remove the original test_data as well as the pod and PVC.

def test_csi_offline_expansion(client, core_api, storage_class, pvc, pod_manifest)

Test CSI feature: offline expansion

  1. Create a new storage_class with allowVolumeExpansion set
  2. Create PVC and Pod with dynamic provisioned volume from the StorageClass
  3. Generate test_data and write to the pod
  4. Delete the pod
  5. Update pvc.spec.resources to expand the volume
  6. Verify the volume expansion is done using the Longhorn API
  7. Create a new pod and validate the volume content

def test_restage_volume_if_node_stage_volume_not_called()

Test restage volume if NodeStageVolume not called (CSI)

  1. Create a PVC with spec.volumeMode == Block.
  2. Create a Deployment with spec.replicas == 1 that uses the PVC. Set a node selector on the Deployment's Pod template so it can only run Pods on one node (see the sketch after this list).
  3. Hard reboot the node running the Deployment's single Pod.
  4. Before the node comes back, force delete the "running" Pod.
  5. Before the node comes back, verify there is now one pending Pod and one terminating Pod.
  6. After the node comes back, verify that a Pod becomes running and remains running. It is fine if that Pod is not the pending one from above. The automatic remount mechanism may cause some churn.
  7. Force delete the running Pod again.
  8. Verify that a Pod becomes running and remains running.
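
Two details of this flow are easy to get wrong: pinning the Deployment's Pods to a single node (step 2) and force deleting a Pod whose node is down (steps 4 and 7). A minimal sketch of both with the kubernetes Python client; the deployment, pod, and node names are hypothetical.

    from kubernetes import client, config

    config.load_kube_config()
    core_api = client.CoreV1Api()
    apps_api = client.AppsV1Api()

    # Step 2: pin the Deployment's Pods to one node via the Pod template.
    node_pin_patch = {"spec": {"template": {"spec": {
        "nodeSelector": {"kubernetes.io/hostname": "worker-1"}}}}}
    apps_api.patch_namespaced_deployment(
        name="restage-test", namespace="default", body=node_pin_patch)

    # Steps 4 and 7: force delete a Pod stuck on a down node (grace period 0).
    core_api.delete_namespaced_pod(
        name="restage-test-pod", namespace="default", grace_period_seconds=0)
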
def test_xfs_pv(client, core_api, pod_manifest)

Test create PV with new XFS filesystem

  1. Create a volume
  2. Create a PV for the existing volume, specifying xfs as the filesystem (sketched below)
  3. Create PVC and Pod
  4. Make sure the Pod is running.
  5. Write data into the pod and read back for validation.

Note: The volume will be formatted to XFS filesystem by Kubernetes in this case.
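
The filesystem choice from step 2 is carried in the PV's CSI section; a minimal sketch of the relevant fragment, with a hypothetical volume name.

    xfs_pv = {
        "apiVersion": "v1",
        "kind": "PersistentVolume",
        "metadata": {"name": "xfs-vol"},
        "spec": {
            "capacity": {"storage": "1Gi"},
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": "longhorn",
            "csi": {
                "driver": "driver.longhorn.io",
                "volumeHandle": "xfs-vol",  # name of the existing Longhorn volume
                "fsType": "xfs",            # Kubernetes formats the volume as XFS
            },
        },
    }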

def test_xfs_pv_existing_volume(client, core_api, pod_manifest)

Test create PV with existing XFS filesystem

  1. Create a volume
  2. Create PV/PVC for the existing volume, specifying xfs as the filesystem
  3. Attach the volume to the current node.
  4. Format it to xfs (sketched below)
  5. Create a Pod using the volume

FIXME: We should write data in step 4 and validate it in step 5 to make sure the disk won't be reformatted.
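
Step 4 pre-formats the attached block device from the test host before any Pod uses it; a minimal sketch, assuming the volume is attached to the local node and exposed by Longhorn under /dev/longhorn/<volume-name>.

    import subprocess

    dev_path = "/dev/longhorn/xfs-existing-vol"  # device of the attached volume
    # -f forces formatting even if a filesystem signature is already present.
    subprocess.check_call(["mkfs.xfs", "-f", dev_path])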

def update_storageclass_references(name, pv, claim)

Rename all references to a StorageClass to a specified name.
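
A minimal sketch of what such a rename amounts to, assuming pv and claim are the manifest dictionaries used by the fixtures.

    def update_storageclass_references(name, pv, claim):
        # Point both the PV and the PVC at the same StorageClass name so
        # they still bind to each other after the rename.
        pv["spec"]["storageClassName"] = name
        claim["spec"]["storageClassName"] = name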