Module tests.test_backing_image
Functions
def backing_image_basic_operation_test(client, volume_name, bi_name, bi_url)
-
Test Backing Image APIs.
- Create a backing image.
- Create and attach a Volume with the backing image set.
- Verify that all disk states in the backing image are "downloaded".
- Try to use the API to manually clean up one disk for the backing image; this should fail.
- Try to use the API to directly delete the backing image; this should fail.
- Delete the volume.
- Use the API to manually clean up one disk for the backing image.
- Delete the backing image.
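A minimal sketch of the flow above, following the Longhorn Python client conventions used by this suite; the exact call and field names (create_backing_image_with_url, diskFileStatusMap, backingImageCleanup) are assumptions rather than confirmed signatures.

    import pytest
    from common import get_self_host_id, wait_for_volume_healthy  # assumed helpers from this suite

    def backing_image_basic_operation_sketch(client, volume_name, bi_name, bi_url):
        # Create the backing image and a volume that uses it (assumed calls).
        client.create_backing_image_with_url(name=bi_name, url=bi_url)
        client.create_volume(name=volume_name, size="1Gi",
                             numberOfReplicas=3, backingImage=bi_name)
        volume = client.by_id_volume(volume_name)
        volume.attach(hostId=get_self_host_id())
        volume = wait_for_volume_healthy(client, volume_name)

        # While the volume still uses the image, per-disk cleanup and deletion must fail.
        bi = client.by_id_backing_image(bi_name)
        some_disk = list(bi.diskFileStatusMap.keys())[0]   # assumed field name
        with pytest.raises(Exception):
            bi.backingImageCleanup(disks=[some_disk])      # assumed action name
        with pytest.raises(Exception):
            client.delete(bi)

        # After the volume is gone, both operations are expected to succeed.
        client.delete(volume)
        bi = client.by_id_backing_image(bi_name)
        bi.backingImageCleanup(disks=[some_disk])
        client.delete(bi)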
def backing_image_cleanup(core_api, client)
def backing_image_content_test(client, volume_name_prefix, bi_name, bi_url)
-
Verify the content of the Backing Image is accessible and read-only for all volumes.
- Create a backing image. (Done by the caller)
- Create a Volume with the backing image set then attach it to host node.
- Verify that all disk states in the backing image are "downloaded".
- Verify volume can be directly mounted and there is already data in the filesystem due to the backing image.
- Verify the volume r/w.
- Launch one more volume with the same backing image.
- Verify the data content of the new volume is the same as the data in step 4 (the data provided by the backing image).
- Do cleanup. (Done by the caller)
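The mount-and-compare portion could look like the sketch below; the device path, mount point, and the file checked inside the backing image are illustrative assumptions. Comparing the value returned for the first and the second volume (before any writes to the second one) shows both read the same backing image content.

    import hashlib
    import os
    import subprocess

    def checksum_backing_file(dev_path, mount_point, rel_path="guests/data"):
        # The filesystem already exists inside the backing image, so mount the
        # block device directly without mkfs, then hash a file the image provides.
        subprocess.check_call(["mount", dev_path, mount_point])
        try:
            with open(os.path.join(mount_point, rel_path), "rb") as f:
                return hashlib.md5(f.read()).hexdigest()
        finally:
            subprocess.check_call(["umount", mount_point])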
def test_backing_image_auto_resync(bi_url, client, volume_name)
-
- Create a backing image.
- Create and attach a 3-replica volume using the backing image.
- Wait for the attachment complete.
- Manually remove the backing image on the current node.
- Wait for the file state on the disk of this node to become failed.
- Wait for the file to be recovered automatically.
- Validate the volume.
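A hedged sketch of the two wait steps; the diskFileStatusMap attribute and the one-second polling interval are assumptions about the client object and timing.

    import time

    def wait_backing_image_disk_state(client, bi_name, disk_id, expected, timeout=300):
        # Poll the backing image until the file on the given disk reaches the
        # expected state ("failed" first, then "ready" once auto-resync finishes).
        for _ in range(timeout):
            bi = client.by_id_backing_image(bi_name)
            status = bi.diskFileStatusMap.get(disk_id)   # assumed field name
            if status is not None and status.state == expected:
                return bi
            time.sleep(1)
        raise AssertionError(
            f"backing image {bi_name} never reached state {expected} on disk {disk_id}")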
def test_backing_image_basic_operation(client, volume_name)
def test_backing_image_cleanup(core_api, client)
-
- Create multiple backing images.
- Create and attach multiple 3-replica volumes using those backing images.
- Wait for the attachment complete.
- Delete the volumes then the backing images.
- Verify all backing image manager pods will be terminated when the last backing image is gone.
- Repeat steps 1 to 5 multiple times. Make sure each iteration uses the same backing image names.
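The final verification can be driven through the Kubernetes client passed in as core_api; the label selector and namespace below are assumptions based on Longhorn's defaults.

    import time

    def wait_backing_image_managers_gone(core_api, timeout=300):
        # core_api is expected to be a kubernetes.client.CoreV1Api instance.
        selector = "longhorn.io/component=backing-image-manager"  # assumed label
        for _ in range(timeout):
            pods = core_api.list_namespaced_pod("longhorn-system",
                                                label_selector=selector)
            if not pods.items:
                return
            time.sleep(1)
        raise AssertionError("backing image manager pods were not cleaned up")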
def test_backing_image_content(client, volume_name)
def test_backing_image_disk_eviction(client)
-
Test Disk Eviction
- Create a BackingImage with one copy.
- Evict the disk the copy currently resides on.
- The BackingImage copy will be evicted to another node, because every node has only one disk.
- Check that the disk is not the same as before.
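Requesting the eviction might look like the sketch below; the node lookup, the diskUpdate action, and the disk field names are assumptions about the client API.

    def request_disk_eviction(client, node_name, disk_name):
        # Flag the disk holding the only BackingImage copy for eviction;
        # Longhorn should then rebuild the copy on a disk of another node.
        node = client.by_id_node(node_name)
        disks = node.disks
        disks[disk_name].allowScheduling = False
        disks[disk_name].evictionRequested = True
        node.diskUpdate(disks=disks)   # assumed action name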
def test_backing_image_min_number_of_replicas(client)
-
Test the backing image minNumberOfCopies
Given
- Create a BackingImage.
- Update minNumberOfCopies to 2.
When
- The BackingImage is prepared and transferred to the manager.
Then
- There will be two ready files of the BackingImage.
Given
- Set the setting backing-image-cleanup-wait-interval to 1 minute.
When
- Update minNumberOfCopies to 1.
- Wait for 1 minute.
Then
- There will be one ready BackingImage file left.
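A sketch of driving these steps; the updateMinNumberOfCopies action and the setting id are assumptions about the API surface.

    def set_min_copies(client, bi_name, copies, cleanup_wait_minutes=1):
        # Shorten the cleanup wait so the surplus copy is removed within the test window.
        setting = client.by_id_setting("backing-image-cleanup-wait-interval")  # assumed id
        client.update(setting, value=str(cleanup_wait_minutes))

        bi = client.by_id_backing_image(bi_name)
        bi.updateMinNumberOfCopies(minNumberOfCopies=copies)   # assumed action name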
def test_backing_image_node_eviction(client)
-
Test Node Eviction
- Create a BackingImage with one copy.
- Evict the node the copy is on.
- The BackingImage copy will be evicted to another node.
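Node-level eviction differs from the per-disk flow above only in where the request is set; the update fields below mirror Longhorn's node eviction flow and are an assumption.

    def request_node_eviction(client, node_name):
        # Disable scheduling and request eviction on the whole node; the single
        # BackingImage copy should then be rebuilt on another node.
        node = client.by_id_node(node_name)
        client.update(node, allowScheduling=False, evictionRequested=True)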
def test_backing_image_selector_setting(client, volume_name)
-
- Set node1 with nodeTag = [node1], diskTag = [node1]
- Set node2 with nodeTag = [node2], diskTag = [node2]
- Create a BackingImage with
- minNumberOfCopies = 2
- nodeSelector = [node1]
- diskSelector = [node1]
- After creation, the first BackingImage copy will be on node1, disk1
- Wait for a while; the second copy will never show up
- Create the Volume with
- numberOfReplicas = 1
- nodeSelector = [node2]
- diskSelector = [node2]
- The volume condition Scheduled will be false
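A sketch of the selector mismatch; passing selectors to create_backing_image_with_url and reading conditions as a dict are assumptions (a real test would poll rather than assert immediately).

    def selector_mismatch_sketch(client, bi_name, bi_url, volume_name):
        client.create_backing_image_with_url(
            name=bi_name, url=bi_url, minNumberOfCopies=2,
            nodeSelector=["node1"], diskSelector=["node1"])     # assumed parameters

        client.create_volume(
            name=volume_name, size="1Gi", numberOfReplicas=1, backingImage=bi_name,
            nodeSelector=["node2"], diskSelector=["node2"])

        # The replica may only go to node2/disk2, but the image copy is pinned to
        # node1/disk1 by its selectors, so scheduling is expected to stay false.
        volume = client.by_id_volume(volume_name)
        assert volume.conditions["Scheduled"].status == "False"  # assumed shape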
def test_backing_image_unable_eviction(client)
-
- Set node1 nodeTag = [node1], diskTag = [node1]
- Create a BackingImage with the following settings to place the copy on node1
- minNumberOfCopies = 1
- nodeSelector = [node1]
- diskSelector = [node1]
- Evict node1
- The copy would not be deleted because it is the only copy
- The copy can't be copied to other nodes because of the selector settings.
def test_backing_image_with_disk_migration()
-
- Update settings:
- Disable Node Soft Anti-affinity.
- Set Replica Replenishment Wait Interval to a relatively long value.
- Create a new host disk.
- Disable the default disk and add the extra disk with scheduling enabled for the current node.
- Create a backing image.
- Create and attach a 2-replica volume with the backing image set. Then verify:
- there is a replica scheduled to the new disk.
- there are 2 entries in the backing image disk file status map, and both are in state "ready".
- Directly mount the volume (without making a filesystem) to a directory. Then verify the content of the backing image by checking the existence of the directory <Mount point>/guests/.
- Write random data to the mount point then verify the data.
- Unmount the host disk. Then verify:
- The replica in the host disk will be failed.
- The disk state in the backing image will become "unknown".
- Remount the host disk to another path. Then create another Longhorn disk based on the migrated path (disk migration).
- Verify the following:
- The disk added in step 3 (before the migration) should be "unschedulable".
- The disk added in step 9 (after the migration) should become "schedulable".
- The failed replica will be reused, and the replica DiskID as well as the disk path is updated.
- The 2-replica volume r/w works fine.
- The disk state in the backing image will become "ready" again.
- Do cleanup.
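The remount portion of the migration (steps 8 and 9) could look like the sketch below; the device, paths, disk entry name, and the node disk update call are all illustrative assumptions.

    import subprocess

    def remount_host_disk(client, node_name, dev, old_path, new_path):
        # Simulate the disk migration: the same filesystem reappears under a new path.
        subprocess.check_call(["umount", old_path])
        subprocess.check_call(["mkdir", "-p", new_path])
        subprocess.check_call(["mount", dev, new_path])

        # Register a Longhorn disk at the migrated path; the old entry is kept so
        # it can be observed turning "unschedulable". Field names are assumptions.
        node = client.by_id_node(node_name)
        disks = node.disks
        disks["migrated-disk"] = {"path": new_path, "allowScheduling": True}
        node.diskUpdate(disks=disks)   # assumed action name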
def test_backing_image_with_wrong_md5sum(bi_url, client)
def test_backup_labels_with_backing_image(set_random_backupstore, client, random_labels, volume_name)
def test_backup_with_backing_image(set_random_backupstore, client, volume_name, volume_size)
def test_csi_backup_with_backing_image(set_random_backupstore, client, core_api, csi_pv, pvc, pod_make)
def test_csi_io_with_backing_image(client, core_api, csi_pv_backingimage, pvc_backingimage, pod_make)
def test_csi_mount_with_backing_image(client, core_api, csi_pv_backingimage, pvc_backingimage, pod_make)
def test_engine_live_upgrade_rollback_with_backing_image(client, core_api, volume_name)
def test_engine_live_upgrade_with_backing_image(client, core_api, volume_name)
def test_engine_offline_upgrade_with_backing_image(client, core_api, volume_name)
def test_exporting_backing_image_from_volume(client, volume_name)
-
- Create and attach the 1st volume.
- Make a filesystem for the 1st volume.
- Export this volume as the 1st backing image via the backing image creation HTTP API, with the export type set to qcow2.
- Create and attach the 2nd volume which uses the 1st backing image.
- Make sure the 2nd volume can be directly mounted.
- Write random data to the mount point then get the checksum.
- Unmount and detach the 2nd volume.
- Export the 2nd volume as the 2nd backing image. Remember to set the export type to qcow2.
- Create and attach the 3rd volume which uses the 2nd backing image.
- Directly mount the 3rd volume. Then verify the data in the 3rd volume is the same as that of the 2nd volume.
- Do cleanup.
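Exporting a volume as a backing image maps onto the BackingImage source fields (export-from-volume with an export type of qcow2); the create_backing_image call shape below is an assumption about this suite's client.

    def export_volume_to_backing_image(client, bi_name, src_volume_name):
        # The source parameters mirror the BackingImage spec: the volume to export
        # from and the image format to produce.
        return client.create_backing_image(          # assumed call name
            name=bi_name,
            sourceType="export-from-volume",
            parameters={"volume-name": src_volume_name, "export-type": "qcow2"},
        )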
def test_ha_backup_deletion_recovery(set_random_backupstore, client, volume_name)
def test_ha_salvage_with_backing_image(client, core_api, disable_auto_salvage, volume_name)
def test_ha_simple_recovery_with_backing_image(client, volume_name)
def test_recurring_job_labels_with_backing_image(set_random_backupstore, client, random_labels, volume_name)
def test_snapshot_prune_and_coalesce_simultaneously_with_backing_image(client, volume_name)
def test_snapshot_prune_with_backing_image(client, volume_name)
def test_snapshot_with_backing_image(client, volume_name)
def test_volume_basic_with_backing_image(client, volume_name)
def test_volume_iscsi_basic_with_backing_image(client, volume_name)
def test_volume_wait_for_backing_image_condition(client)
-
Test the volume condition "WaitForBackingImage"
Given
- Create a BackingImage.
When
- Create the Volume with the BackingImage while the BackingImage is still in progress.
Then
- The Volume condition "WaitForBackingImage" will first be True, then change to False once the BackingImage is ready and all the replicas are in running state.
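Observing the flip could be done with a small poll like the one below; how conditions are exposed on the volume object (a dict keyed by condition name) is an assumption.

    import time

    def wait_condition_flip(client, volume_name, timeout=300):
        saw_true = False
        for _ in range(timeout):
            volume = client.by_id_volume(volume_name)
            cond = volume.conditions.get("WaitForBackingImage")   # assumed shape
            if cond is not None and cond.status == "True":
                saw_true = True
            if saw_true and cond is not None and cond.status == "False":
                return
            time.sleep(1)
        raise AssertionError("WaitForBackingImage never flipped from True to False")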