Module tests.test_backing_image

Functions

def backing_image_basic_operation_test(client, volume_name, bi_name, bi_url)

Test Backing Image APIs.

  1. Create a backing image.
  2. Create and attach a Volume with the backing image set.
  3. Verify that all disk states in the backing image are "downloaded".
  4. Try to use the API to manually clean up one disk for the backing image and verify the call fails.
  5. Try to use the API to directly delete the backing image and verify the call fails.
  6. Delete the volume.
  7. Use the API to manually clean up one disk for the backing image
  8. Delete the backing image.
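
A condensed sketch of this flow, assuming the suite's common helpers (create_backing_image_with_matching_url, wait_for_volume_healthy, wait_for_volume_delete, get_self_host_id) and the Longhorn Python client; the diskStateMap field and the backingImageCleanup action name vary across Longhorn versions and are assumptions here:

    import pytest
    from common import (create_backing_image_with_matching_url, get_self_host_id,
                        wait_for_volume_delete, wait_for_volume_healthy)

    def _basic_operation_sketch(client, volume_name, bi_name, bi_url):
        # Steps 1-2: create the backing image and a volume that uses it.
        create_backing_image_with_matching_url(client, bi_name, bi_url)
        volume = client.create_volume(name=volume_name,
                                      size=str(1024 ** 3),  # bytes, as the suite passes sizes
                                      numberOfReplicas=3, backingImage=bi_name)
        volume.attach(hostId=get_self_host_id())
        volume = wait_for_volume_healthy(client, volume_name)

        # Step 3: every entry in the per-disk state map should be "downloaded".
        bi = client.by_id_backing_image(bi_name)
        assert all(state == "downloaded" for state in bi.diskStateMap.values())

        # Steps 4-5: cleanup and deletion are rejected while the image is in use.
        disk_id = next(iter(bi.diskStateMap))
        with pytest.raises(Exception):
            bi.backingImageCleanup(disks=[disk_id])  # action name is an assumption
        with pytest.raises(Exception):
            client.delete(bi)

        # Steps 6-8: once the volume is gone, both operations succeed.
        client.delete(volume)
        wait_for_volume_delete(client, volume_name)
        bi = client.by_id_backing_image(bi_name)
        bi.backingImageCleanup(disks=[disk_id])
        client.delete(bi)
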
def backing_image_cleanup(core_api, client)
def backing_image_content_test(client, volume_name_prefix, bi_name, bi_url)

Verify the content of the Backing Image is accessible and read-only for all volumes.

  1. Create a backing image. (Done by the caller)
  2. Create a Volume with the backing image set then attach it to host node.
  3. Verify that all disk states in the backing image are "downloaded".
  4. Verify the volume can be directly mounted and that the filesystem already contains data due to the backing image.
  5. Verify the volume r/w.
  6. Launch one more volume with the same backing image.
  7. Verify the data content of the new volume is the same as the data in step 4.
  8. Do cleanup. (Done by the caller)
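
A sketch of the mount-and-compare portion (the r/w check in step 5 is omitted), assuming the /dev/longhorn/<volume> device path and that the test image ships a guests/ directory, as referenced by the disk migration test below:

    import os
    import subprocess
    from common import get_self_host_id, wait_for_volume_healthy

    def _content_sketch(client, volume_name_prefix, bi_name):
        host_id, content = get_self_host_id(), {}
        for i in range(2):
            name = "%s-%d" % (volume_name_prefix, i)
            client.create_volume(name=name, size=str(1024 ** 3),
                                 numberOfReplicas=3, backingImage=bi_name)
            client.by_id_volume(name).attach(hostId=host_id)
            wait_for_volume_healthy(client, name)

            # Step 4: the backing image already carries a filesystem, so the
            # block device mounts directly; no mkfs is needed.
            dev, mnt = "/dev/longhorn/" + name, "/mnt/" + name
            os.makedirs(mnt, exist_ok=True)
            subprocess.check_call(["mount", dev, mnt])
            content[name] = sorted(os.listdir(os.path.join(mnt, "guests")))
            subprocess.check_call(["umount", mnt])

        # Step 7: both volumes expose identical backing image content.
        assert len({tuple(v) for v in content.values()}) == 1
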
def test_backing_image_auto_resync(bi_url, client, volume_name)
  1. Create a backing image.
  2. Create and attach a 3-replica volume using the backing image.
  3. Wait for the attachment to complete.
  4. Manually remove the backing image file on the current node.
  5. Wait for the file state on this node's disk to become "failed".
  6. Wait for the file to be recovered automatically.
  7. Validate the volume.
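
A sketch of the failure-and-recovery wait; the on-disk image path and the diskStateMap field are version-dependent assumptions, and the real suite uses dedicated common wait helpers instead of this inline poll:

    import subprocess
    import time

    def _wait_for_file_state(client, bi_name, expected, timeout=300):
        for _ in range(timeout):
            bi = client.by_id_backing_image(bi_name)
            if expected in bi.diskStateMap.values():
                return
            time.sleep(1)
        raise AssertionError("backing image never reached state " + expected)

    def _auto_resync_sketch(client, bi_name):
        # Step 4: remove the image file on this node out-of-band; the path is
        # illustrative, the real test derives it from disk and image UUIDs.
        subprocess.check_call(
            ["rm", "-rf", "/var/lib/longhorn/backing-images/" + bi_name])
        # Steps 5-6: the file is first marked failed, then re-synced
        # automatically from a healthy copy on another node.
        _wait_for_file_state(client, bi_name, "failed")
        _wait_for_file_state(client, bi_name, "ready")
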
def test_backing_image_basic_operation(client, volume_name)
def test_backing_image_cleanup(core_api, client)
  1. Create multiple backing images.
  2. Create and attach multiple 3-replica volumes using those backing images.
  3. Wait for the attachments to complete.
  4. Delete the volumes, then the backing images.
  5. Verify that all backing image manager pods are terminated once the last backing image is gone.
  6. Repeat steps 1 to 5 multiple times, making sure each round reuses the same backing image names.
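
The step 5 check, sketched against a kubernetes.client.CoreV1Api (the core_api fixture); the backing-image-manager pod name prefix and the longhorn-system namespace match Longhorn defaults:

    def _assert_managers_terminated(core_api, namespace="longhorn-system"):
        # Once the last backing image is gone, no backing image manager pods
        # should survive in the Longhorn namespace.
        pods = core_api.list_namespaced_pod(namespace).items
        leftovers = [p.metadata.name for p in pods
                     if p.metadata.name.startswith("backing-image-manager")]
        assert not leftovers, "manager pods still running: %s" % leftovers
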
def test_backing_image_content(client, volume_name)
def test_backing_image_disk_eviction(client)

Test Disk Eviction

  • Create a BackingImage with one copy.
  • Evict the disk where the copy currently resides.
  • The BackingImage copy will be evicted to another node, because every node has only one disk.
  • Check that the copy's disk is not the same as before.
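
A sketch of the eviction trigger and the final check; diskUpdate mirrors how this suite toggles disk flags, while the diskStateMap field name is a version-dependent assumption:

    import time

    def _evict_disk_sketch(client, bi_name, timeout=300):
        bi = client.by_id_backing_image(bi_name)
        src_disk = next(iter(bi.diskStateMap))  # UUID of the disk holding the copy

        # Request eviction on the disk that currently holds the only copy.
        for node in client.list_node():
            if any(d.diskUUID == src_disk for d in node.disks.values()):
                disks = dict(node.disks)
                for d in disks.values():
                    if d.diskUUID == src_disk:
                        d.evictionRequested = True
                node.diskUpdate(disks=disks)

        # Wait until the copy has landed on a different disk.
        for _ in range(timeout):
            bi = client.by_id_backing_image(bi_name)
            if bi.diskStateMap and src_disk not in bi.diskStateMap:
                return
            time.sleep(1)
        raise AssertionError("copy was not evicted to another disk")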

def test_backing_image_min_number_of_replicas(client)

Test the backing image minNumberOfCopies

Given
  • Create a BackingImage.
  • Update its minNumberOfCopies to 2.

When
  • The BackingImage is prepared and transferred to the manager.

Then
  • There will be two ready files of the BackingImage.

Given
  • Set the setting backing-image-cleanup-wait-interval to 1 minute.

When
  • Update minNumberOfCopies to 1.
  • Wait for 1 minute.

Then
  • There will be one ready BackingImage file left.
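
A sketch of the scale-up/scale-down flow; the updateMinNumberOfCopies action and the diskStateMap field are assumptions modeled on the client's other update actions, while backing-image-cleanup-wait-interval is the real setting name:

    import time

    def _wait_for_ready_copies(client, bi_name, count, timeout=300):
        for _ in range(timeout):
            bi = client.by_id_backing_image(bi_name)
            if sum(1 for s in bi.diskStateMap.values() if s == "ready") == count:
                return
            time.sleep(1)
        raise AssertionError("expected %d ready copies" % count)

    def _min_copies_sketch(client, bi_name):
        # Scale up to two copies and wait for both files to become ready.
        bi = client.by_id_backing_image(bi_name)
        bi.updateMinNumberOfCopies(minNumberOfCopies=2)
        _wait_for_ready_copies(client, bi_name, 2)

        # Shorten the cleanup wait so the surplus copy is reclaimed quickly.
        setting = client.by_id_setting("backing-image-cleanup-wait-interval")
        client.update(setting, value="1")  # minutes

        bi = client.by_id_backing_image(bi_name)
        bi.updateMinNumberOfCopies(minNumberOfCopies=1)
        _wait_for_ready_copies(client, bi_name, 1)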

def test_backing_image_node_eviction(client)

Test Node Eviction

  • Create a BackingImage with one copy.
  • Evict the node where the copy resides.
  • The BackingImage copy will be evicted to another node.
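
Node eviction differs from the disk variant above only in the object being flagged; the node-update kwargs mirror how the suite flips scheduling flags and may differ per client version:

    def _evict_node_sketch(client, bi_name):
        bi = client.by_id_backing_image(bi_name)
        src_disk = next(iter(bi.diskStateMap))  # UUID of the disk holding the copy

        # Flag the owning node for eviction and stop scheduling onto it.
        for node in client.list_node():
            if any(d.diskUUID == src_disk for d in node.disks.values()):
                client.update(node, allowScheduling=False, evictionRequested=True)

        # Then wait for the copy to turn ready on another node's disk,
        # exactly as in the disk eviction sketch above.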

def test_backing_image_selector_setting(client, volume_name)
  • Set node1 with nodeTag = [node1], diskTag = [node1]
  • Set node2 with nodeTag = [node2], diskTag = [node2]
  • Create a BackingImage with
    • minNumberOfCopies = 2
    • nodeSelector = [node1]
    • diskSelector = [node1]
  • After creation, the first BackingImage copy will be on node1, disk1
  • Wait for a while and verify that the second copy never shows up.
  • Create the Volume with
    • numberOfReplicas = 1
    • nodeSelector = [node2]
    • diskSelector = [node2]
  • The volume condition Scheduled will be false
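
A sketch of the creation calls and the scheduling assertion (tagging the nodes is omitted); the backing image selector kwargs follow the creation API these tests exercise, and wait_for_volume_condition_scheduled is the suite's common helper:

    from common import wait_for_volume_condition_scheduled

    def _selector_sketch(client, bi_name, bi_url, volume_name):
        # The image tolerates only node1/disk1, so the second of the two
        # requested copies can never be placed anywhere.
        client.create_backing_image(
            name=bi_name, sourceType="download", parameters={"url": bi_url},
            minNumberOfCopies=2, nodeSelector=["node1"], diskSelector=["node1"])

        # The volume is pinned to node2, whose disk can never host the image.
        client.create_volume(
            name=volume_name, size=str(1024 ** 3), numberOfReplicas=1,
            backingImage=bi_name, nodeSelector=["node2"], diskSelector=["node2"])

        # Scheduling must fail: the replica needs a disk that can hold the
        # backing image, and the selectors rule every candidate out.
        wait_for_volume_condition_scheduled(client, volume_name, "status", "False")
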
def test_backing_image_unable_eviction(client)
  • Set node1 nodeTag = [node1], diskTag = [node1]
  • Create a BackingImage with the following settings to place the copy on node1
    • minNumberOfCopies = 1
    • nodeSelector = [node1]
    • diskSelector = [node1]
  • Evict node1
  • The copy would not be deleted because it is the only copy
  • The copy can't be copied to other nodes because of the selector settings.
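
A sketch of the survival check; the node-update kwargs and the diskStateMap field are the same assumptions as in the eviction sketches above:

    import time

    def _unable_eviction_sketch(client, bi_name, node1_id, wait=120):
        # Request eviction of node1, which holds the only, selector-pinned copy.
        node = client.by_id_node(node1_id)
        client.update(node, allowScheduling=False, evictionRequested=True)

        # The copy must survive: it is the last one, and the selectors forbid
        # rebuilding it on any other node or disk.
        time.sleep(wait)
        bi = client.by_id_backing_image(bi_name)
        assert list(bi.diskStateMap.values()) == ["ready"]
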
def test_backing_image_with_disk_migration()
  1. Update settings:
     1. Disable Node Soft Anti-affinity.
     2. Set Replica Replenishment Wait Interval to a relatively long value.
  2. Create a new host disk.
  3. Disable the default disk and add the extra disk with scheduling enabled for the current node.
  4. Create a backing image.
  5. Create and attach a 2-replica volume with the backing image set. Then verify:
     1. there is a replica scheduled to the new disk.
     2. there are 2 entries in the backing image disk file status map, and both are in state "ready".
  6. Directly mount the volume (without making a filesystem) to a directory. Then verify the content of the backing image by checking the existence of the directory <Mount point>/guests/.
  7. Write random data to the mount point then verify the data.
  8. Unmount the host disk. Then verify:
     1. The replica in the host disk will be failed.
     2. The disk state in the backing image will become "unknown".
  9. Remount the host disk to another path. Then create another Longhorn disk based on the migrated path (disk migration).
  10. Verify the following.
      1. The disk added in step 3 (before the migration) should be "unschedulable".
      2. The disk added in step 9 (after the migration) should become "schedulable".
      3. The failed replica will be reused, and both its DiskID and disk path are updated.
      4. The 2-replica volume r/w works fine.
      5. The disk state in the backing image will become "ready" again.
  11. Do cleanup.
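
A sketch of the migration itself (steps 8-9); the device and paths are illustrative, and the disk-entry shape passed to diskUpdate is an assumption. Longhorn recognizes the longhorn-disk.cfg written at the disk root, which is why the failed replica and the image file are adopted instead of rebuilt:

    import subprocess

    def _migrate_disk_sketch(client, node_name, dev, old_path, new_path):
        # Step 8: unmount the host disk; step 9: remount the same
        # filesystem at a new location.
        subprocess.check_call(["umount", old_path])
        subprocess.check_call(["mkdir", "-p", new_path])
        subprocess.check_call(["mount", dev, new_path])

        # Register one more Longhorn disk at the migrated path. The existing
        # diskUUID on the filesystem lets Longhorn adopt the old replica and
        # backing image file rather than rebuild them.
        node = client.by_id_node(node_name)
        disks = dict(node.disks)
        disks["migrated-disk"] = {"path": new_path, "allowScheduling": True}
        node.diskUpdate(disks=disks)
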
def test_backing_image_with_wrong_md5sum(bi_url, client)
def test_backup_labels_with_backing_image(set_random_backupstore, client, random_labels, volume_name)
def test_backup_with_backing_image(set_random_backupstore, client, volume_name, volume_size)
def test_csi_backup_with_backing_image(set_random_backupstore, client, core_api, csi_pv, pvc, pod_make)
def test_csi_io_with_backing_image(client, core_api, csi_pv_backingimage, pvc_backingimage, pod_make)
def test_csi_mount_with_backing_image(client, core_api, csi_pv_backingimage, pvc_backingimage, pod_make)
def test_engine_live_upgrade_rollback_with_backing_image(client, core_api, volume_name)
def test_engine_live_upgrade_with_backing_image(client, core_api, volume_name)
def test_engine_offline_upgrade_with_backing_image(client, core_api, volume_name)
def test_exporting_backing_image_from_volume(client, volume_name)
  1. Create and attach the 1st volume.
  2. Make a filesystem for the 1st volume.
  3. Export this volume to the 1st backing image via the backing image creation HTTP API, setting the export type to qcow2.
  4. Create and attach the 2nd volume which uses the 1st backing image.
  5. Make sure the 2nd volume can be directly mounted.
  6. Write random data to the mount point then get the checksum.
  7. Unmount and detach the 2nd volume.
  8. Export the 2nd volume as the 2nd backing image. Remember to set the export type to qcow2.
  9. Create and attach the 3rd volume which uses the 2nd backing image.
  10. Directly mount the 3rd volume. Then verify the data in the 3rd volume is the same as that of the 2nd volume.
  11. Do cleanup.
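
The export call at the heart of steps 3 and 8, sketched against the backing image creation API; the sourceType and parameter keys are assumptions modeled on the download variant used above:

    def _export_sketch(client, src_volume_name, bi_name):
        # Export an existing volume as a qcow2 backing image.
        client.create_backing_image(
            name=bi_name,
            sourceType="export-from-volume",
            parameters={
                "volume-name": src_volume_name,
                "export-type": "qcow2",
            })
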
def test_ha_backup_deletion_recovery(set_random_backupstore, client, volume_name)
def test_ha_salvage_with_backing_image(client, core_api, disable_auto_salvage, volume_name)
def test_ha_simple_recovery_with_backing_image(client, volume_name)
def test_recurring_job_labels_with_backing_image(set_random_backupstore, client, random_labels, volume_name)
def test_snapshot_prune_and_coalesce_simultaneously_with_backing_image(client, volume_name)
def test_snapshot_prune_with_backing_image(client, volume_name)
def test_snapshot_with_backing_image(client, volume_name)
def test_volume_basic_with_backing_image(client, volume_name)
def test_volume_iscsi_basic_with_backing_image(client, volume_name)
def test_volume_wait_for_backing_image_condition(client)

Test the volume condition "WaitForBackingImage"

Given
  • Create a BackingImage.

When
  • Create the Volume with the BackingImage while the image preparation is still in progress.

Then
  • The Volume condition "WaitForBackingImage" will first be True, then change to False once the BackingImage is ready and all the replicas are in running state.
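
A sketch of the condition check; the layout of volume.conditions is an assumption about the client object:

    import time

    def _wait_condition_flip(client, volume_name, timeout=300):
        # Expect "WaitForBackingImage" to be True first, then flip to False.
        saw_waiting = False
        for _ in range(timeout):
            conditions = dict(client.by_id_volume(volume_name).conditions)
            cond = conditions.get("WaitForBackingImage")
            if cond and cond.status == "True":
                saw_waiting = True
            elif saw_waiting and cond and cond.status == "False":
                return
            time.sleep(1)
        raise AssertionError("condition never transitioned True -> False")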