1. Backup

Automation Tests

# Test name Description tag
1 test_backup Test basic backup

Setup:

1. Create a volume and attach to the current node
2. Run the test for all the available backupstores.

Steps:

1. Create a backup of volume
2. Restore the backup to a new volume
3. Attach the new volume and make sure the data is the same as the old one
4. Detach the volume and delete the backup.
5. Wait for the restored volume’s lastBackup to be cleaned (since the backup was removed)
6. Delete the volume
Backup
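The backup/restore/delete cycle above can be sketched with a minimal in-memory stand-in (hypothetical helper names, not the Longhorn API):

```python
import hashlib

class FakeBackupstore:
    """Toy in-memory backupstore used only to illustrate the semantics under test."""
    def __init__(self):
        self.backups = {}  # backup name -> frozen copy of volume data

    def create_backup(self, name, volume_data):
        self.backups[name] = bytes(volume_data)  # snapshot the data at backup time
        return name

    def restore(self, name):
        return bytearray(self.backups[name])  # restored volume contents

    def delete_backup(self, name):
        del self.backups[name]

def checksum(data):
    return hashlib.sha256(bytes(data)).hexdigest()

store = FakeBackupstore()
volume = bytearray(b"some volume data")
b1 = store.create_backup("backup-1", volume)

restored = store.restore(b1)
assert checksum(restored) == checksum(volume)  # step 3: data matches the old volume

store.delete_backup(b1)
assert "backup-1" not in store.backups         # step 4: backup is gone from the store
```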
2 test_backup_labels Test that the proper Labels are applied when creating a Backup manually.

1. Create a volume
2. Run the following steps on all backupstores
3. Create a backup with some random labels
4. Get backup from backupstore, verify the labels are set on the backups
Backup
3 test_deleting_backup_volume Test deleting backup volumes

1. Create volume and create backup
2. Delete the backup and make sure it’s gone in the backupstore
Backup
4 test_listing_backup_volume Test listing backup volumes

1. Create three volumes: volume1/2/3
2. Setup NFS backupstore since we can manipulate the content easily
3. Create snapshots for all three volumes
4. Rename volume1’s volume.cfg to volume.cfg.tmp in backupstore
5. List backup volumes. Make sure volume1 errors out but the other two are found
6. Restore volume1’s volume.cfg.
7. Make sure backup volume volume1 can now be found and deleted
8. Delete backups for volume2/3, make sure they cannot be found later
Backup
5 test_ha_backup_deletion_recovery [HA] Test deleting the restored snapshot and rebuild

Backupstore: all

1. Create volume and attach it to the current node.
2. Write data to the volume and create snapshot snap2
3. Backup snap2 to create a backup.
4. Create volume res_volume from the backup. Check volume data.
5. Check snapshot chain, make sure backup_snapshot exists.
6. Delete the backup_snapshot and purge snapshots.
7. After purge complete, delete one replica to verify rebuild works.
Backup
6 test_backup_kubernetes_status Test that Backups have KubernetesStatus stored properly when there is an associated PersistentVolumeClaim and Pod.

1. Setup a random backupstore
2. Set the setting `Longhorn Static StorageClass` to longhorn-static-test
3. Create a volume and PV/PVC. Verify the StorageClass of PVC
4. Create a Pod using the PVC.
5. Check volume’s Kubernetes status to reflect PV/PVC/Pod correctly.
6. Create a backup for the volume.
7. Verify the labels of created backup reflect PV/PVC/Pod status.
8. Restore the backup to a volume. Wait for restoration to complete.
9. Check the volume’s Kubernetes Status
1. Make sure the lastPodRefAt and lastPVCRefAt are the snapshot creation time

10. Delete the backup and restored volume.
11. Delete PV/PVC/Pod.
12. Verify volume’s Kubernetes Status updated to reflect history data.
13. Attach the volume and create another backup. Verify the labels
14. Verify the volume’s Kubernetes status.
15. Restore the previous backup to a new volume. Wait for restoration.
16. Verify the restored volume’s Kubernetes status.
1. Make sure lastPodRefAt and lastPVCRefAt match those of the volume in step 12
Backup
7 test_restore_inc Test restore from disaster recovery volume (incremental restore)

Run test against all the backupstores

1. Create a volume and attach to the current node
2. Generate data0, write to the volume, make a backup backup0
3. Create three DR(standby) volumes from the backup: sb_volume0/1/2
4. Wait for all three DR volumes to finish the initial restoration
5. Verify DR volumes’ lastBackup is backup0
6. Verify that snapshot creation, PV/PVC creation, and backup target changes are not allowed as long as the DR volume exists
7. Activate standby sb_volume0 and attach it to check the volume data
8. Generate data1 and write to the original volume and create backup1
9. Make sure sb_volume1’s lastBackup field has been updated to backup1
10. Wait for sb_volume1 to finish incremental restoration then activate
11. Attach and check sb_volume1’s data
12. Generate data2 and write to the original volume and create backup2
13. Make sure sb_volume2’s lastBackup field has been updated to backup2
14. Wait for sb_volume2 to finish incremental restoration then activate
15. Attach and check sb_volume2’s data
16. Create PV, PVC and Pod to use sb_volume2, check PV/PVC/Pod are good
Backup: Disaster Recovery
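The incremental-restore bookkeeping these steps verify (lastBackup vs. the last restored backup) can be modeled with a toy polling loop; the names here are illustrative, not the Longhorn API:

```python
class DRVolume:
    """Toy model of a DR (standby) volume's restore loop (hypothetical)."""
    def __init__(self):
        self.last_backup = None    # newest backup seen in the backupstore
        self.last_restored = None  # backup whose data has been restored locally
        self.data = b""

    def poll(self, backupstore):
        """On each poll, pick up the newest backup and restore it."""
        newest, data = backupstore[-1]
        self.last_backup = newest
        # incremental restore happens here; modeled as a full copy
        self.data = data
        self.last_restored = newest

store = [("backup0", b"data0")]
dr = DRVolume()
dr.poll(store)
assert dr.last_restored == "backup0"  # initial restoration finished

store.append(("backup1", b"data0data1"))  # new backup of the original volume
dr.poll(store)
assert dr.last_backup == dr.last_restored == "backup1"
assert dr.data == b"data0data1"
```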
8 test_recurring_job Test recurring job

1. Setup a random backupstore
2. Create a volume.
3. Create two jobs
1. job 1: snapshot every one minute, retain 2
2. job 2: backup every two minutes, retain 1
4. Attach the volume.
5. Sleep for 5 minutes
6. Verify we have 4 snapshots total
1. 2 snapshots, 1 backup, 1 volume-head

7. Update jobs to replace the backup job
1. New backup job run every one minute, retain 2

8. Sleep for 5 minutes.
9. We should have 6 snapshots
1. 2 from job_snap, 1 from job_backup, 2 from job_backup2, 1 volume-head

10. Make sure we have no more than 5 backups.
1. The old backup job may have at most 1 backup

2. The new backup job may have at most 3 backups

11. Make sure we have no more than 2 backups in progress
Backup: Recurring Job
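The retain-count behavior checked in steps 9-11 can be sketched as a simple pruning rule (a hypothetical helper, not Longhorn’s implementation):

```python
def prune_backups(backups, retain):
    """Keep only the newest `retain` backups.
    `backups` is a list of (created_at, name) tuples; returns (kept, deleted)."""
    ordered = sorted(backups)  # oldest first
    if retain >= len(ordered):
        return ordered, []
    return ordered[-retain:], ordered[:-retain]

# Four backups exist; a job with retain=2 must delete the two oldest.
backups = [(1, "b1"), (2, "b2"), (3, "b3"), (4, "b4")]
kept, deleted = prune_backups(backups, retain=2)
assert [n for _, n in kept] == ["b3", "b4"]
assert [n for _, n in deleted] == ["b1", "b2"]
```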
9 test_recurring_job_in_storageclass Test create volume with StorageClass contains recurring jobs

1. Create a StorageClass with recurring jobs
2. Create a StatefulSet with PVC template and StorageClass
3. Verify the recurring jobs run correctly.
Backup: Recurring Job

Kubernetes
10 test_recurring_job_in_volume_creation Test create volume with recurring jobs

1. Create volume with recurring jobs through the Longhorn API
2. Verify the recurring jobs run correctly
Backup: Recurring Job
11 test_recurring_job_kubernetes_status Test RecurringJob properly backs up the KubernetesStatus

1. Setup a random backupstore.
2. Create a volume.
3. Create a PV from the volume, and verify the PV status.
4. Create a backup recurring job to run every 2 minutes.
5. Verify the recurring job runs correctly.
6. Verify the backup contains the Kubernetes Status labels
Backup: Recurring Job

Volume: Kubernetes Status
12 test_recurring_job_labels Test a RecurringJob with labels

1. Set a random backupstore
2. Create a backup recurring job with labels
3. Verify the recurring job runs correctly.
4. Verify the labels on the backup are correct
Backup: Recurring Job
13 test_recurring_jobs_maximum_retain Test recurring jobs’ maximum retain

1. Create two jobs, with retain 30 and 21.
2. Try to apply the jobs to a volume. It should fail (the combined retain count exceeds the allowed maximum).
3. Reduce retain to 30 and 20.
4. Now the jobs can be applied to the volume
Backup: Recurring Job

Backup create operations test cases

# Test Case Test Instructions Expected Results
1 Create backup from existing snapshot Prerequisite:

* Backup target is set to NFS server, or S3 compatible target.

1. Create a workload using Longhorn volume
2. Write data to volume, compute its checksum (checksum#1)
3. Create a snapshot (snapshot#1)
4. Create a backup from (snapshot#1)
5. Restore backup to a different volume
6. Attach volume to a node, check its data, and compute its checksum
* Backup should be created
* Restored volume data checksum should match (checksum#1)
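A chunked SHA-256 helper like the following can serve as the checksum computation the test cases refer to (an illustrative sketch, not part of the test suite):

```python
import hashlib
import tempfile

def file_checksum(path, chunk_size=1 << 20):
    """SHA-256 of a file, read in chunks so large volumes need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Identical content must produce identical checksums (the checksum#1 comparison).
with tempfile.NamedTemporaryFile(delete=False) as a, \
     tempfile.NamedTemporaryFile(delete=False) as b:
    a.write(b"volume data" * 1000)
    b.write(b"volume data" * 1000)

assert file_checksum(a.name) == file_checksum(b.name)
```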
2 Create volume backup for a volume attached to a node Prerequisite:

* Backup target is set to NFS server, or S3 compatible target.

1. Create a volume, attach it to a node
2. Format volume using ext4/xfs filesystem and mount it to a directory on the node
3. Write data to volume, compute its checksum (checksum#1)
4. Create a backup
5. Restore backup to a different volume
6. Attach volume to a node, check its data, and compute its checksum
7. Check volume backup labels
* Backup should be created
* Restored volume data checksum should match (checksum#1)
* Backup should have no backup labels
3 Create volume backup used by Kubernetes workload Prerequisite:

* Backup target is set to NFS server, or S3 compatible target.

1. Create a deployment workload with nReplicas = 1 using Longhorn volume
2. Write data to volume, compute its checksum (checksum#1)
3. Create a backup
4. Check backup labels
5. Scale down deployment nReplicas = 0
6. Delete Longhorn volume
7. Restore backup to a volume with the same deleted volume name
8. Scale back deployment nReplicas = 1
9. Check volume data checksum
* Backup labels should contain the following information about the workload that was using the volume at the time of backup.
* Namespace

* PV Name

* PVC Name

* PV Status

* Workloads Status

* Pod Name

* Workload Name

* Workload Type

* Pod Status

* After volume restore, data checksum should match (checksum#1)
4 Create volume backup with customized labels Prerequisite:

* Backup target is set to NFS server, or S3 compatible target.

1. Create a volume, attach it to a node
2. Create a backup, add customized labels
key: K1 value: V1
3. Check volume backup labels
* Backup should be created with customized labels
5 Create recurring backups 1. Create a deployment workload with nReplicas = 1 using Longhorn volume
2. Write data to volume, compute its checksum (checksum#1)
3. Create a recurring backup job every 5 minutes, and set retain count to 5
4. Add customized labels key: K1 value: V1
5. Wait for recurring backups to be triggered (backup#1, backup#2)
6. Scale down deployment nReplicas = 0
7. Delete the volume.
8. Restore backup to a volume with the same deleted volume name
9. Scale back deployment nReplicas = 1
10. Check volume data checksum
* Backups should be created with Kubernetes status labels and customized labels
* After volume restore, data checksum should match (checksum#1)
* After restoring the backup, recurring backups should continue to be created
6 Backup created using Longhorn behind proxy Prerequisite:

* Setup a Proxy on an instance (Optional: use squid)
* Create a single node cluster in EC2
* Deploy Longhorn

1. Block outgoing traffic except for the proxy instance.
2. Create AWS secret in longhorn.
3. In UI Settings page, set backupstore target and backupstore credential secret
4. Create a volume, attach it to a node, format the volume, and mount it to a directory.
5. Write some data to the volume, and create a backup.
* Ensure backup is created
7 Backup created in a backup store that supports Virtual-Hosted-Style 1. Create an OSS bucket in Alibaba Cloud (Aliyun)
2. Create a secret without VIRTUAL_HOSTED_STYLE for the OSS bucket.
3. Set backup target and the secret in Longhorn UI.
8 Backup created in a backup store that supports both Virtual-Hosted-Style and path-style 1. Create an S3 bucket in AWS.
2. Create a secret without VIRTUAL_HOSTED_STYLE for the S3 bucket.
3. Set backup target and the secret in Longhorn UI.
4. Verify backup list/create/delete/restore work fine without the configuration.
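The difference between the two S3 addressing styles exercised by cases 7 and 8 can be illustrated as follows (bucket and key names are made up):

```python
def s3_object_url(bucket, key, region="us-east-1", virtual_hosted=True):
    """Build an S3 object URL in either addressing style.
    Virtual-hosted-style puts the bucket in the hostname;
    path-style puts it in the URL path."""
    if virtual_hosted:
        return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"
    return f"https://s3.{region}.amazonaws.com/{bucket}/{key}"

assert s3_object_url("backupbucket", "backupstore/volume.cfg") == \
    "https://backupbucket.s3.us-east-1.amazonaws.com/backupstore/volume.cfg"
assert s3_object_url("backupbucket", "backupstore/volume.cfg", virtual_hosted=False) == \
    "https://s3.us-east-1.amazonaws.com/backupbucket/backupstore/volume.cfg"
```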

Backup restore operations test cases

# Test Case Test Instructions Expected Results
1 Filter backups using volume name Prerequisite:

* One or more backups are created for multiple volumes.

1. Filter backups by volume name
* Backups should be filtered using full/partial volume names
2 Restore last backup with different name Prerequisite:

* Create a Volume, attach it to a node, write some data (300MB+), compute its checksum and create a backup (repeat 3 times).
* Volume now has multiple backups (backup#1, backup#2, backup#3) respectively.

1. Restore the latest volume backup using a different name than its original
2. After restore complete, attach the volume to a node, and check data checksum
* New Volume should be created and attached to a node in maintenance mode
* Restore process should be triggered restoring latest backup content to the volume
* After restore is completed, volume is detached from the node
* data checksum should match data checksum for (backup#3)
3 Restore a specific backup with a different name Prerequisite:

* Create a Volume, attach it to a node, write some data (300MB+), compute its checksum and create a backup (repeat 3 times).
* Volume now has multiple backups (backup#1, backup#2, backup#3) respectively.

1. Restore the second backup (backup#2) using a different name than its original
2. After restore complete, attach the volume to a node, and check data checksum
* New Volume should be created and attached to a node in maintenance mode
* Restore process should be triggered, restoring the selected backup (backup#2) content to the volume
* After restore is completed, volume is detached from the node
* data checksum should match data checksum for (backup#2)
4 Volume backup URL Prerequisite:

* One or more backups are created for multiple volumes.

1. Get backup URL
* Backup URL should point to a link to backup on configured backupstore
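A backup URL of the common `?backup=...&volume=...` form can be split into its parts like this (the URL shown is a made-up example, assuming that query-parameter layout):

```python
from urllib.parse import urlparse, parse_qs

def parse_backup_url(url):
    """Split a backup URL into the backupstore target and the backup/volume names.
    Assumes the `?backup=...&volume=...` query form; all names here are made up."""
    parsed = urlparse(url)
    qs = parse_qs(parsed.query)
    target = f"{parsed.scheme}://{parsed.netloc}{parsed.path}"
    return target, qs["backup"][0], qs["volume"][0]

target, backup, volume = parse_backup_url(
    "s3://backupbucket@us-east-1/backupstore?backup=backup-abc123&volume=vol-1")
assert target == "s3://backupbucket@us-east-1/backupstore"
assert backup == "backup-abc123"
assert volume == "vol-1"
```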
5 Restore backup with different number of replicas Prerequisite:

* One or more backups are created for multiple volumes.

1. Restore a backup and set different number of replicas
* Restored volume replica count should match the number in restore backup request
6 Restore backup with Different Node tags Prerequisite:

* One or more backups are created for multiple volumes.
* Longhorn Nodes should have Node Tags

1. Restore a backup and set node tags
* Restored volume replicas should be scheduled only to nodes whose Node Tags match the tags specified in the restore backup request
7 Restore backup with Different Disk Tags Prerequisite:

* One or more backups are created for multiple volumes.
* Longhorn Nodes Disks should have Disk Tags

1. Restore a backup and set disk tags
* Restored volume replicas should be scheduled only to disks whose Disk Tags match the tags specified in the restore backup request
8 Restore backup with both Node and Disk Tags Prerequisite:

* One or more backups are created for multiple volumes.
* Longhorn Nodes should have Node Tags
* Longhorn Nodes Disks should have Disk Tags

1. Restore a backup and set both Node and Disk tags
* Restored volume replicas should be scheduled only to nodes that have both the Node and Disk tags specified in the restore backup request.
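The tag-matching rule verified in cases 6-8 amounts to a subset check, sketched here with hypothetical node data:

```python
def schedulable_nodes(nodes, node_tags, disk_tags):
    """Return names of nodes whose tags include all requested node tags and
    that have at least one disk whose tags include all requested disk tags."""
    result = []
    for node in nodes:
        if not set(node_tags) <= set(node["tags"]):
            continue  # node tag mismatch
        if any(set(disk_tags) <= set(d["tags"]) for d in node["disks"]):
            result.append(node["name"])
    return result

nodes = [
    {"name": "node-1", "tags": ["storage", "fast"], "disks": [{"tags": ["ssd"]}]},
    {"name": "node-2", "tags": ["storage"], "disks": [{"tags": ["hdd"]}]},
]
# Both node and disk tags must match for a replica to land on a node.
assert schedulable_nodes(nodes, ["storage"], ["ssd"]) == ["node-1"]
assert schedulable_nodes(nodes, ["storage"], []) == ["node-1", "node-2"]
```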
9 Restore last backup with same previous name (Volume already exists) Prerequisite:

* Create a Volume, attach it to a node, write some data (300MB+), compute its checksum and create a backup (repeat 3 times).
* Volume now has multiple backups (backup#1, backup#2, backup#3) respectively.

1. Restore latest volume backup using same original volume name
* Volume can’t be restored
10 Restore last backup with same previous name Prerequisite:

* Create a Volume, attach it to a node, write some data (300MB+), compute its checksum and create a backup (repeat 3 times).
* Volume now has multiple backups (backup#1, backup#2, backup#3) respectively.
* Detach and delete volume

1. Restore latest volume backup using same original volume name
2. After restore complete, attach the volume to a node, and check data checksum
* New Volume with same old name should be created and attached to a node in maintenance mode
* Restore process should be triggered restoring latest backup content to the volume
* After restore is completed, volume is detached from the node
* data checksum should match data checksum for (backup#3)
11 Restore volume used by Kubernetes workload with same previous name Prerequisite:

* Create a deployment workload using a Longhorn volume, write some data (300MB+), compute its checksum and create a backup (repeat 3 times).
* Volume now has multiple backups (backup#1, backup#2, backup#3) respectively.
* Scale down the deployment to zero
* Delete volume

1. Restore latest volume backup using same original volume name
2. After restore complete, scale up the deployment
* New Volume with same old name should be created and attached to a node in maintenance mode
* Restore process should be triggered restoring latest backup content to the volume
* After restore is completed, volume is detached from the node
* Old PV/PVC, Namespace & Attached To information should be restored
* Volume should be accessible from the deployment pod
* Data checksum should match data checksum for (backup#3)
12 Restore volume used by Kubernetes workload with different name Prerequisite:

* Create a deployment workload using a Longhorn volume, write some data (300MB+), compute its checksum and create a backup (repeat 3 times).
* Volume now has multiple backups (backup#1, backup#2, backup#3) respectively.
* Scale down the deployment to zero
* Delete volume

1. Restore latest volume backup using different name than its original
2. After restore completes
1. Delete old PVC

2. Create a new PV for volume

3. Create a new PVC with same old PVC name

3. Scale up the deployment
* New Volume with same old name should be created and attached to a node in maintenance mode
* Restore process should be triggered restoring latest backup content to the volume
* After restore is completed, volume is detached from the node
* Old Namespace & Attached To information should be restored
* PV/PVC information should be empty after restore completed
The old PV spec.csi.volumeHandle will not match the new volume name
* After New PV/PVC is created, deployment pod should be able to claim the new PVC and access volume with new name.
* Data checksum should match data checksum for (backup#3)
13 Restore last backup (batch operation) Prerequisite:

* One or more backups are created for multiple volumes.

1. Select multiple volumes, restore the latest backup for all of them
* New volumes with same old volume names should be created, attached to nodes and restore process should be triggered
* PV/PVC information should be restored for volumes that had PV/PVC created
* Namespace & Attached To information should be restored for volumes that had been used by Kubernetes workloads at the time of backup
14 Delete All Volume Backups Prerequisite:

* One or more backups are created for multiple volumes.

1. Delete All backups for a volume
2. Check backupstore, and confirm backups have been deleted
* Backups should be deleted from the Longhorn UI and from the backupstore.
15 Restore backup created using Longhorn behind proxy. Prerequisite:

* Setup a Proxy on an instance (Optional: use squid)
* Create a single node cluster in EC2
* Deploy Longhorn

1. Block outgoing traffic except for the proxy instance.
2. Create AWS secret in Longhorn.
3. In UI Settings page, set backupstore target and backupstore credential secret
4. Create a volume, attach it to a node, format the volume, and mount it to a directory.
5. Write some data to the volume, and create a backup.
6. Wait for backup to complete, then try to restore the backup to a volume with a different name.
* Volume should get restored successfully.

Disaster Recovery test cases

Tests Prerequisite

  • One Kubernetes cluster.

  • Backup Target set to internal Minio or NFS

Test Case Test Instructions Expected Results
Last Backup #1 * Create a new volume
* Attach the volume
* Create a backup of the volume
* Volume’s LastBackup and LastBackupAt should be updated
* Backups can be seen from `volume->backups` in the volume list page, action menu
* Backups can be seen from `volume->backups` in the volume detail page, action menu
Last Backup #2 [follow Last Backup #1]

* Create another backup
* Volume’s LastBackup and LastBackupAt should be updated
* Backups can be seen from `volume->backups` in the volume list page, action menu
* Backups can be seen from `volume->backups` in the volume detail page, action menu
Last Backup #3 [follow Last Backup #2]

* Delete the last backup in the backup list
* Volume’s LastBackup and LastBackupAt should be updated to empty
* Backups can be seen from `volume->backups` in the volume list page, action menu
* Backups can be seen from `volume->backups` in the volume detail page, action menu
Last Backup #4 [follow Last Backup #3]

* Create a new backup for the volume
* Volume’s LastBackup and LastBackupAt should be updated to the last backup
* Backups can be seen from `volume->backups` in the volume list page, action menu
* Backups can be seen from `volume->backups` in the volume detail page, action menu
DR volume #1 * Create volume X
* Attach the volume X
* Create a backup of X
* Backup Volume list page, click `Create Disaster Recovery Volume` from volume dropdown
* Create the DR volume Xdr
* Attach the volume to any node
* DR volume should be successfully created and attached
* DR volume.LastBackup should be updated
* Cannot create backup with Xdr.
* Cannot create snapshot with Xdr.
* Cannot change backup target when DR volume exists with tooltip ‘Disaster Recovery volume’
* DR icon shows next to the volume name
DR volume #2 [Follow #1]

* Format volume X on the attached node
* Mount the volume on the node, write an empty file to it
* Make a backup of Volume X
* DR volume’s last backup should be updated automatically
* DR volume.LastBackup should be different from DR volume’s controller[0].LastRestoredBackup temporarily (it’s restoring the last backup)
* During the restoration, DR volume cannot be activated.
* Eventually, DR volume.LastBackup should equal to controller[0].LastRestoredBackup.
DR volume #3 [Follow #2]

* Activate the volume Xdr
* Volume Xdr should be detached automatically
DR volume #4 [Follow #3]

* Attach the volume to a node
* Mount the volume to a local directory
* Check the file
* Mount should be successful
* File should exist
DR volume #5 * Create volume Y
* Attach the volume Y
* Create a backup of Y
* Backup Volume list page, click `Create Disaster Recovery Volume` from volume dropdown
* Create two DR volumes Ydr1 and Ydr2.
* Mount the volume Y on the node
* Write a file of 10MB into it, use `/dev/urandom` to generate the file
* Calculate the checksum of the file
* Make a Backup
* Attach Ydr1 and Ydr2 to any nodes
* DR volume’s last backup should be updated automatically
* DR volume.LastBackup should be different from DR volume’s controller[0].LastRestoredBackup temporarily (it’s restoring the last backup)
* During the restoration, DR volume cannot be activated.
* Eventually, DR volume.LastBackup should equal to controller[0].LastRestoredBackup.
DR volume #6 [follow #5]

* In the directory where volume Y is mounted, write a new file of 100MB.
* Record the checksum of the file
* Create a backup of volume Y
* Wait for restoration of volume Ydr1 and Ydr2 to complete
* Activate Ydr1
* Attach it to one node and verify the content
* DR volume’s last backup should be updated automatically
* Eventually, DR volume.LastBackup should equal to controller[0].LastRestoredBackup.
* Ydr1 should have the same file checksum as volume Y
DR volume #7 [follow #6]

* In the directory where volume Y is mounted, remove all the files. Write a file of 50MB
* Record the checksum of the file
* Create a backup of volume Y
* Activate Ydr2
* Attach it to one node and verify the content
* Both Ydr1 and Ydr2 volume’s last backup should be updated automatically
* Eventually, Ydr2’s volume.LastBackup should equal to controller[0].LastRestoredBackup.
* Ydr2 should have the same file checksum as volume Y
DR volume #8 * Create volume Z
* Attach the volume Z
* Create a backup (z1) of Z
* Backup Volume list page, click `Create Disaster Recovery Volume` from volume dropdown
* Create a DR volume Zdr
* Mount the volume Z on the node
* Write a file of 10MB into it
* Make a Backup (z2)
* Attach Zdr to any node
* Confirm that Zdr completes the restoration (by observing the last restored backup becoming z2)
* Delete the backup z2 from the backup list
* Create a backup z3
* Delete all the files written before. Write another file of 10MB into Z, use `/dev/urandom` to generate the file. Record the checksum
* Confirm that Zdr completes the restoration (by observing the last restored backup becoming z3)
* Activate Zdr and attach it
* Verify the file content
* File content checksum on Zdr should be the same as on Z

Tests Prerequisite

  • Two Kubernetes clusters, cluster A and cluster B

  • Backup Target set to Amazon S3

Test Case Test Instructions Expected Results
Backup Poll Interval #1 * Change the setting.BackupPollInterval to -1 Change shouldn’t be allowed
Backup Poll Interval #2 * Change the setting.BackupPollInterval to 0 Change should be allowed
DR volume across the cluster #1 Cluster A

* Create volume XA
* Attach the volume XA
* Create a backup of XA

Cluster B

* Backup Volume list page, click `Create Disaster Recovery Volume` from volume dropdown
* Create the DR volume XB (which should be the same name as XA)
* Attach the volume to any node
* DR volume should be successfully created and attached
* DR volume.LastBackup should be updated, after settings.BackupPollInterval passed.
* Cannot create backup with XB
* Cannot create snapshot with XB.
* DR icon shows next to the volume name
DR volume across the cluster #2 [Follow #1]
Cluster A

* Format volume XA on the attached node
* Mount the volume on the node, write an empty file to it
* Make a backup of Volume XA
Cluster B

* DR volume’s last backup should be updated automatically, after settings.BackupPollInterval passed.
* DR volume.LastBackup should be different from DR volume’s controller[0].LastRestoredBackup temporarily (it’s restoring the last backup)
* During the restoration, DR volume cannot be activated.
* Eventually, DR volume.LastBackup should equal to controller[0].LastRestoredBackup.
DR volume across the cluster #3 [Follow #2]

* Activate the volume XB
* Volume XB should be detached automatically
DR volume across the cluster #4 [Follow #3]
Cluster B:

* Attach the volume XB to a node
* Mount the volume XB to a local directory
* Check the file on XB
* Mount should be successful
* File should exist
DR volume across the cluster #5 Cluster A:

* Create volume Y
* Attach the volume Y
* Create a backup of Y

Cluster B:

* Backup Volume list page, click `Create Disaster Recovery Volume` from volume dropdown
* Create two DR volumes Ydr1 and Ydr2.
* Attach the volume Y to any node
* Mount the volume Y on the node
* Write a file of 10Mb into it, use `/dev/urandom` to generate the file
* Calculate the checksum of the file
* Make a Backup
* Attach Ydr1 and Ydr2 to any nodes
* DR volume’s last backup should be updated automatically, after settings.BackupPollInterval passed.
* DR volume.LastBackup should be different from DR volume’s controller[0].LastRestoredBackup temporarily (it’s restoring the last backup)
* During the restoration, DR volume cannot be activated.
* Eventually, DR volume.LastBackup should equal to controller[0].LastRestoredBackup.
DR volume across the cluster #6 [follow #5]
Cluster A:

* In the directory where volume Y is mounted, write a new file of 100MB.
* Record the checksum of the file
* Create a backup of volume Y

Cluster B:

* Wait for restoration of volume Ydr1 and Ydr2 to complete
* Activate Ydr1
* Attach it to one node and verify the content
* DR volume’s last backup should be updated automatically, after settings.BackupPollInterval passed.
* Eventually, DR volume.LastBackup should equal to controller[0].LastRestoredBackup.
* Ydr1 should have the same file checksum of volume Y
DR volume across the cluster #7 [follow #6]
Cluster A

* In the directory where volume Y is mounted, remove all the files. Write a file of 50MB
* Record the checksum of the file

Cluster B

* Change setting.BackupPollInterval to longer e.g. 1h

Cluster A

* Create a backup of volume Y

Cluster B
[DO NOT CLICK BACKUP PAGE, which will update last backup as a side effect]

* Before Ydr2’s last backup updated, activate Ydr2
* Ydr2’s last backup should be immediately updated to the last backup of volume Y
* Activation should fail because restoration is in progress
DR volume across the cluster #8 Cluster A

* Create volume Z
* Attach the volume Z
* Create a backup of Z

Cluster B

* Backup Volume list page, click `Create Disaster Recovery Volume` from volume dropdown
* Create DR volumes Zdr1, Zdr2 and Zdr3
* Attach the volume Zdr1, Zdr2 and Zdr3 to any node
* Change setting.BackupPollInterval to appropriate interval for multiple backups e.g. 15min
* Make sure LastBackup of the Zdr volumes is consistent with that of Z

Cluster A

* Create multiple backups for volume Z before Zdr’s last backup updated. For each backup, write or modify at least one file then record the checksum.

Cluster B

* Wait for restoration of volume Zdr1 to complete
* Activate Zdr1
* Attach it to one node and verify the content
* Zdr1’s last backup should be updated after settings.BackupPollInterval passed.
* Zdr1 should have the same files with the same checksums as volume Z
DR volume across the cluster #9 [follow #8]
Cluster A

* Delete the latest backup of Volume Z
* Last backup of Zdr2 and Zdr3 should be empty after settings.BackupPollInterval has passed. Fields controller[0].LastRestoredBackup and controller[0].RequestedBackupRestore should be retained.
DR volume across the cluster #10 [follow #9]
Cluster B

* Activate Zdr2
* Attach it to one node and verify the content
* Zdr2 should have the same files with the same checksums as volume Z
DR volume across the cluster #11 [follow #10]
Cluster A

* Create one more backup with at least one file modified.

Cluster B

* Wait for restoration of volume Zdr3 to complete
* Activate Zdr3
* Attach it to one node and verify the content
* Zdr3 should have the same files with the same checksums as volume Z

Additional tests

# Scenario Steps Expected result
1 Create backup from existing snapshot when multiple snapshots exist 1. Create a workload using Longhorn volume
2. Write data to volume, compute its checksum (checksum#1)
3. Create a snapshot S1
4. Write data to volume, compute its checksum (checksum#2)
5. Create a snapshot S2
6. Create a backup from S2
7. Restore backup to a different volume
8. Attach volume to a node, check its data, and compute its checksum
Verify the checksum of the restored volume is same as checksum#2
2 Create backups, after deleting snapshots 1. Create a workload using Longhorn volume
2. Write data to volume, compute its checksum (checksum#1)
3. Create a snapshot S1
4. Write data to volume, compute its checksum (checksum#2)
5. Create a snapshot S2
6. Write data to volume, compute its checksum (checksum#3)
7. Create a snapshot S3
8. Delete S2
9. Create a backup b1 from S3
10. Restore backup to a different volume
11. Attach volume to a node, check its data, and compute its checksum
Verify the checksum of the restored volume is same as checksum#3
3 Backup from Snapshots 1. Create a workload using Longhorn volume
2. Write data to volume, compute its checksum (checksum#1)
3. Create a snapshot S1
4. Write data to volume, compute its checksum (checksum#2)
5. Create a snapshot S2
6. Create a backup b1 from snapshot S2
7. Restore backup to a different volume
8. Attach volume to a node, check its data, and compute its checksum
Verify the checksum of the restored volume is same as checksum#2
4 Manual and recurring snapshots count - recurring backups should not delete any manual backups taken 1. Enable recurring backups on a volume - every minute, retain count = 5
2. After 2 minutes, after 2 recurring backups have been taken, create a couple of manual backups on the volume.
3. Verify the volumes in the backup page for volume.
4. After 5 minutes, verify 2 manual backups and 5 recurring backups are available in the backup page
5. After the 6th minute, verify the oldest recurring backup is removed and a new one is available in the backup page.
Verify recurring backups should not delete any manual backups taken
5 Restore with invalid node tag/disk tag Volume v1 - with backups - b1, b2, b3 exist

1. Restore from b1 - specify an invalid node tag and click on OK
2. Verify volume is NOT restored and an error is seen - specified node tag <name> does not exist
3. Restore from b1 - specify an invalid disk tag and click on OK
4. Verify volume is NOT restored and an error is seen - specified disk tag <name> does not exist
Volume should NOT be restored.

Error should be seen on the UI
6 Use Volume backup URL in a storage class 1. Get backup URL from a backup created for a volume.
2. Use the URL in a StorageClass `fromBackup` parameter
3. Create a PVC from the storage class and use it in a workload
4. Workload should be deployed successfully.
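A StorageClass carrying `fromBackup` can be sketched as a manifest builder; `driver.longhorn.io` is the Longhorn CSI provisioner, and the backup URL below is a made-up example:

```python
def storage_class_from_backup(name, backup_url, replicas=3):
    """Build a StorageClass manifest (as a dict) that seeds new volumes
    from a backup URL via the Longhorn `fromBackup` parameter."""
    return {
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {"name": name},
        "provisioner": "driver.longhorn.io",
        "parameters": {
            "numberOfReplicas": str(replicas),  # Longhorn expects string values
            "fromBackup": backup_url,
        },
    }

sc = storage_class_from_backup(
    "longhorn-from-backup",
    "s3://backupbucket@us-east-1/backupstore?backup=backup-abc123&volume=vol-1")
assert sc["parameters"]["fromBackup"].startswith("s3://")
```

Serializing this dict to YAML and applying it with kubectl would reproduce step 2 above.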
7 Recurring snapshots/backups of volume in “Detached” mode 1. Create a volume/PV/PVC
2. Deploy it to a workload.
3. Enable recurring backups for every minute
4. Verify recurring backups happen on the volume.
5. From the longhorn UI, detach the volume.
6. Verify the Recurring snapshots/backups do not happen
8 Disabling recurring backups Precondition:

* Volume is created and deployed to a workload

Steps:

1. Enable recurring backups, e.g. every minute
2. Wait for a couple of minutes. 2 backups should be available for the volume.
3. Disable recurring backups for this volume
4. Verify that recurring backups stop happening for the volume
* Recurring backups should be STOPPED for the volume.
9 Backup corruption in S3/nfs backup store - Rename config file for the volume Pre condition:

* Volume v1, v2 exists
* v1 has backups - b1, b2, b3 and v2 has backups - b4,b5

Steps:

1. Rename v1's volume.cfg file in S3 to volume.cfg.tmp
2. Verify that the backup list does not list the backups for that volume
3. Verify an error message is displayed on the UI
4. Verify user is able to list the backups of v2
* User should be able to see an error message in the UI when it fails to fetch the backups for volume v1
* User should be able to list the backups of volume v2
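For an NFS backupstore, the rename in step 1 is a plain file operation on the mounted share; a minimal local sketch (the temp directory stands in for the volume's folder in the backupstore):

```shell
store=$(mktemp -d)                 # stand-in for the volume's backupstore folder
echo 'Name: v1' > "$store/volume.cfg"

# Step 1: corrupt - the volume config can no longer be found
mv "$store/volume.cfg" "$store/volume.cfg.tmp"
[ ! -f "$store/volume.cfg" ] && echo "backups for v1 unlistable"

# Recovery: restoring the original name makes the backups listable again
mv "$store/volume.cfg.tmp" "$store/volume.cfg"
[ -f "$store/volume.cfg" ] && echo "backups for v1 listable"
```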
10 Delete a volume backup when it is corrupted in S3/nfs backup store Pre condition:

* Volume v1, v2 exists
* v1 has backups - b1, b2, b3 and v2 has backups - b4,b5

Steps:

1. Rename v1's volume.cfg file to volume.cfg.tmp
2. Verify that the backup list does not list the backups for that volume
3. Verify an error message is displayed on the UI
4. Verify user is able to delete the backup of v1 from the backup page
* User should be able to delete the corrupted backup for volume v1 by clicking on delete all backups.
* User should be able to list the backups of volume v2
11 Backup corruption in S3/nfs backup store - Rename config file of a backup Pre condition:

* Volume v1, v2 exists
* v1 has backups - b1, b2, b3 and v2 has backups - b4,b5

Steps:

1. Rename the backup b1.cfg to b1.cfg.tmp
2. Verify the backup page lists the volumes v1 and v2
3. Verify backup list page for v1 is able to fetch all the backups except the b1 - b1 SHOULD NOT be listed
4. verify user is able to restore from b2 and b3
5. Verify the data after restoration is correct
6. Verify user is able to list b4 and b5 for volume v2 also.
* User should be able to list b2 and b3.
* User should be able to restore from b2 and b3
* User should be able to see an error for b1 - b1 SHOULD NOT be listed
* User should be able to list the backups of volume v2
12 Backup corruption in S3/nfs backup store - Edit data of a backup Pre condition:

* Volume v1, v2 exists
* v1 has backups - b1, b2, b3 and v2 has backups - b4,b5

Steps:

1. Edit the data of backup b1 (remove some checksum value) and upload it back to S3
2. Verify the backup page lists the volumes v1 and v2 and all the backups
3. Verify user is able to restore from b1
4. Verify the restored data is not the same as the original data (check the checksums)
5. Take b4, b5 for v1
6. User should be able to restore from b4 and b5
User should be able to list b1

User should be able to restore from b1

Other backups for v1 - b2 and b3 should be available

Backups for v2 - b4 and b5 should be available
13 Delete all backups and create backups for same volumes 1. Create vol-1, attach it to a workload and write data to the volume
2. Take backups b1, b2
3. Delete all backups for the volume
4. Verify the volume is deleted from the S3 backup store and from the backup page in the Longhorn UI.
5. Take a backup b3 for volume vol-1
6. Verify b3 is saved in S3 and is available in the backup list page for the volume vol-1
14 Delete Backup verify blocks deleted from backupstore 1. Create a volume, attach it to a pod and write into it.
2. Set up an S3 backup store.
3. Take a backup. Wait for it to complete.
4. Check the size of the backup in the backup store.
5. Delete the backup.
6. Check the size in the backup store again. It should be reduced.
The blocks should be deleted from the backup store.
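The size check in steps 4 and 6 amounts to counting the .blk files under the volume's blocks directory in the backup store; a local sketch of that check (the layout here is simplified, not the exact backupstore path structure):

```shell
store=$(mktemp -d)                    # stand-in for the volume's folder in the store
mkdir -p "$store/blocks"
echo "block data 1" > "$store/blocks/aa.blk"
echo "block data 2" > "$store/blocks/bb.blk"
before=$(find "$store/blocks" -name '*.blk' | wc -l)   # step 4: count before delete

rm -f "$store/blocks/"*.blk           # deleting the only backup removes its blocks
after=$(find "$store/blocks" -name '*.blk' | wc -l)    # step 6: count after delete
echo "blocks: $before -> $after"
```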

Additional UI test cases

# Scenario Steps Expected Results
1 Column sort 1. Navigate to Backup page
2. Verify column sort works for all the columns
Column sorting should work.
2 Column sort 1. Navigate to Backup page
2. Click on a volume v1
3. User will be navigated to the Backup/v1 page
4. Verify column sort works for all the columns
Column sorting should work.
3 Workload Pod status 1. Navigate to Backup page
2. Click on a volume v1
3. User will be navigated to the Backup/v1 page
4. In the workload/Pod column, click on a pod for a backup
5. Verify a window pops up with the pod details
Pod details should be available
4 Labels 1. Navigate to Backup page
2. Click on a volume v1
3. User will be navigated to the Backup/v1 page
4. For a backup click on the labels icon
5. Verify labels should be present for the backup
1. Related labels should be available for the backup

CSI Snapshot Support Test cases

The setup requirements:

  1. Deploy the snapshotter CRDs https://github.com/kubernetes-csi/external-snapshotter/tree/release-4.0/client/config/crd
  2. Deploy the snapshot controller https://github.com/kubernetes-csi/external-snapshotter/tree/release-4.0/deploy/kubernetes/snapshot-controller
  3. Deploy the VolumeSnapshotClass:
    kind: VolumeSnapshotClass
    apiVersion: snapshot.storage.k8s.io/v1beta1
    metadata:
      name: longhorn
    driver: driver.longhorn.io
    deletionPolicy: Delete
# Test Scenario Test Steps Expected Results
1 Create a snapshot using VolumeSnapshot 1. Create a volume test-vol and write into it. Compute the md5sum.
2. Create the below VolumeSnapshot object
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: test-vol-snapshot
spec:
  volumeSnapshotClassName: longhorn
  source:
    persistentVolumeClaimName: test-vol
1. A longhorn snapshot should be created.
2. A backup of that snapshot should be available on the backup store.
3. A VolumeSnapshotContent should also get created referring to test-vol-snapshot
2 Restore a backup from a snapshot 1. Create a volume and take a backup following the steps from test scenario 1.
2. Create a PVC whose dataSource refers to the VolumeSnapshot
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vol-restore
spec:
  storageClassName: longhorn
  dataSource:
    name: test-vol-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
3. Attach the PVC to a pod.
4. Verify the data
1. The PVC should be created successfully.
2. A volume should be created and bound to the PVC.
3. The data should be the same as created in test scenario 1
3 Restore a backup from longhorn. 1. Create a volume and attach it to a pod. Compute the md5sum of the data.
2. Take a backup in longhorn.
3. Create the below VolumeSnapshotContent. Change the snapshotHandle to point to the backup to restore.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotContent
metadata:
  name: test-existing-backup
spec:
  volumeSnapshotClassName: longhorn
  driver: driver.longhorn.io
  deletionPolicy: Delete
  source:
    # NOTE: change this to point to an existing backup on the backupstore
    snapshotHandle: bs://test-vol/backup-625159fb469e492e
  volumeSnapshotRef:
    name: test-snapshot-existing-backup
    namespace: default
4. Create the below VolumeSnapshot referring to the above VolumeSnapshotContent
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: test-snapshot-existing-backup
spec:
  volumeSnapshotClassName: longhorn
  source:
    volumeSnapshotContentName: test-existing-backup
5. Create the below PVC referring to the above VolumeSnapshot
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-restore-existing-backup
spec:
  storageClassName: longhorn
  dataSource:
    name: test-snapshot-existing-backup
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
6. Attach to a pod and verify the data. Compute the md5sum of the data.
1. The VolumeSnapshotContent should reflect the size of the backup volume.
2. The data should be intact; compare the md5sum of step 1 and step 6.
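The snapshotHandle in step 3 above encodes both the backup volume and the backup name; a small sketch of how the handle decomposes (the shell parsing is illustrative, not a Longhorn API):

```shell
# Handle format: bs://<backup-volume>/<backup-name>
handle="bs://test-vol/backup-625159fb469e492e"
vol=${handle#bs://}; vol=${vol%%/*}   # backup volume name -> test-vol
backup=${handle##*/}                  # backup name -> backup-625159fb469e492e
echo "volume=$vol backup=$backup"
```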
4 Delete the backup with DeletionPolicy as delete 1. Repeat the steps from test scenario 1.
2. Delete the VolumeSnapshot using kubectl delete volumesnapshots test-snapshot-pvc
1. The VolumeSnapshot should be deleted.
2. By default the DeletionPolicy is delete, so the VolumeSnapshotContent should be deleted.
3. Verify in the backup store, the backup should be deleted.
5 Delete the backup with DeletionPolicy as retain 1. Create a VolumeSnapshotClass with deletionPolicy as Retain
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  name: longhorn
driver: driver.longhorn.io
deletionPolicy: Retain
2. Repeat the steps from test scenario 1.
3. Delete the VolumeSnapshot using kubectl delete volumesnapshots test-snapshot-pvc
1. The VolumeSnapshot should be deleted.
2. VolumeSnapshotContent should NOT be deleted.
3. Verify in the backup store, the backup should NOT be deleted.
6 Take a backup from longhorn of a snapshot created by csi snapshotter. 1. Create a volume test-vol and write into it. Compute the md5sum.
2. Create the below VolumeSnapshot object
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: test-snapshot-pvc
spec:
  volumeSnapshotClassName: longhorn
  source:
    persistentVolumeClaimName: test-vol
3. Go to the longhorn UI, click on the snapshot created and take another backup
1. On creating a VolumeSnapshot, a backup should be created in the backup store.
2. On creating another backup from the longhorn UI, one more backup should be created in the backup store.
7 Delete the csi plugin while a backup is in progress. 1. Create a volume and write into it. Compute the md5sum of the data.
2. Create the below VolumeSnapshot object
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: test-snapshot-pvc
spec:
  volumeSnapshotClassName: longhorn
  source:
    persistentVolumeClaimName: test-vol
3. While the backup is in progress, delete the csi plugin pod
On deleting the csi plugin, a new csi plugin pod should get created and the backup should continue to completion.
8 Take a backup using csi snapshotter with backup store as NFS server.
9 Restore from NFS backup store.
10 Delete from NFS backup store.
11 Parallel backups using csi snapshotter 1. Create a volume and write into it. Compute the md5sum of the data.
2. Create two VolumeSnapshot objects with different names
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: test-snapshot-pvc # use a different name for each object
spec:
  volumeSnapshotClassName: longhorn
  source:
    persistentVolumeClaimName: test-vol
3. kubectl apply -f both VolumeSnapshot manifests together.
4. Verify two parallel backup creations are triggered.
1. Parallel backup creation should start
2. Data should be intact.
3. Verify after restoring the data
12 Parallel deletion using csi snapshotter 1. Create multiple backups using the csi snapshotter with DeletionPolicy as Delete
2. Delete two or more VolumeSnapshots at once
3. Verify the backup store
1. All the backups pertaining to the deleted VolumeSnapshots should get deleted.
13 Take backup and delete the VolumeSnapshot when the backup is in progress. 1. Create a VolumeSnapshot
2. Delete the same VolumeSnapshot while the backup is in progress
3. Take another VolumeSnapshot
4. Let it complete and verify the data.
1. The backup doesn’t get created after step 2
2. The backup data should be intact.
14 Backup on a backup store with VIRTUAL_HOSTED_STYLE
15 Backup with invalid Backupstore 1. Give invalid backupstore details in the longhorn settings.
2. Create a volume, write into it.
3. Create a volumesnapshot
4. Verify the longhorn UI
1. No backup should get triggered.
2. No snapshot should appear on the longhorn UI.
16 Restore from longhorn backup volume where there are multiple backups 1. Create a volume and attach it to a pod.
Compute the md5sum of the data.
2. Take a backup in the longhorn.
3. Write more in the volume and take backup.
4. Create the VolumeSnapshotContent; the snapshotHandle should point to the 2nd backup to restore.
5. Create the referring volumesnapshot
6. Verify the data
17 Restore with invalid backup name 1. Create a volume and attach it to a pod. Compute the md5sum of the data.
2. Create the VolumeSnapshotContent; the snapshotHandle should point to an invalid backup to restore.
3. Create the referring VolumeSnapshot
4. Create the PVC and attach it to a pod.
1. The VolumeSnapshot and VolumeSnapshotContent should show False status in ReadyToUse.
2. The PVC should fail to attach to the pod; it should not create a volume in longhorn.
18 Create a DR volume with the backup created using CSI snapshotter. 1. Give valid backupstore details in the longhorn settings.
2. Create a volume, write into it.
3. Create a VolumeSnapshot
4. Create a DR volume from the backup which got created in step 3.
5. Write more data into it.
6. Take a backup using VolumeSnapshot
7. Activate the DR volume
The DR volume should have the latest data updated.
19 Same #uid from a prior snapshot -
with longhorn snapshot still present, but the backup deleted.
1. Create a volume, write into it. Compute the md5sum.
2. Create a VolumeSnapshot like below
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: test-snapshot-existing-backup
  uid: # copy the uid from a prior snapshot that you created, since the uid is how the longhorn snapshot will be named
spec:
  volumeSnapshotClassName: longhorn
  source:
    persistentVolumeClaimName: test-vol
3. After the backup is completed, delete the snapshot from the longhorn UI.
4. Create a VolumeSnapshot
1. A new snapshot gets created overriding the uid given in the metadata.
2. A new backup gets created.
20 Same #uid from a prior snapshot -
with backup still present but longhorn snapshot deleted
1. Create a volume, write into it. Compute the md5sum.
2. Create a VolumeSnapshot like below
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: test-snapshot-existing-backup
  uid: # copy the uid from a prior snapshot that you created, since the uid is how the longhorn snapshot will be named
spec:
  volumeSnapshotClassName: longhorn
  source:
    persistentVolumeClaimName: test-vol
3. After the backup is completed, delete the backup from the backup store.
4. Create a VolumeSnapshot
1. A new snapshot gets created overriding the uid given in the metadata.
2. A new backup gets created.