1. Node

UI specific test cases

# Test Case Test Instructions Expected Results
1 Storage details * Prerequisites
* Longhorn Installed


1. Verify that the allocated/used storage shows the right data on the node details page.
2. Create a volume of 20 GB, attach it to a pod, and verify the allocated/used storage is shown correctly.
Without any volumes, allocated storage should be 0; after creating a new volume, it should be updated to reflect the volumes present.
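For step 2 above, a minimal sketch of creating the 20 GB volume through a PVC (assuming the default "longhorn" StorageClass; names are placeholders) and cross-checking the numbers shown in the UI against the Longhorn node CR (field names may differ slightly between releases):

# 20 GB PVC backed by Longhorn; attach it to any test pod
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-20g
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn
  resources:
    requests:
      storage: 20Gi
EOF

# Cross-check the allocated/used storage reported for the node (storageScheduled/storageAvailable)
kubectl -n longhorn-system get nodes.longhorn.io <node-name> -o yaml | grep -i storage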
2 Filters applied to node list * Prerequisites
* Longhorn Installed


1. In the Longhorn UI Node tab, change the filter (name, status, etc.) and verify that the nodes appear correctly.
Only the nodes satisfying the filter should be displayed on the page.
3 Sort the nodes view * Prerequisites
* Longhorn Installed


1. In the Longhorn UI Node tab, click a column title to sort the nodes by status, name, etc.
The node list should be sorted ascending/descending based on status, name, etc.
4 Expand All * Prerequisites
* Longhorn Installed


1. In the Longhorn UI Node tab, click the ‘Expand All’ button.
All nodes should be expanded and show their disk details

Additional Tests for UI

# Scenario Steps Expected Results
1 Readiness column 1. Click on a node’s readiness state, e.g. “Ready”
2. Verify components window opens
3. Verify Engine image and instance manager details are seen
Components window should open with details of Engine image and instance manager
2 Replicas column 1. Click on the count (number of replicas) for a node in the replica column
2. Verify the list of replicas on the node is available
3. Verify the user is able to delete a replica on the node by selecting the replica and clicking Delete
4. Verify the replica is deleted by navigating to the specific Volume → Volume details page → replicas
* User should be able to view all the replicas on the node
* User should be able to delete replica on the node
* User should be able to see the replica deleted in the volume → Volume details page → replicas page
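A hedged kubectl cross-check for the scenario above (the label selector and printed columns are assumptions and may vary by Longhorn version):

# List replicas currently placed on a given node
kubectl -n longhorn-system get replicas.longhorn.io -o wide | grep <node-name>

# After deleting a replica from the UI, it should disappear here and from the
# Volume → Volume details → Replicas page
kubectl -n longhorn-system get replicas.longhorn.io -l longhornvolume=<volume-name>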

Test cases

# Test Case Test Instructions Expected Results
1 Node scheduling * Prerequisites:
* Longhorn Deployed with 3 nodes


1. Disable Node Scheduling on a node
2. Create a volume with 3 replicas, and attach it to a node
3. Re-enable node scheduling on the node
* Volume should be created and attached
* Volume replicas should be scheduled to Schedulable nodes only
* Re-enabling node scheduling will not affect existing scheduled replicas, it will only affect new replicas being created, or rebuilt.
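For steps 1 and 3 of test case 1, a sketch of toggling node scheduling from the CLI instead of the UI (assuming the spec.allowScheduling field on the Longhorn node CR; the node name is a placeholder):

# Disable scheduling on a node (equivalent to the UI "Edit Node" toggle)
kubectl -n longhorn-system patch nodes.longhorn.io <node-name> --type merge -p '{"spec":{"allowScheduling":false}}'

# Re-enable it after the volume has been attached
kubectl -n longhorn-system patch nodes.longhorn.io <node-name> --type merge -p '{"spec":{"allowScheduling":true}}'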
2 Disk Scheduling * Prerequisites:
* Longhorn Deployed with 3 nodes

* Add an additional disk (Disk#1) to Node-01, attach it, and mount it.


1. Create the new disk (disk#1) in Longhorn, keeping Disk Scheduling disabled
2. Create a volume (vol#1), set replica count to 4 and attach it to a node
3. Check (vol#1) replica paths
4. Enable Scheduling on (disk#1)
5. Create a volume (vol#2), set replica count to 4 and attach it to a node
6. Check (vol#2) replica paths
* (vol#1) replicas should be scheduled only to Disks with Scheduling enabled, no replicas should be scheduled to (disk#1)
* One of (vol#2) replica paths will be scheduled to (disk#1)
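For the disk#1 prerequisite of test case 2, a sketch of preparing the extra disk on Node-01 (device name and mount point are examples):

# On Node-01: format the new block device and mount it
mkfs.ext4 /dev/sdb
mkdir -p /mnt/disk1
mount /dev/sdb /mnt/disk1

# Then add /mnt/disk1 as a new disk via Node → Edit Node and Disks, leaving Scheduling disabled for step 1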
3 Volume Created with Node Tags * Prerequisites:
* Longhorn Deployed with 3 nodes


1. Create Node tags as follows:
1. Node-01: fast

2. Node-02: slow

3. Node-03: fast

2. Create a volume (vol#1), set Node tags to slow
3. Create a volume (vol#2), set Node tags to fast
4. Check Volumes replicas paths
5. Check Volume detail Node Tags
* vol#1 replicas should only be scheduled to Node-02
* vol#2 replicas should only be scheduled to Node-01 and Node-03
* Node Tag volume detail should contain Node tag specified in volume creation request.
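For step 1 of test case 3, a sketch of applying the node tags from the CLI (assuming the spec.tags field on the Longhorn node CR; node object names are placeholders, and the UI Node → Edit Node → Node Tags dialog is equivalent):

kubectl -n longhorn-system patch nodes.longhorn.io node-01 --type merge -p '{"spec":{"tags":["fast"]}}'
kubectl -n longhorn-system patch nodes.longhorn.io node-02 --type merge -p '{"spec":{"tags":["slow"]}}'
kubectl -n longhorn-system patch nodes.longhorn.io node-03 --type merge -p '{"spec":{"tags":["fast"]}}'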
4 Volumes created with Disk Tags * Prerequisites:
* Longhorn Deployed with 3 nodes, with default disks (disk#01-1, disk#02-1, disk#03-1)

* disk#0X-Y indicates that the disk is attached to Node-0X and is disk number Y on that node.

* Create 3 additional disks (disk#01-2, disk#02-2, disk#03-2), attach each one to a different node, and mount it to a directory on that node.

1. Create Disk tags as follows:
1. disk#01-1: fast

2. disk#01-2: fast

3. disk#02-1: slow

4. disk#02-2: slow

5. disk#03-1: fast

6. disk#03-2: fast

2. Create a volume (vol#1), set Disk tags to slow
3. Create a volume (vol#2), set Disk tags to fast
4. Check Volumes replicas paths
5. Check Volume detail Disk Tags
* vol#1 replicas should only be scheduled to disks that have the slow tag (disk#02-1 and disk#02-2)
* vol#2 replicas can be scheduled to disks that have the fast tag
(disk#01-1, disk#01-2, disk#03-1, disk#03-2)
* Disk Tag volume detail should contain Disk tag specified in volume creation request.
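For steps 4 and 5 of test case 4, a sketch of verifying disk tags and replica placement from the CLI (the label selector is an assumption; disk tags can also be set and inspected via Node → Edit Node and Disks):

# Show per-disk tags as recorded on the Longhorn node CRs
kubectl -n longhorn-system get nodes.longhorn.io -o yaml | grep -A8 "disks:"

# Show on which node/disk each replica of a volume was scheduled
kubectl -n longhorn-system get replicas.longhorn.io -l longhornvolume=<volume-name> -o wide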
5 Volumes created with both Disk and Node Tags * Create a volume, set Disk and Node tags, and attach it to a node * Volume replicas should be scheduled only to nodes that have the Node tags, and only on disks that have the Disk tags, specified in the volume creation request
* If no node matches both the Node and Disk tags, volume replicas will not be created.
6 Remove Disk From Node * Prerequisites:
* Longhorn Deployed with 3 nodes

* Add an additional disk (Disk#1) to Node-01, attach it, and mount it.

* Some replicas should be scheduled to Disk#1


1. Disable Scheduling on disk#1
2. Delete all replicas scheduled to disk#1, replicas should start to rebuild on other disks
3. Delete disk from node
* Disabling disk scheduling should prevent new replicas from being scheduled to the disk
* Disk can’t be deleted if at least one replica is still scheduled to it.
* Disk can be deleted only after all replicas have been rescheduled to other disks.
7 Power off a node 1. Power off a node * Node should report down on Node page
8 Delete Longhorn Node 1. Disable Scheduling on the node
2. Delete all replicas on the node so they are rescheduled to other nodes
3. Detach all volumes attached to the node and re-attach them on other nodes
4. Delete Node from Kubernetes
5. Delete Node From Longhorn
* Node can’t be deleted if Node Scheduling is enabled on that node
* Node can’t be deleted unless all replicas are deleted from that node
* Node can’t be deleted unless all attached volumes are detached from that node
* Node can’t be deleted unless it has been deleted from Kubernetes first
* After the node is deleted from Kubernetes, it should be reported as down in Longhorn
* Node should be deleted from Longhorn
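A sketch of the deletion order in test case 8, once replicas are removed and volumes detached (deleting the node from the Longhorn UI is the documented path; the CR deletion shown here is an assumed equivalent):

kubectl delete node <node-name>                                   # remove from Kubernetes first
kubectl -n longhorn-system delete nodes.longhorn.io <node-name>   # then remove the node from Longhorn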
9 Default Disk on Labeled Nodes * Prerequisites:
* Create 3 node k8s cluster

* Create /home/longhorn directory on all 3 nodes

* Add new disk to each node, format it with ext4, and mount it to /mnt/disk

* Use the following label and annotations for nodes

Node-01 & Node-03

labels:
node.longhorn.io/create-default-disk: "config"
annotations:
node.longhorn.io/default-disks-config:
'[{"path":"/home/longhorn","allowScheduling":true,"tags":["ssd","fast"]},
{"path":"/mnt/disk","allowScheduling":true,"storageReserved":1024,"tags":["ssd","fast"]}]'
node.longhorn.io/default-node-tags: '["fast", "storage"]'

Node-02

labels:
node.longhorn.io/create-default-disk: "config"
annotations:
node.longhorn.io/default-disks-config:
'[{"path":"/home/longhorn","allowScheduling":true,"tags":["hdd","slow"]},
{"path":"/mnt/disk","allowScheduling":true,"storageReserved":1024,"tags":["hdd","slow"]}]'
node.longhorn.io/default-node-tags: '["slow", "storage"]'

1. Set create-default-disk-labeled-nodes: true in the longhorn-default-setting ConfigMap
2. Deploy Longhorn
* Longhorn Should be deployed successfully
* Node-01 & Node-03
* should be tagged with fast and storage tags

* Disk scheduling should be allowed on both disks

* Disks should be tagged with ssd and fast tags

* 1024 MB is reserved storage on /mnt/disk

* Node-02
* should be tagged with slow and storage tags

* Disk scheduling should be allowed on both disks

* Disks should be tagged with hdd and slow tags

* 1024 MB is reserved storage on /mnt/disk
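For the prerequisites of test case 9, a sketch of applying the label and annotations from the CLI before deploying Longhorn (Node-01 shown; repeat with the matching values for the other nodes):

kubectl label node node-01 node.longhorn.io/create-default-disk=config
kubectl annotate node node-01 \
  node.longhorn.io/default-disks-config='[{"path":"/home/longhorn","allowScheduling":true,"tags":["ssd","fast"]},{"path":"/mnt/disk","allowScheduling":true,"storageReserved":1024,"tags":["ssd","fast"]}]' \
  node.longhorn.io/default-node-tags='["fast","storage"]'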
10 Default Data Path Prerequisites:

* Create 3 node k8s cluster
* Create /home/longhorn directory on all 3 nodes



1. Set defaultDataPath to /home/longhorn/ in longhorn-default-setting ConfigMap
2. Deploy Longhorn
3. Create a volume, attach it to a node
* In Longhorn Setting, Default Data Path should be /home/longhorn
* All volumes replicas paths should begin with /home/longhorn prefix
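For step 1 of test case 10, a sketch of the relevant entry in the longhorn-default-setting ConfigMap (exact key names can differ between Longhorn versions and install methods, so treat this as an assumption to verify):

apiVersion: v1
kind: ConfigMap
metadata:
  name: longhorn-default-setting
  namespace: longhorn-system
data:
  default-setting.yaml: |-
    default-data-path: /home/longhorn/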
11 Update Taint Toleration Setting Prerequisites

* All Longhorn volumes should be detached; Longhorn components will be restarted to apply the new tolerations.
* Note that “kubernetes.io” is used as the key of all Kubernetes default tolerations; do not use this substring in your toleration setting.
* In Longhorn, taint tolerations are semicolon-separated

1. Using Kubernetes, taint some nodes
For example, key1=value1:NoSchedule key2=value2:NoExecute
2. Update Taint Toleration Setting with key1=value1:NoSchedule;key2:NoExecute
* Longhorn Components will be restarted
* Longhorn Components should be rescheduled to tainted nodes only.
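For test case 11, a sketch of tainting nodes and updating the setting from the CLI (patching the settings.longhorn.io CR directly is an assumption; the UI Setting → General → Kubernetes Taint Toleration field is the documented path):

# Taint some nodes with the example taints from step 1
kubectl taint node node-01 key1=value1:NoSchedule
kubectl taint node node-02 key2=value2:NoExecute

# Update the Longhorn taint toleration setting (semicolon-separated)
kubectl -n longhorn-system patch settings.longhorn.io taint-toleration --type merge -p '{"value":"key1=value1:NoSchedule;key2:NoExecute"}'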
12 Default Taint Toleration Setting Prerequisites

* Create 3 node k8s cluster
* Using Kubernetes, taint some nodes
For example, key1=value1:NoSchedule key2=value2:NoExecute

1. Set taint-toleration to key1=value1:NoSchedule;key2:NoExecute in longhorn-default-setting ConfigMap
2. Deploy Longhorn
* Longhorn components should be only deployed to tainted nodes
13 Node Readiness * Delete the following Longhorn components pods
* Engine image

* Instance Manager (engine)

* Instance Manager (replica)
* Deleting any of these components should be reflected in the node’s Readiness
* Deleted component must be redeployed
14 Storage Minimal Available Percentage Setting * Prerequisites
* Longhorn Installed


1. Change Storage Minimal Available Percentage to 50%
2. Fill up the node disk to 55% of its capacity
* Storage Minimal Available Percentage default value is 25%
* Filled Disk Should be Unschedulable
* If Node has only one disk, Node also should be Unschedulable
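For step 2 of test case 14, a sketch of filling the Longhorn data disk (paths and sizes are examples; adjust to the node’s actual disk path and capacity):

# On a 100 GB disk, allocate a ~55 GB file under the Longhorn data path
fallocate -l 55G /var/lib/longhorn/filler
df -h /var/lib/longhorn   # confirm usage is above 55%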
15 Storage Over Provisioning Percentage * Prerequisites
* Longhorn Installed

* Assume the nodes have disks of 100 GB


1. Change Storage Over Provisioning Percentage to 300
2. Check Node Disks available size for allocation
* Storage Over Provisioning Percentage default value is 200
* Disk storage that can be allocated relative to the hard drive’s capacity should be 3x disk size == 300 GB

Additional Tests

# Scenario Steps Expected Results
1 Create Default Disk on Labeled Nodes - True 1. In Longhorn Setting - set Create Default Disk on Labeled Nodes - True
2. Scale up the number of worker nodes in Rancher
3. Verify the node is displayed in the Longhorn UI when it comes up as “Active” in Rancher.
4. Verify the node’s status is “Disabled”
5. Verify the default disk is NOT created in → Node → Edit Node and Disk
6. Add label node.longhorn.io/create-default-disk=true on the node
7. Verify the node is seen as “Schedulable” in the Longhorn UI. Verify in Node → Edit Node and Disk that the default disk is created
8. Add a node tag n1 to this node
9. Create a volume - volume-1, add node tag n1
10. Attach it to the same node
11. Verify the replica is running successfully.
1. Create Default Disk on Labeled Nodes should be set to True
2. When a new node is added, the node should show up as disabled on Longhorn UI
3. The node should NOT have any default disk
4. Label should be added on the node.
5. The node status changes to “Schedulable” and default disk is created on the node.

Node/Disk Eviction Test cases

# Scenario Test Steps Expected Results
1 Evict replicas of an attached volume from a node Pre-requisite:
1. Longhorn installed on a setup of 3 worker nodes and 1 etcd/control plane node

Test Steps:
1. Create a volume (3 replicas), attach it to a pod.
2. Write data to it and compute the md5sum (see the sketch after this case).
3. Set soft anti-affinity to True.
4. Evict replicas from one node
5. Verify the data after the replica is evicted.
1. A replica should be rebuilt in any other node except the evicted node.
2. The replica from the evicted node should be removed.
3. Data should be intact.
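A sketch for the write/checksum and eviction steps used throughout these cases (the in-pod mount path is an example, and the spec.evictionRequested field on the node CR is an assumption to verify against your Longhorn version; the UI path is Node → Edit Node → Eviction Requested):

# Write test data in the pod and record its checksum
kubectl exec <pod-name> -- dd if=/dev/urandom of=/data/test.bin bs=1M count=500
kubectl exec <pod-name> -- md5sum /data/test.bin

# Request eviction of all replicas from a node (scheduling must be disabled first)
kubectl -n longhorn-system patch nodes.longhorn.io <node-name> --type merge -p '{"spec":{"allowScheduling":false,"evictionRequested":true}}'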
2 Interrupt the rebuild after the eviction of replica from node Pre-requisite:
1. Longhorn installed on a setup of 3 worker nodes and 1 etcd/control plane node

Test Steps:
1. Create a volume (3 replicas), attach it to a pod.
2. Write data to it and compute md5sum.
3. Set soft anti-affinity to True.
4. Evict replicas from one node
5. When the replica is rebuilding, delete it.
1. A replica should be rebuilt in any other node except the evicted node.
2. On the deletion of the replica, the system should start to rebuild a new replica.
3. The replica from the evicted node will be removed.
4. Data should be intact.
3 Node evicted while restore rebuilding is in progress Pre-requisite:
1. Longhorn installed on a setup of 3 worker nodes and 1 etcd/control plane node

Test Steps:
1. Restore a volume (3 replicas).
2. While it is restoring, evict replicas from one node.
3. When the replica is rebuilding, delete it.
1. The rebuilding replica should be completed and then a new replica should get created on another node.
2. The replica from the evicted node should be removed.
3. Data should be intact.
4 Evict node with multiple replicas of the same volume Pre-requisite:
1. Longhorn installed on a setup of 3 worker nodes and 1 etcd/control plane node

Test Steps:
1. Set soft anti-affinity to True.
2. Create a volume (3 replicas) and make sure two replicas exist on the same node.
3. Write data to it and compute md5sum.
4. Evict replicas from the node where the two replicas exist.
5. Verify the data after the replicas are evicted.
1. Two replicas should be rebuilt in any other node except the evicted node.
2. The replica from the evicted node should be removed.
3. Data should be intact
5 Evict node with soft anti-affinity as false Pre-requisite:
1. Longhorn installed on a setup of 3 worker nodes and 1 etcd/control plane node

Test Steps:
1. Create a volume (3 replicas), attach it to a pod.
2. Write data to it and compute md5sum.
3. Evict replicas from one node
4. Verify the data after the replica is evicted.
1. Eviction should get stuck.
2. Volume scheduling should fail.
3. There should be logs like
[longhorn-manager-dkn9c] time="2020-09-09T00:23:56Z" level=debug msg="Creating one more replica for eviction"
[longhorn-manager-dkn9c] time="2020-09-09T00:23:56Z" level=error msg="There's no available disk for replica vol-1-r-6268393a, size 2147483648"
[longhorn-manager-dkn9c] time="2020-09-09T00:23:56Z" level=error msg="unable to schedule replica vol-1-r-6268393a of volume vol-1"
6 Add node after evicting node with soft anti-affinity as false Pre-requisite:
1. Longhorn installed on a setup of 3 worker nodes and 1 etcd/control plane node

Test Steps:
1. Create a volume (3 replicas), attach it to a pod.
2. Write data to it and compute md5sum.
3. Evict replicas from one node
4. Add a worker node to the cluster.
1. Eviction should get stuck but recover once the additional node is available for scheduling.
2. Volume scheduling should fail initially but should be successful once the additional node is available.
7 Multi operation Pre-requisite:
1. Longhorn installed on a setup of 3 worker nodes and 1 etcd/control plane node

Test Steps:

1. Create a volume (3 replicas), attach it to a pod.
2. Write data to it and compute md5sum.
3. Set soft anti-affinity to True.
4. Select two nodes and evict replicas from them.
1. Replica eviction should happen one by one from nodes
8 Replica eviction from disk with soft anti-affinity as True. Pre-requisite:

1. Longhorn installed on a setup of 3 worker nodes and 1 etcd/control plane node

Test Steps:

1. Add an additional disk to a node.
2. Create a volume (3 replicas), attach it to a pod.
3. Write data to it and compute md5sum.
4. Set soft anti-affinity to True.
5. Evict replicas from the additional disk.
1. A replica should be rebuilt in any other node except the evicted node.
2. The replica from the evicted disk should be removed.
3. Data should be intact
9 Replica eviction from disk with soft anti-affinity as False. Pre-requisite:

1. Longhorn installed on a setup of 3 worker nodes and 1 etcd/control plane node

Test Steps:

1. Add an additional disk to a node.
2. Create a volume (3 replicas), attach it to a pod.
3. Write data to it and compute md5sum.
4. Set soft anti-affinity to False.
5. Evict replicas from the additional disk.
1. A replica should be rebuilt in another disk on the same node.
2. The replica from the evicted disk should be removed.
3. Data should be intact
10 Interrupt eviction Pre-requisite:

1. Longhorn installed on a setup of 3 worker nodes and 1 etcd/control plane node

Test Steps:

1. Create a volume (3 replicas), attach it to a pod.
2. Write data to it and compute md5sum.
3. Set soft anti-affinity to True.
4. Evict replicas from one node
5. Stop eviction.
1. The replicas should be evicted one by one, and once the eviction is stopped, no further replicas should be evicted.
11 Evict replica of volume with DR volume Pre-requisite:

1. Longhorn installed on a setup of 3 worker nodes and 1 etcd/control plane node

Test Steps:

1. Create a DR volume (3 replicas)
2. Write data to it and compute md5sum.
3. Set soft anti-affinity to True.
4. Evict replicas from one node
5. Verify the data after the replica is evicted.
1. A replica should be rebuilt in any other node except the evicted node.
2. The replica from the evicted node should be removed.
3. Data should be intact.
12 Evict replica of volume with the restored volume Pre-requisite:

1. Longhorn installed on a setup of 3 worker nodes and 1 etcd/control plane node

Test Steps:

1. Restore a volume (3 replicas) from backup, attach it to a pod.
2. Set soft anti-affinity to True.
3. Evict replicas from one node
4. Verify the data after the replica is evicted.
1. A replica should be rebuilt in any other node except the evicted node.
2. The replica from the evicted node should be removed.
3. Data should be intact.
13 Evict replica of volume with the detached volume Pre-requisite:

1. Longhorn installed on a setup of 3 worker nodes and 1 etcd/control plane node

Test Steps:

1. Restore a volume (3 replicas) from backup.
2. Write data to it and compute md5sum.
3. Set soft anti-affinity to True.
4. Evict replicas from one node
5. Verify the data after the replica is evicted.
1. The volume should get attached to a node.
2. A replica should be rebuilt in any other node except the evicted node.
3. The replica from the evicted node should be removed.
4. The volume should get detached.
5. Data should be intact.
14 On upgraded setup Pre-requisite:

1. Longhorn v1.0.2 installed on a setup of 4 worker nodes and 1 etcd/control plane node
2. Create a volume, restored volume and DR volume

Test Steps:

1. Upgrade Longhorn to master.
2. Create a volume and attach it to a pod.
3. Write data to it and compute md5sum.
4. Evict replicas from one node
5. Verify the data after the replica is evicted.
1. The replicas of volumes with v1.0.2 engine should also get evicted except the DR volume.