Related issue
https://github.com/longhorn/longhorn/issues/2120
Manual Tests:
Case 1: Existing Longhorn installation
- Install Longhorn master.
- Change the toleration in the Longhorn UI setting.
- Verify that the `longhorn.io/last-applied-tolerations` annotation and the tolerations of manager, driver deployer, and UI are not changed.
- Verify that the `longhorn.io/last-applied-tolerations` annotation and the tolerations for managed components (CSI components, IM pods, share manager pods, EI daemonset, backing-image-manager, cronjob) are updated correctly (see the command sketch below).
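A minimal kubectl sketch for these checks, assuming the default `longhorn-system` namespace and the default workload names (`longhorn-manager` for an unmanaged component, `longhorn-csi-plugin` as an example managed component); adjust the names to whatever your installation uses:

```bash
# Unmanaged component: annotation and tolerations should be unchanged after the UI edit.
kubectl -n longhorn-system get daemonset longhorn-manager \
  -o jsonpath='{.metadata.annotations}{"\n"}{.spec.template.spec.tolerations}{"\n"}'

# Managed component (e.g. the CSI plugin daemonset): annotation and tolerations should be updated.
kubectl -n longhorn-system get daemonset longhorn-csi-plugin \
  -o jsonpath='{.metadata.annotations}{"\n"}{.spec.template.spec.tolerations}{"\n"}'
```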
Case 2: New installation by Helm
- Install Longhorn master, set tolerations like:
```yaml
defaultSettings:
  taintToleration: "key=value:NoSchedule"
longhornManager:
  priorityClass: ~
  tolerations:
  - key: key
    operator: Equal
    value: value
    effect: NoSchedule
longhornDriver:
  priorityClass: ~
  tolerations:
  - key: key
    operator: Equal
    value: value
    effect: NoSchedule
longhornUI:
  priorityClass: ~
  tolerations:
  - key: key
    operator: Equal
    value: value
    effect: NoSchedule
```
- Verify that the toleration is added for: IM pods, Share Manager pods, CSI deployments, CSI daemonset, the backup jobs, manager, driver deployer, and UI.
- Uninstall the Helm release. Verify that the uninstall job has the same tolerations as the Longhorn manager and that the uninstallation succeeds (see the command sketch below).
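A sketch of this case, assuming the values above are saved as `values.yaml` and the chart is checked out at `./chart` (both placeholder paths), and that instance-manager pods carry the `longhorn.io/component=instance-manager` label used by recent Longhorn releases:

```bash
# Install the master chart with the tolerations from values.yaml.
helm install longhorn ./chart --namespace longhorn-system --create-namespace -f values.yaml

# Spot-check the toleration on a managed component and on the manager itself.
kubectl -n longhorn-system get pods -l longhorn.io/component=instance-manager \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.tolerations}{"\n"}{end}'
kubectl -n longhorn-system get daemonset longhorn-manager \
  -o jsonpath='{.spec.template.spec.tolerations}{"\n"}'

# Uninstall in the background (or from another terminal) so the cleanup job
# can be inspected while it exists; it should carry the same tolerations.
helm uninstall longhorn --namespace longhorn-system &
kubectl -n longhorn-system get jobs \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.tolerations}{"\n"}{end}'
```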
Case 3: Upgrading from Helm
- Install Longhorn v1.0.2 using Helm, set tolerations using Longhorn UI
- Upgrade Longhorn to the master version. Verify that the `longhorn.io/managed-by: longhorn-manager` label is not set for manager, driver deployer, and UI (see the command sketch after this case).
- Verify that the `longhorn.io/managed-by: longhorn-manager` label is added for: IM CRs, EI CRs, Share Manager CRs, IM pods, Share Manager pods, CSI services, CSI deployments, CSI daemonset.
- Verify that `longhorn.io/last-applied-tolerations` is set for: IM pods, Share Manager pods, CSI deployments, CSI daemonset.
- Edit the tolerations using the Longhorn UI and verify that the tolerations are updated only for components other than the Longhorn manager, driver deployer, and UI. The Longhorn manager, driver deployer, and UI pods should not get restarted.
- Upgrade the chart to specify tolerations for manager, driver deployer, and UI.
- Verify that the tolerations get applied.
- Repeat this test case with Longhorn v1.1.0 in step 1
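A sketch of the upgrade and label checks, assuming the default resource names (`longhorn-manager`, `longhorn-driver-deployer`, `longhorn-ui`) and a placeholder `./chart` path:

```bash
# Upgrade the existing release to the master chart.
helm upgrade longhorn ./chart --namespace longhorn-system

# Unmanaged workloads should NOT carry the managed-by label.
kubectl -n longhorn-system get daemonset longhorn-manager --show-labels
kubectl -n longhorn-system get deployment longhorn-driver-deployer longhorn-ui --show-labels

# Managed resources should carry it, e.g. pods/services/deployments/daemonsets and the Longhorn CRs.
kubectl -n longhorn-system get pods,services,deployments,daemonsets -l longhorn.io/managed-by=longhorn-manager
kubectl -n longhorn-system get instancemanagers.longhorn.io,engineimages.longhorn.io --show-labels
```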
Case 4: Upgrading from kubectl
- Install Longhorn v1.0.2 using kubectl, set tolerations using Longhorn UI
- Upgrade Longhorn to the master version. Verify that the `longhorn.io/managed-by: longhorn-manager` label is not set for manager, driver deployer, and UI.
- Verify that the `longhorn.io/managed-by: longhorn-manager` label is added for: IM CRs, EI CRs, Share Manager CRs, IM pods, Share Manager pods, CSI services, CSI deployments, CSI daemonset.
- Verify that `longhorn.io/last-applied-tolerations` is set for: IM pods, Share Manager pods, CSI deployments, CSI daemonset.
- Edit the tolerations using the Longhorn UI and verify that the tolerations are updated only for components other than the Longhorn manager, driver deployer, and UI. The Longhorn manager, driver deployer, and UI pods should not get restarted.
- Edit the YAML to specify tolerations for manager, driver deployer, and UI, then upgrade Longhorn with kubectl (see the command sketch after this case).
- Verify that the tolerations get applied.
- Repeat this test case with Longhorn v1.1.0 in step 1
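A sketch of the kubectl-based upgrade, assuming the deployment manifest is saved locally as `longhorn.yaml` (placeholder file name) and has already been edited to add the tolerations:

```bash
# Apply the edited manifest to upgrade the unmanaged components.
kubectl apply -f longhorn.yaml

# Confirm the tolerations landed on manager, driver deployer, and UI.
kubectl -n longhorn-system get daemonset longhorn-manager \
  -o jsonpath='{.spec.template.spec.tolerations}{"\n"}'
kubectl -n longhorn-system get deployment longhorn-driver-deployer \
  -o jsonpath='{.spec.template.spec.tolerations}{"\n"}'
kubectl -n longhorn-system get deployment longhorn-ui \
  -o jsonpath='{.spec.template.spec.tolerations}{"\n"}'
```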
Case 5: Node with taints
- Add some taints to all nodes in the cluster, e.g. `key=value:NoSchedule` (see the sketch below).
- Repeat cases 2, 3, and 4.
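A one-line sketch for applying the example taint to every node, and removing it again once the cases are done:

```bash
# Apply the example taint to every node in the cluster.
kubectl taint nodes --all key=value:NoSchedule

# Remove it again after the test cases (note the trailing "-").
kubectl taint nodes --all key=value:NoSchedule-
```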
Case 6: Priority Class UI
- Change the Priority Class setting in the Longhorn UI.
- Verify that Longhorn updates only the managed components (see the sketch below).
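A minimal check, assuming the default workload names; the CSI plugin daemonset stands in for any managed component here:

```bash
# Managed component: should reflect the new priority class from the setting.
kubectl -n longhorn-system get daemonset longhorn-csi-plugin \
  -o jsonpath='{.spec.template.spec.priorityClassName}{"\n"}'

# Unmanaged component: should keep its original priority class.
kubectl -n longhorn-system get daemonset longhorn-manager \
  -o jsonpath='{.spec.template.spec.priorityClassName}{"\n"}'
```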
Case 7: Priority Class Helm
- Change the Priority Class in Helm for manager, driver, and UI.
- Verify that only the priority class names of manager, driver, and UI get updated (see the sketch below).
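A sketch of the Helm-side change, using `example-priority` as a placeholder priority class that must already exist in the cluster, and reusing the earlier placeholder `values.yaml` so other settings are preserved:

```bash
# Set the priority class only for the unmanaged components.
helm upgrade longhorn ./chart --namespace longhorn-system -f values.yaml \
  --set longhornManager.priorityClass=example-priority \
  --set longhornDriver.priorityClass=example-priority \
  --set longhornUI.priorityClass=example-priority

# Only manager, driver deployer, and UI should pick up the new class; managed
# components such as the CSI plugin daemonset should keep the old one.
kubectl -n longhorn-system get daemonset longhorn-manager \
  -o jsonpath='{.spec.template.spec.priorityClassName}{"\n"}'
kubectl -n longhorn-system get daemonset longhorn-csi-plugin \
  -o jsonpath='{.spec.template.spec.priorityClassName}{"\n"}'
```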