Description
It is possible for the cleanup job pod to be assigned to the incorrect node.

To summarize, the common PVC cleanup pod should be scheduled on the same node as a running workspace pod that mounts the PVC (if one exists).

This issue happens when the workspace pod is scheduled on a different node than the one recorded in the claim-devworkspace PVC's volume.kubernetes.io/selected-node annotation.

How To Reproduce
Create a devworkspace. This will create a workspace pod, and a PVC named claim-devworkspace. Note the node <node-name> where the workspace pod is scheduled.

Create multiple more devworkspaces named code-latest-2, code-latest-3, code-latest-4, etc. until there are two workspace pods scheduled on a node <node-name-*> which is different from <node-name> from the previous step. These workspace pods will not be Running because of a PVC attach error.

For the sake of clarity, let's assume that
code-latest-3 and code-latest-4 have their pods scheduled on <node-name-*>.

Stop all devworkspaces except for code-latest-3 and code-latest-4. This will fix the PVC attach error, and the two workspace pods will reach the Running state.

Delete the code-latest-3 devworkspace. Since the PVC's volume.kubernetes.io/selected-node is <node-name> instead of <node-name-*>, there will be a multi-attach error for the common PVC cleanup pod.

Expected behavior
The common PVC cleanup pod should be scheduled on <node-name-*>.

Additional context
Here is an example where the workspace pod is scheduled on ip-10-0-98-78.us-west-2.compute.internal, but the claim-devworkspace PVC's volume.kubernetes.io/selected-node is ip-10-0-101-28.us-west-2.compute.internal:

output.mp4
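To state the expected behavior concretely: the cleanup Job's pod template would need a required node affinity for the node where a running workspace pod still mounts the common PVC (<node-name-*> in the steps above). The following is a minimal sketch only, assuming a plain batch/v1 Job; the Job name, image, and command are placeholders, not the DevWorkspace operator's actual cleanup job:

```yaml
# Hypothetical sketch: pin the cleanup pod to the node that already has the
# volume attached. Everything except the affinity/volume wiring is a placeholder.
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-claim-devworkspace        # placeholder name
spec:
  template:
    spec:
      restartPolicy: Never
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - <node-name-*>     # node where a running workspace pod mounts the PVC
      containers:
        - name: cleanup
          image: busybox                  # placeholder image
          command: ["sh", "-c", "rm -rf /workspace/*"]   # illustrative cleanup only
          volumeMounts:
            - name: workspace-data
              mountPath: /workspace
      volumes:
        - name: workspace-data
          persistentVolumeClaim:
            claimName: claim-devworkspace
```

With a required nodeAffinity like this, the scheduler cannot place the cleanup pod on <node-name>, so the multi-attach error from the reproduction steps would not occur.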