Is this a request for help?:
New Azure managed disks are not mounting on the Kubernetes nodes, and the Azure API is slow.
Is this an ISSUE or FEATURE REQUEST? (choose one):
It's an ISSUE
What version of acs-engine?:
ACS Engine version : 0.7.0
Kubernetes version : 1.7.5
kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-14T06:55:55Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T08:56:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
After the big update yesterday, I'm seeing a lot of errors about attaching or detaching Azure Managed Disks on Kubernetes, especially on the newly created ones.
I recreated all my stacks (Elasticsearch/Kafka/ZooKeeper/...) using a PersistentVolumeClaim with my StatefulSets after the big update yesterday.
None of my new Azure Disks can be mounted on my workers, even though they should be.
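For context, this is roughly how I'm checking the claims and the stuck pods on my side (the namespaces and pod names are the ones that appear in the controller-manager events below; substitute your own):

# list the claims and check they are Bound
kubectl -n preproduction get pvc
kubectl -n integration get pvc

# the pod events should show the same FailedAttachVolume warnings as the controller-manager
kubectl -n preproduction describe pod mongo-0
kubectl -n integration describe pod grafana-1826545959-8qw5q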
Here are the logs from the kube-controller-manager:
W0105 10:24:00.133401 1 reconciler.go:267] Multi-Attach error for volume "pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/3770e200-bb58-4d3b-afcb-66c7ce083c3f/resourceGroups/k8s-fleeters-cluster-preproduction/providers/Microsoft.Compute/disks/fleeters-59cd2dcd-dynamic-pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a") from node "k8s-k8sworkers-20163042-3" Volume is already exclusively attached to one node and can't be attached to another
I0105 10:24:00.133439 1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"integration", Name:"grafana-1826545959-8qw5q", UID:"20425f47-f1f7-11e7-92c5-000d3ab769d6", APIVersion:"v1", ResourceVersion:"22791080", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' Multi-Attach error for volume "pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a" Volume is already exclusively attached to one node and can't be attached to another
I0105 10:24:00.133458 1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"preproduction", Name:"mongo-0", UID:"5492929f-f1b4-11e7-92c5-000d3ab769d6", APIVersion:"v1", ResourceVersion:"22247369", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' Multi-Attach error for volume "pvc-8743f77c-a472-11e7-b780-000d3ab769d6" Volume is already exclusively attached to one node and can't be attached to another
W0105 10:24:00.233803 1 reconciler.go:267] Multi-Attach error for volume "pvc-8743f77c-a472-11e7-b780-000d3ab769d6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/3770e200-bb58-4d3b-afcb-66c7ce083c3f/resourceGroups/k8s-fleeters-cluster-preproduction/providers/Microsoft.Compute/disks/fleeters-59cd2dcd-dynamic-pvc-8743f77c-a472-11e7-b780-000d3ab769d6") from node "k8s-k8sworkers-20163042-2" Volume is already exclusively attached to one node and can't be attached to another
I0105 10:24:00.233910 1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"preproduction", Name:"mongo-0", UID:"5492929f-f1b4-11e7-92c5-000d3ab769d6", APIVersion:"v1", ResourceVersion:"22247369", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' Multi-Attach error for volume "pvc-8743f77c-a472-11e7-b780-000d3ab769d6" Volume is already exclusively attached to one node and can't be attached to another
W0105 10:24:00.234034 1 reconciler.go:267] Multi-Attach error for volume "pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/3770e200-bb58-4d3b-afcb-66c7ce083c3f/resourceGroups/k8s-fleeters-cluster-preproduction/providers/Microsoft.Compute/disks/fleeters-59cd2dcd-dynamic-pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a") from node "k8s-k8sworkers-20163042-3" Volume is already exclusively attached to one node and can't be attached to another
I0105 10:24:00.234063 1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"integration", Name:"grafana-1826545959-8qw5q", UID:"20425f47-f1f7-11e7-92c5-000d3ab769d6", APIVersion:"v1", ResourceVersion:"22791080", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' Multi-Attach error for volume "pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a" Volume is already exclusively attached to one node and can't be attached to another
W0105 10:24:00.334429 1 reconciler.go:267] Multi-Attach error for volume "pvc-8743f77c-a472-11e7-b780-000d3ab769d6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/3770e200-bb58-4d3b-afcb-66c7ce083c3f/resourceGroups/k8s-fleeters-cluster-preproduction/providers/Microsoft.Compute/disks/fleeters-59cd2dcd-dynamic-pvc-8743f77c-a472-11e7-b780-000d3ab769d6") from node "k8s-k8sworkers-20163042-2" Volume is already exclusively attached to one node and can't be attached to another
W0105 10:24:00.334518 1 reconciler.go:267] Multi-Attach error for volume "pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/3770e200-bb58-4d3b-afcb-66c7ce083c3f/resourceGroups/k8s-fleeters-cluster-preproduction/providers/Microsoft.Compute/disks/fleeters-59cd2dcd-dynamic-pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a") from node "k8s-k8sworkers-20163042-3" Volume is already exclusively attached to one node and can't be attached to another
I0105 10:24:00.334462 1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"preproduction", Name:"mongo-0", UID:"5492929f-f1b4-11e7-92c5-000d3ab769d6", APIVersion:"v1", ResourceVersion:"22247369", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' Multi-Attach error for volume "pvc-8743f77c-a472-11e7-b780-000d3ab769d6" Volume is already exclusively attached to one node and can't be attached to another
I0105 10:24:00.334586 1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"integration", Name:"grafana-1826545959-8qw5q", UID:"20425f47-f1f7-11e7-92c5-000d3ab769d6", APIVersion:"v1", ResourceVersion:"22791080", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' Multi-Attach error for volume "pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a" Volume is already exclusively attached to one node and can't be attached to another
W0105 10:24:00.434953 1 reconciler.go:267] Multi-Attach error for volume "pvc-8743f77c-a472-11e7-b780-000d3ab769d6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/3770e200-bb58-4d3b-afcb-66c7ce083c3f/resourceGroups/k8s-fleeters-cluster-preproduction/providers/Microsoft.Compute/disks/fleeters-59cd2dcd-dynamic-pvc-8743f77c-a472-11e7-b780-000d3ab769d6") from node "k8s-k8sworkers-20163042-2" Volume is already exclusively attached to one node and can't be attached to another
I0105 10:24:00.434987 1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"preproduction", Name:"mongo-0", UID:"5492929f-f1b4-11e7-92c5-000d3ab769d6", APIVersion:"v1", ResourceVersion:"22247369", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' Multi-Attach error for volume "pvc-8743f77c-a472-11e7-b780-000d3ab769d6" Volume is already exclusively attached to one node and can't be attached to another
W0105 10:24:00.435103 1 reconciler.go:267] Multi-Attach error for volume "pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/3770e200-bb58-4d3b-afcb-66c7ce083c3f/resourceGroups/k8s-fleeters-cluster-preproduction/providers/Microsoft.Compute/disks/fleeters-59cd2dcd-dynamic-pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a") from node "k8s-k8sworkers-20163042-3" Volume is already exclusively attached to one node and can't be attached to another
I0105 10:24:00.435139 1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"integration", Name:"grafana-1826545959-8qw5q", UID:"20425f47-f1f7-11e7-92c5-000d3ab769d6", APIVersion:"v1", ResourceVersion:"22791080", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' Multi-Attach error for volume "pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a" Volume is already exclusively attached to one node and can't be attached to another
W0105 10:24:00.535432 1 reconciler.go:267] Multi-Attach error for volume "pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/3770e200-bb58-4d3b-afcb-66c7ce083c3f/resourceGroups/k8s-fleeters-cluster-preproduction/providers/Microsoft.Compute/disks/fleeters-59cd2dcd-dynamic-pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a") from node "k8s-k8sworkers-20163042-3" Volume is already exclusively attached to one node and can't be attached to another
I0105 10:24:00.535472 1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"integration", Name:"grafana-1826545959-8qw5q", UID:"20425f47-f1f7-11e7-92c5-000d3ab769d6", APIVersion:"v1", ResourceVersion:"22791080", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' Multi-Attach error for volume "pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a" Volume is already exclusively attached to one node and can't be attached to another
W0105 10:24:00.535692 1 reconciler.go:267] Multi-Attach error for volume "pvc-8743f77c-a472-11e7-b780-000d3ab769d6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/3770e200-bb58-4d3b-afcb-66c7ce083c3f/resourceGroups/k8s-fleeters-cluster-preproduction/providers/Microsoft.Compute/disks/fleeters-59cd2dcd-dynamic-pvc-8743f77c-a472-11e7-b780-000d3ab769d6") from node "k8s-k8sworkers-20163042-2" Volume is already exclusively attached to one node and can't be attached to another
I0105 10:24:00.535715 1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"preproduction", Name:"mongo-0", UID:"5492929f-f1b4-11e7-92c5-000d3ab769d6", APIVersion:"v1", ResourceVersion:"22247369", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' Multi-Attach error for volume "pvc-8743f77c-a472-11e7-b780-000d3ab769d6" Volume is already exclusively attached to one node and can't be attached to another
W0105 10:24:00.636100 1 reconciler.go:267] Multi-Attach error for volume "pvc-8743f77c-a472-11e7-b780-000d3ab769d6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/3770e200-bb58-4d3b-afcb-66c7ce083c3f/resourceGroups/k8s-fleeters-cluster-preproduction/providers/Microsoft.Compute/disks/fleeters-59cd2dcd-dynamic-pvc-8743f77c-a472-11e7-b780-000d3ab769d6") from node "k8s-k8sworkers-20163042-2" Volume is already exclusively attached to one node and can't be attached to another
W0105 10:24:00.636142 1 reconciler.go:267] Multi-Attach error for volume "pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/3770e200-bb58-4d3b-afcb-66c7ce083c3f/resourceGroups/k8s-fleeters-cluster-preproduction/providers/Microsoft.Compute/disks/fleeters-59cd2dcd-dynamic-pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a") from node "k8s-k8sworkers-20163042-3" Volume is already exclusively attached to one node and can't be attached to another
I0105 10:24:00.636168 1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"integration", Name:"grafana-1826545959-8qw5q", UID:"20425f47-f1f7-11e7-92c5-000d3ab769d6", APIVersion:"v1", ResourceVersion:"22791080", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' Multi-Attach error for volume "pvc-4c0b9e2b-a48d-11e7-a344-000d3ab7665a" Volume is already exclusively attached to one node and can't be attached to another
I0105 10:24:00.636183 1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"preproduction", Name:"mongo-0", UID:"5492929f-f1b4-11e7-92c5-000d3ab769d6", APIVersion:"v1", ResourceVersion:"22247369", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' Multi-Attach error for volume "pvc-8743f77c-a472-11e7-b780-000d3ab769d6" Volume is already exclusively attached to one node and can't be attached to another
W0105 10:24:00.736504 1 reconciler.go:267] Multi-Attach error for volume "pvc-8743f77c-a472-11e7-b780-000d3ab769d6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/3770e200-bb58-4d3b-afcb-66c7ce083c3f/resourceGroups/k8s-fleeters-cluster-preproduction/providers/Microsoft.Compute/disks/fleeters-59cd2dcd-dynamic-pvc-8743f77c-a472-11e7-b780-000d3ab769d6") from node "k8s-k8sworkers-20163042-2" Volume is already exclusively attached to one node and can't be attached to another
By the way, this is an issue that the Azure community has faced before, and so have I. Someone experienced the same kind of problem yesterday on another GitHub thread: Azure/ACS#12
This is really critical: during yesterday's updates the machines went down gracefully, but the Azure API was very slow, and so were the disk attach and detach operations.
Kubernetes was trying to move my pods, together with their disks, to another worker while the current one was upgrading (more than 40 minutes per machine), but that was impossible because the disks could not detach from that worker.
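In case it helps, this is roughly how I've been checking on the Azure side which VM still holds a disk (resource group and disk name copied from the logs above; the managedBy field points at the VM the disk is currently attached to):

az disk show \
  --resource-group k8s-fleeters-cluster-preproduction \
  --name fleeters-59cd2dcd-dynamic-pvc-8743f77c-a472-11e7-b780-000d3ab769d6 \
  --query managedBy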
I had to wait close to an hour, sometimes more, for the 3 Azure Disks of my MongoDB replica set to re-attach after each machine reboot. I thought Azure Disks were safe for production, but yesterday they basically weren't.
In short, since yesterday I'm not able to mount any disks on my StatefulSets in my staging cluster.
I restarted the kube-controller-manager and the kubelet on the affected workers, and even rebooted the virtual machines, but it's always the same detach/attach error.
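For reference, the restarts were along these lines (this assumes kubelet runs as a systemd unit and the controller-manager runs as a container on the master, which is how my acs-engine nodes are set up):

# on each affected worker: restart the kubelet
sudo systemctl restart kubelet

# on the master: restart the kube-controller-manager container
# (the container id is a placeholder, find it with docker ps)
sudo docker ps | grep kube-controller-manager
sudo docker restart <container-id>

# last resort: reboot the VM entirely
sudo reboot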
Please, does anyone have any information on this? The email I sent under my Direct Professional Support plan has been unanswered for 24 hours now...
How to reproduce it (as minimally and precisely as possible):
Apparently someone else experienced this: Azure/ACS#12