DaemonSets not being correctly calculated when choosing a node #715
Labels
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/bug: Categorizes issue or PR as related to a bug.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
triage/accepted: Indicates an issue or PR is ready to be actively worked on.
v1.x: Issues prioritized for post-1.0.
Version
Karpenter Version: v0.24.0
Kubernetes Version: v1.21.0
Context
Due to the significant resource usage of certain DaemonSets, particularly when running on larger machines, we have split these DaemonSets into variants using affinity rules based on Karpenter's labels, such as `karpenter.k8s.aws/instance-cpu` or `karpenter.k8s.aws/instance-size`.
Expected Behavior
When selecting a node for provisioning, Karpenter should only account for the DaemonSets that will actually schedule on that node.
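For concreteness, the split described in the Context can be sketched roughly as follows. This is an illustrative manifest, not one from our cluster: the DaemonSet name, image, and CPU threshold are hypothetical, but the affinity pattern uses Karpenter's real `karpenter.k8s.aws/instance-cpu` label with the `Gt` node-affinity operator:

```yaml
# Illustrative sketch: a DaemonSet variant intended only for large nodes.
# Names, image, and values are examples; the label key is Karpenter's
# well-known instance-cpu label.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: heavy-agent-large   # hypothetical name
spec:
  selector:
    matchLabels:
      app: heavy-agent-large
  template:
    metadata:
      labels:
        app: heavy-agent-large
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: karpenter.k8s.aws/instance-cpu
                    operator: Gt
                    values: ["8"]   # only land on nodes with more than 8 vCPUs
      containers:
        - name: agent
          image: example.com/agent:latest   # placeholder image
          resources:
            requests:
              cpu: "2"   # heavy request that should not count against small nodes
```

When sizing a small candidate node, Karpenter should exclude this DaemonSet's requests, since its node affinity cannot match that node.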
Actual Behavior
It appears that Karpenter wrongly includes all of the split DaemonSet variants instead of only the applicable one, which can result in poor instance selection when provisioning new nodes, and in inaccurate consolidation decisions.
Steps to Reproduce the Problem
Create a simple Pod with a 1 CPU request. Karpenter should provision a 2-vCPU (or at most 4-vCPU) instance, but it instead provisions a large (>10 vCPU) machine because it wrongly includes the bigger DaemonSet's requests when evaluating the 2/4/8 vCPU instance types.
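A minimal Pod to trigger this provisioning might look like the following sketch (name and image are illustrative); with only the small DaemonSet variant eligible for the node, a 2-4 vCPU instance should satisfy it:

```yaml
# Minimal repro pod: a single 1-CPU request.
apiVersion: v1
kind: Pod
metadata:
  name: repro   # hypothetical name
spec:
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:latest
      command: ["sleep", "infinity"]
      resources:
        requests:
          cpu: "1"
```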
The same behavior occurs when using `karpenter.k8s.aws/instance-size`, or even `podAntiAffinity` rules, in the DaemonSet affinities.
Thank you for your help in addressing this issue.
Community Note