ipamd/ipamd.go
err := c.awsClient.AllocAllIPAddress(eni.ID)
var err error
if warmIPTargetDefined {
	err = c.awsClient.AllocIPAddresses(eni.ID, curIPTarget)
Do these need to be separate functions?
}

func getWarmIPTarget() int {
	inputStr, found := os.LookupEnv("WARM_IP_TARGET")
Instead of WARM_IP_TARGET, which I don't think is very descriptive, I propose NUM_WARM_IPS or MAX_WARM_IPS, each of which signifies a number rather than a resource.
Out of curiosity, how is this supposed to work if you assigned a value of 5? Right from the start there will be 5 secondary IPs available for pods to use, but say 3 pods are started on that particular machine. Does the pool of secondary IPs get expanded back out to 5, and if so, does it allocate three new addresses one at a time? Or does the pool of available IPs become 2, and when those 2 IPs are eventually used, a new block of 5 IPs is attached on demand? The naming for the former might be NUM_WARM_IPS, while the latter might be IP_ALLOCATION_BLOCK_COUNT or something along those lines.
If WARM_IP_TARGET is 5:
- at the beginning, 5 IPs are available
- after 3 Pods are assigned, ipamD will try to allocate 3 more IPs in its controller loop. ipamD will use a single EC2 API call if possible. If it requires attaching 1 more ENI, or the EC2 control plane cannot allocate 3 IP addresses at that time, the ipamD controller loop will continuously retry until it reaches the target of 5 IPs.
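The reconcile math described above can be sketched as follows. This is an illustrative helper, not the PR's actual code; the function name and signature are hypothetical.

```go
package main

import "fmt"

// ipShortfall returns how many more IPs a reconcile loop would request
// so that the number of *available* (unassigned) IPs reaches the warm
// target. Hypothetical helper for illustration only.
func ipShortfall(warmIPTarget, totalIPs, assignedIPs int) int {
	available := totalIPs - assignedIPs
	if available >= warmIPTarget {
		return 0
	}
	return warmIPTarget - available
}

func main() {
	// WARM_IP_TARGET=5: start with 5 IPs, none assigned yet.
	fmt.Println(ipShortfall(5, 5, 0)) // 0: pool already at target
	// After 3 pods are assigned IPs, only 2 remain available,
	// so the loop tries to allocate 3 more.
	fmt.Println(ipShortfall(5, 5, 3)) // 3
}
```

The loop re-runs this calculation every interval, which is why a temporarily failed allocation is retried until the target is reached.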
There are 2 reasons I chose WARM_IP_TARGET:
- There is another configuration knob, WARM_ENI_TARGET, which works similarly but at the ENI level. The default is 1; e.g. if it is configured at 2, ipamD will try to keep 2 ENIs available. PR: Add config option for number of ENIs get preallocated #68
- ...TARGET means the desired value, and the ipamD controller loop will continuously try to reach this TARGET.
Ok, I see ipPoolMonitorInterval is set to run every 5 seconds to check whether it needs to increase the IP pool. Is that value perhaps set too low? Or, if many pods are launched simultaneously, will they all fall within the same time window so as not to cause excessive EC2 API calls?
With the default behavior (no WARM_IP_TARGET configured), the worst case in a 5-second interval is 2 EC2 API calls: one for ENI creation and one for allocating all IP addresses on the ENI. If the maximum number of pods is scheduled, the number of API calls is 2 * num-of-ENIs.
If WARM_IP_TARGET is configured too low, e.g. 1, you are right that it can cause excessive EC2 API calls. For example, on an m4.4xlarge it is 240 API calls compared to 16 with the default behavior.
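The m4.4xlarge numbers above (240 vs. 16) follow from its limits of 8 ENIs with up to 30 addresses each. A quick arithmetic sketch, with a hypothetical helper name:

```go
package main

import "fmt"

// worstCaseAllocCalls estimates worst-case EC2 API calls to fully
// populate a node's IP pool. Illustrative arithmetic only; the name
// and signature are not from the PR.
func worstCaseAllocCalls(numENIs, ipsPerENI int, lowWarmIPTarget bool) int {
	if !lowWarmIPTarget {
		// Default: per ENI, one creation call plus one call that
		// allocates all of that ENI's IP addresses at once.
		return 2 * numENIs
	}
	// WARM_IP_TARGET=1: worst case, one AssignPrivateIpAddresses
	// call per individual IP address.
	return numENIs * ipsPerENI
}

func main() {
	// m4.4xlarge: 8 ENIs, up to 30 addresses each.
	fmt.Println(worstCaseAllocCalls(8, 30, false)) // 16
	fmt.Println(worstCaseAllocCalls(8, 30, true))  // 240
}
```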
pkg/awsutils/awsutils.go
}

// AllocIPAddresses allocates numIPs IP addresses on an ENI
func (cache *EC2InstanceMetadataCache) AllocIPAddresses(eniID string, numIPs int) error {
This function should be combined with AllocAllIPAddress().
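One way to combine the two, as the reviewer suggests, is a single allocator where a non-positive count means "fill the ENI". This is a sketch of that idea with a hypothetical stand-in type, not the actual EC2InstanceMetadataCache code:

```go
package main

import "fmt"

// eniAllocator is a hypothetical stand-in for the EC2 wrapper; the
// real code would call ec2.AssignPrivateIpAddresses.
type eniAllocator struct {
	maxIPsPerENI int
}

// AllocIPAddresses sketches merging AllocAllIPAddress into
// AllocIPAddresses: numIPs <= 0 means "allocate every remaining
// address slot on the ENI". Returns how many IPs were requested.
func (a *eniAllocator) AllocIPAddresses(eniID string, numIPs int) (int, error) {
	if numIPs <= 0 || numIPs > a.maxIPsPerENI {
		numIPs = a.maxIPsPerENI
	}
	// Real code would set SecondaryPrivateIpAddressCount = numIPs
	// in the AssignPrivateIpAddresses request here.
	return numIPs, nil
}

func main() {
	a := &eniAllocator{maxIPsPerENI: 5} // t2.medium-sized ENI
	n, _ := a.AllocIPAddresses("eni-1", 0)
	fmt.Println(n) // 5: the "alloc all" path
	n, _ = a.AllocIPAddresses("eni-1", 2)
	fmt.Println(n) // 2: the warm-target path
}
```

Collapsing the two entry points this way keeps the EC2 call site in one place, so retry and error handling don't diverge between the two behaviors.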
It would be nice to have some kind of documentation for that feature.
Issue #114:
Description of changes
This PR allows the user to set an OS environment variable, WARM_IP_TARGET, to configure the number of pre-warmed IP addresses. You can modify amazon-vpc-cni.yaml to include a WARM_IP_TARGET value.
If WARM_IP_TARGET is NOT defined, ipamD falls back to the current allocation behavior. Today, by default, if the number of available IP addresses falls below one ENI's worth of addresses, ipamD allocates a new ENI. For example, for a t2.medium worker node, if the number of available IP addresses drops below 5, ipamD will allocate a new ENI and allocate IP addresses (5 addresses) on that new ENI.
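The default fallback decision above reduces to a simple threshold check. A minimal sketch, assuming a hypothetical helper name and the t2.medium figure of 5 pod addresses per ENI:

```go
package main

import "fmt"

// needNewENI mirrors the default behavior: when the count of available
// IPs drops below one ENI's worth of addresses, ipamD attaches a new
// ENI. Hypothetical helper, not the PR's actual code.
func needNewENI(availableIPs, ipsPerENI int) bool {
	return availableIPs < ipsPerENI
}

func main() {
	// t2.medium: each ENI holds 5 pod IP addresses.
	fmt.Println(needNewENI(6, 5)) // false: pool still healthy
	fmt.Println(needNewENI(4, 5)) // true: allocate a new ENI
}
```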
Caveat
If WARM_IP_TARGET is set too low and more Pods are expected to be scheduled on the node, ipamD will make more EC2 API calls to allocate IP addresses. For example, on an m4.4xlarge worker node, each ENI can have up to 30 addresses. If WARM_IP_TARGET is set to 1 and more than 30 pods are scheduled on the node, there will be 30 EC2 AssignPrivateIpAddresses() calls per ENI, compared to 1 EC2 AssignPrivateIpAddresses() call per ENI with the default behavior (as today); in the worst case for a node, that is 240 EC2 AssignPrivateIpAddresses() calls compared to 8 with the default behavior (as today). If the cluster is large and contains many worker nodes, this can run ipamD into EC2 AssignPrivateIpAddresses() API call throttling.
Tests
- define WARM_IP_TARGET=1 and use a t2.medium worker node, where each ENI can have up to 5 Pod IP addresses
- regression test to make sure the default behavior still works: do NOT define WARM_IP_TARGET, use a t2.medium worker
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.