Cannot set --max-pods in the eks configuration #2551
Comments
When I remove a few instance types and leave only the one that allows a bigger number of pods, I get a limit of 29 pods per node. But I still cannot reach the goal of 110 pods, so the question still stands: how do I set max-pods to 110?
It looks like I have the same issue over here. Any updates from your side, @insider89?
@Pionerd I didn't find a way to set max-pods yet.
I hate to say this, but I recreated my environment from scratch and now my max_pods are 110... The following is sufficient, no need for
@Pionerd I have this flag enabled as well for the vpc-cni plugin, but the max pods per node still depends on the instance type I provide in the node group configuration.
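(For context, and as far as I understand the EKS defaults: without prefix delegation the EKS-optimized AMI computes the pod ceiling from the instance's networking limits as ENIs × (IPv4 addresses per ENI − 1) + 2, e.g. an m5.large with 3 ENIs gives 3 × (10 − 1) + 2 = 29, which is why the limit tracks the instance type; prefix delegation is what lifts that ceiling toward kubelet's default of 110.)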
Hi @insider89, I just ran into the issue again with exactly the same code as before. Looks like some kind of timing issue still. What worked for me (this time, no guarantees) is leaving the cluster intact and only removing and recreating the existing node group.
Hello guys. For me it looks like the problem is not in Terraform itself but in AWS: Amazon's bootstrap script seems to override the provided values.
It is not an elegant solution, but it works: it replaces, on the fly, the line in the bootstrap script that is responsible for the max-pods setting. As a result, the nodes come up with the intended max-pods value.
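For reference, here is a minimal sketch of the patch-the-bootstrap-script approach described above, assuming an EKS-optimized Amazon Linux AMI and that `pre_bootstrap_user_data` is rendered ahead of the `bootstrap.sh` call for your node group; the node group name, instance type, and exact `sed` pattern are illustrative, not the original poster's code:

```hcl
eks_managed_node_groups = {
  example = {
    instance_types = ["m5.large"] # placeholder

    # Injected before /etc/eks/bootstrap.sh runs on EKS-optimized AMIs.
    pre_bootstrap_user_data = <<-EOT
      # Disable the per-instance-type max-pods calculation; kubelet should then
      # fall back to its own default of 110 pods per node. The exact pattern may
      # need adjusting for the bootstrap.sh shipped with your AMI version.
      sed -i 's/^USE_MAX_PODS=.*/USE_MAX_PODS=false/' /etc/eks/bootstrap.sh
    EOT
  }
}
```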
For now though, I'm closing this out since there are no further actions (that I am aware of) that the module can take to improve upon this area.
I tried your workaround, but I get a Terraform error. Any idea?
I think you need to escape all the $ characters?
@CostinaDamir
Also, you can replace
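For anyone hitting the same error: a small sketch of the escaping being suggested, assuming the workaround embeds shell variables inside a Terraform heredoc (the variable name here is illustrative):

```hcl
locals {
  # "$${...}" inside a Terraform heredoc renders as a literal "${...}" in the
  # generated script, so the shell expands the variable instead of Terraform
  # trying to interpolate it.
  user_data_snippet = <<-EOT
    echo "extra kubelet args: $${KUBELET_EXTRA_ARGS}"
  EOT
}
```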
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
Cannot override `max-pods` with the latest 19.12 module. I have a cluster provisioned with m2.large instances, which sets 17 pods per node by default. I've set `ENABLE_PREFIX_DELEGATION = "true"` and `WARM_PREFIX_TARGET = "1"` for the vpc-cni addon, but it doesn't help; I still have 17 pods per node. In the Launch templates I see the following:

I tried to provide the following part to my managed node group configuration, but the module just ignores it:
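For reference, a hedged sketch of how the `ENABLE_PREFIX_DELEGATION` / `WARM_PREFIX_TARGET` settings are typically wired into the module's `cluster_addons` block; the cluster name, version, and the `before_compute` flag reflect my understanding of recent 19.x usage rather than the reporter's exact code:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.12"

  cluster_name    = "example" # placeholder
  cluster_version = "1.26"    # placeholder

  cluster_addons = {
    vpc-cni = {
      most_recent = true
      # Create/update the addon before the node groups so new nodes come up
      # with prefix delegation already active (related to the "timing issue"
      # mentioned in the comments above).
      before_compute = true
      configuration_values = jsonencode({
        env = {
          ENABLE_PREFIX_DELEGATION = "true"
          WARM_PREFIX_TARGET       = "1"
        }
      })
    }
  }
}
```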
Before you submit an issue, please perform the following first:
1. Remove the `.terraform` directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): `rm -rf .terraform/`
2. Re-run `terraform init`
Versions
Module version [Required]: 19.12
Terraform version:
Reproduction Code [Required]
Expected behavior
Have 50 pods per node
Actual behavior
Have 17 pods per node
Additional context
I have gone through different issues, but didn't find how to change the max-pods. This suggestion doesn't work.