I'm running case1 in #399
```
Non-terminated Pods:         (6 in total)
  Namespace                  Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                       ------------  ----------  ---------------  -------------
  kube-system                calico-node-qzn36          250m (2%)     0 (0%)      0 (0%)           0 (0%)
  wuyi05-baidu-com           mnist14-trainer-k22dp      2 (16%)       3 (25%)     8Gi (8%)         8Gi (8%)
  wuyi05-baidu-com           mnist15-trainer-z4s0r      2 (16%)       3 (25%)     8Gi (8%)         8Gi (8%)
  wuyi05-baidu-com           mnist17-trainer-wqrpz      2 (16%)       3 (25%)     8Gi (8%)         8Gi (8%)
  wuyi05-baidu-com           mnist5-pserver-fk552       2 (16%)       3 (25%)     5Gi (5%)         5Gi (5%)
  wuyi05-baidu-com           mnist9-pserver-vb1nz       2 (16%)       3 (25%)     5Gi (5%)         5Gi (5%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  10250m (85%)  15 (125%)   34Gi (36%)       34Gi (36%)
```
The node's CPU requests are already 10250m out of 12 cores (85%), so only about 1750m is idle, while each pod requests 2 cores (2000m). A new pod can't fit on the node, but the autoscaler still scales up in this case.
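For reference, the arithmetic implied by the `kubectl describe node` output above (a minimal sketch; the 12-core capacity is inferred from 10250m being 85%, and all names here are just for illustration):

```go
package main

import "fmt"

func main() {
	// Values taken from the node description above.
	nodeCPUCapacity := int64(12000) // millicores, inferred from 10250m == 85%
	nodeCPURequested := int64(10250)
	podCPURequest := int64(2000) // each trainer/pserver pod requests 2 cores

	idle := nodeCPUCapacity - nodeCPURequested
	fmt.Printf("idle CPU on node: %dm\n", idle) // 1750m
	if podCPURequest > idle {
		// 2000m > 1750m: the new pod cannot be scheduled onto this node,
		// so scaling the job up only produces a Pending pod.
		fmt.Println("pod does not fit; autoscaler should not scale up")
	}
}
```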
Thanks, I think the autoscaler should compare the cluster's idle resources with the job's resource requests before scaling up. Will fix this bug.
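A minimal sketch of the kind of check meant here (the `Node`/`Job` types, field names, and `canScaleUp` are hypothetical stand-ins, not the actual autoscaler API): before growing a job, verify that at least one node has enough idle CPU and memory for one more pod.

```go
package main

import "fmt"

// Node and Job are simplified stand-ins for the real cluster and job
// types used by the autoscaler; they exist only for illustration.
type Node struct {
	AllocatableCPU int64 // millicores
	RequestedCPU   int64 // millicores already requested by pods
	AllocatableMem int64 // bytes
	RequestedMem   int64 // bytes
}

type Job struct {
	Name   string
	PodCPU int64 // per-pod CPU request, millicores
	PodMem int64 // per-pod memory request, bytes
}

// canScaleUp reports whether some node has enough idle CPU and memory
// to hold one more pod of the job.
func canScaleUp(nodes []Node, job Job) bool {
	for _, n := range nodes {
		idleCPU := n.AllocatableCPU - n.RequestedCPU
		idleMem := n.AllocatableMem - n.RequestedMem
		if job.PodCPU <= idleCPU && job.PodMem <= idleMem {
			return true
		}
	}
	return false
}

func main() {
	gi := int64(1 << 30)
	// Numbers roughly matching the node in this issue: ~12 cores with
	// 10250m already requested, and a trainer pod asking for 2 cores / 8Gi.
	nodes := []Node{{AllocatableCPU: 12000, RequestedCPU: 10250,
		AllocatableMem: 94 * gi, RequestedMem: 34 * gi}}
	job := Job{Name: "mnist-trainer", PodCPU: 2000, PodMem: 8 * gi}

	if canScaleUp(nodes, job) {
		fmt.Println("scale up", job.Name)
	} else {
		fmt.Println("skip scale-up:", job.Name, "does not fit on any node")
	}
}
```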