Very slow performance due to excessive reconfiguration in models #1921
Comments
Maybe something as simple as this is good?:

```diff
--- a/kubernetes/client/api_client.py
+++ b/kubernetes/client/api_client.py
@@ -638,6 +638,7 @@ class ApiClient(object):
                     value = data[klass.attribute_map[attr]]
                     kwargs[attr] = self.__deserialize(value, attr_type)
+        kwargs["local_vars_configuration"] = self.configuration
         instance = klass(**kwargs)

         if hasattr(instance, 'get_real_child_model'):
```

The script runs in 8 seconds with it. |
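To illustrate why forwarding the client's configuration helps, here is a minimal, self-contained sketch. The `Configuration` and `Model` classes below are simplified stand-ins, not the real generated code; the counter attribute is added purely for demonstration.

```python
import multiprocessing


class Configuration:
    """Stand-in for the generated Configuration class (simplified)."""
    instances_created = 0

    def __init__(self):
        Configuration.instances_created += 1
        # The real constructor also adjusts logger levels here.
        self.connection_pool_maxsize = multiprocessing.cpu_count() * 5


class Model:
    """Stand-in for a generated model such as V1Pod."""
    def __init__(self, local_vars_configuration=None):
        if local_vars_configuration is None:
            # Expensive default: a fresh Configuration per instance.
            local_vars_configuration = Configuration()
        self.local_vars_configuration = local_vars_configuration


# Without the patch: one Configuration per deserialized object.
models = [Model() for _ in range(1000)]
assert Configuration.instances_created == 1000

# With the patch, the deserializer passes one shared configuration.
Configuration.instances_created = 0
shared = Configuration()
models = [Model(local_vars_configuration=shared) for _ in range(1000)]
assert Configuration.instances_created == 1
```

With the shared configuration, the per-model cost of `cpu_count()` and logging setup disappears entirely.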
any change to the file has to be done in the generator. |
/assign @yliaog |
I assume this file needs to be changed? Though which version? The code here was generated with 4.3.0 and the latest is 6.2.1, if I interpret #1943 correctly. Is the used version still updated? What is the best way to get this fixed fast? This issue is quite severe. |
please submit the fix in the openapi-generator, that is the best way to fix it. |
To which version? (in addition to master) |
to the latest, then backport it to the version this repo is using currently. |
which is? |
If `local_vars_configuration` is not passed, the models create a new configuration object, which configures logging and determines the CPU count every time. This causes extreme performance issues when deserializing larger sets of items. See also kubernetes-client/python#1921
The generator has been updated on its main branch. I strongly recommend patching this locally for the current release. |
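A local patch can mirror the one-line diff by overriding the client's model-deserialization step so every model receives the client's own configuration. The sketch below uses stand-in classes rather than the real generated code (the real target is the private deserialization helper in `kubernetes/client/api_client.py`, whose exact name and signature should be checked against the installed release):

```python
import multiprocessing


class Configuration:
    """Stand-in for the generated Configuration; construction is the hot spot."""
    constructions = 0

    def __init__(self):
        Configuration.constructions += 1
        self.connection_pool_maxsize = multiprocessing.cpu_count() * 5


class V1Thing:
    """Stand-in for a generated model class."""
    attribute_map = {"name": "name"}

    def __init__(self, name=None, local_vars_configuration=None):
        if local_vars_configuration is None:
            local_vars_configuration = Configuration()
        self.local_vars_configuration = local_vars_configuration
        self.name = name


class ApiClient:
    """Stand-in client with the pre-patch deserialization behaviour."""
    def __init__(self):
        self.configuration = Configuration()

    def deserialize_model(self, data, klass):
        kwargs = {attr: data[key] for attr, key in klass.attribute_map.items()}
        return klass(**kwargs)


# Local patch, mirroring the one-line diff: forward the client's configuration.
def _patched_deserialize_model(self, data, klass):
    kwargs = {attr: data[key] for attr, key in klass.attribute_map.items()}
    kwargs["local_vars_configuration"] = self.configuration
    return klass(**kwargs)


ApiClient.deserialize_model = _patched_deserialize_model

client = ApiClient()  # constructs exactly one Configuration
objs = [client.deserialize_model({"name": str(i)}, V1Thing) for i in range(100)]
assert Configuration.constructions == 1  # no per-model Configuration anymore
```

The same pattern applies to the real client, with the caveat that its deserialization method is private (name-mangled), so a vendored copy of the file is often the less fragile route.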
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
@yliaog are there any plans to update the project to use the latest/fixed version of openapi-generator? |
/lifecycle stale |
/lifecycle rotten |
/remove-lifecycle rotten |
This problem still exists in the latest version, 28.1.0. If you have no plans to update the generator to the fixed version, can you please apply this patch locally? This is wasting loads of CPU cycles for every user. |
/lifecycle stale |
/remove-lifecycle stale |
/lifecycle stale |
/lifecycle rotten |
/remove-lifecycle rotten |
Thanks for fixing this in the OpenAPI layer @juliantaylor. Adding a +1: we have applied a gross local change to our repo to apply the same fix and would be thrilled to be able to use the stock API client. |
Has this problem been solved? I also have the same issue when switching context using |
/lifecycle stale |
/remove-lifecycle stale |
What happened (please include outputs or screenshots):
Running this simple script in a big cluster takes about 30 seconds to execute:
As you can see, it has very high system CPU usage.
Running this under a profiler shows the following interesting things:

```
190594   10.803    0.000   10.803    0.000 {built-in method posix.cpu_count}
```

10 seconds are spent running multiprocessing.cpu_count, which accounts for most of the system usage.

```
381195    0.216    0.000    7.443    0.000 __init__.py:1448(setLevel)
```

7 seconds are spent configuring logging.
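A profile like the one above can be collected with the standard `cProfile` and `pstats` modules. The sketch below profiles a dummy workload standing in for the real deserialization loop; the `workload` function is purely illustrative, not part of the client.

```python
import cProfile
import io
import multiprocessing
import pstats


def workload():
    # Dummy stand-in: each deserialized model indirectly calls
    # multiprocessing.cpu_count() via a fresh Configuration.
    for _ in range(1000):
        multiprocessing.cpu_count()


profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Render the top entries by total time, like the rows quoted above.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("tottime").print_stats(5)
report = out.getvalue()
assert "cpu_count" in report  # the hot spot shows up in the report
```

Sorting by `tottime` surfaces the per-call hot spots, which is how `posix.cpu_count` and `setLevel` stand out in the output quoted above.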
Looking at what causes this, it appears to be the following line in every model:
This runs the Configuration constructor, which sets up logging and calls multiprocessing.cpu_count.
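The generated pattern in question looks roughly like this (a simplified, hypothetical stand-in, not the exact generated source): every model `__init__` falls back to constructing a fresh `Configuration` when none is passed.

```python
import multiprocessing


class Configuration:
    """Simplified stand-in; the real constructor also sets logger levels."""
    def __init__(self):
        self.connection_pool_maxsize = multiprocessing.cpu_count() * 5


class V1ObjectMeta:
    """Stand-in for any generated model."""
    def __init__(self, name=None, local_vars_configuration=None):
        if local_vars_configuration is None:
            # Runs once per deserialized object: cpu_count plus logging setup.
            local_vars_configuration = Configuration()
        self.local_vars_configuration = local_vars_configuration
        self.name = name


meta = V1ObjectMeta(name="demo")
assert isinstance(meta.local_vars_configuration, Configuration)
```

Deserializing a large list response constructs thousands of models, so this default multiplies the cost of `cpu_count()` and logging configuration by the number of objects.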
Commenting out the multiprocessing call confirms this; it then runs significantly faster:
Is there a way to avoid calling Configuration on every model init?