very slow performance due to excessive reconfiguration in models #1921

Open
juliantaylor opened this issue Oct 13, 2022 · 25 comments · May be fixed by #2346

Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@juliantaylor
Contributor

juliantaylor commented Oct 13, 2022

What happened (please include outputs or screenshots):

Running this simple script in a big cluster takes about 30 seconds to execute:

import kubernetes as k8s
k8s.config.load_kube_config()
apps = k8s.client.AppsV1Api()
print(k8s.__version__)
print(len(apps.list_replica_set_for_all_namespaces().items))
Timing the script with time python3:
24.2.0
3661

real	0m32.394s
user	0m17.764s
sys	0m11.235s

As you can see, it has very high system CPU usage.
Running this under a profiler shows the following interesting things:

    957/1    0.002    0.000   32.177   32.177 {built-in method builtins.exec}
        1    0.173    0.173   32.177   32.177 test.py:1(<module>)
        1    0.000    0.000   31.563   31.563 apps_v1_api.py:3804(list_replica_set_for_all_namespaces)
        1    0.000    0.000   31.563   31.563 apps_v1_api.py:3838(list_replica_set_for_all_namespaces_with_http_info)
        1    0.000    0.000   31.563   31.563 api_client.py:305(call_api)
        1    0.018    0.018   31.563   31.563 api_client.py:120(__call_api)
        1    0.000    0.000   28.090   28.090 api_client.py:244(deserialize)
 780711/1    0.997    0.000   27.281   27.281 api_client.py:266(__deserialize)
 190593/1    0.934    0.000   27.281   27.281 api_client.py:620(__deserialize_model)
  36658/1    0.078    0.000   27.281   27.281 api_client.py:280(<listcomp>)
   190594    0.839    0.000   22.939    0.000 configuration.py:75(__init__)
   190594    0.108    0.000   10.911    0.000 context.py:41(cpu_count)
   190594   10.803    0.000   10.803    0.000 {built-in method posix.cpu_count}
   190597    0.300    0.000    8.327    0.000 configuration.py:253(debug)
   381195    0.216    0.000    7.443    0.000 __init__.py:1448(setLevel)
   381195    4.364    0.000    7.117    0.000 __init__.py:1403(_clear_cache)
...

190594 10.803 0.000 10.803 0.000 {built-in method posix.cpu_count}
10 seconds are spent running multiprocessing.cpu_count, which accounts for most of the system CPU usage.

381195 0.216 0.000 7.443 0.000 __init__.py:1448(setLevel)
7 seconds are spent configuring logging
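
(For reference, a profile like the one above can be collected with the standard cProfile module. A minimal sketch, assuming a working kubeconfig and sorting by cumulative time as in the listing:)

import cProfile
import pstats

import kubernetes as k8s

k8s.config.load_kube_config()
apps = k8s.client.AppsV1Api()

# profile only the expensive call; deserialization dominates the time
with cProfile.Profile() as prof:
    items = apps.list_replica_set_for_all_namespaces().items
print(len(items))

# print the most expensive entries by cumulative time
pstats.Stats(prof).sort_stats("cumulative").print_stats(20)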

Looking at what causes this, it appears to be the following line in every model:

        if local_vars_configuration is None:
            local_vars_configuration = Configuration()

This runs the Configuration constructor, which sets up logging and calls multiprocessing.cpu_count.
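
To illustrate why each construction is expensive, here is a paraphrased sketch of the relevant parts of the generated Configuration class (not the exact generated code; the attribute and logger names, e.g. connection_pool_maxsize, are taken from the generated client and may differ between versions), consistent with the cpu_count and setLevel entries in the profile:

import logging
import multiprocessing

class Configuration(object):
    # paraphrased sketch, not the exact generated code
    def __init__(self):
        # every instance looks up loggers ...
        self.logger = {
            "package_logger": logging.getLogger("client"),
            "urllib3_logger": logging.getLogger("urllib3"),
        }
        # ... and setting debug goes through the property setter below,
        # which calls setLevel on each logger (the setLevel/_clear_cache time)
        self.debug = False
        # the default connection pool size is derived from the CPU count,
        # which is where the posix.cpu_count system time comes from
        self.connection_pool_maxsize = multiprocessing.cpu_count() * 5

    @property
    def debug(self):
        return self._debug

    @debug.setter
    def debug(self, value):
        self._debug = value
        level = logging.DEBUG if value else logging.WARNING
        for logger in self.logger.values():
            logger.setLevel(level)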

Commenting out the multiprocessing call confirms this; it then runs significantly faster:

real	0m11.964s
user	0m8.162s
sys	0m0.338s

Is there a way to avoid calling Configuration on every model init?

@juliantaylor juliantaylor added the kind/bug Categorizes issue or PR as related to a bug. label Oct 13, 2022
@juliantaylor
Contributor Author

Maybe something as simple as this would be good enough?

--- a/kubernetes/client/api_client.py
+++ b/kubernetes/client/api_client.py
@@ -638,6 +638,7 @@ class ApiClient(object):
                     value = data[klass.attribute_map[attr]]
                     kwargs[attr] = self.__deserialize(value, attr_type)
 
+        kwargs["local_vars_configuration"] = self.configuration
         instance = klass(**kwargs)
 
         if hasattr(instance, 'get_real_child_model'):

The script runs in 8 seconds with it.

@yliaog
Contributor

yliaog commented Oct 24, 2022

Generated by: https://openapi-generator.tech

Any change to the file has to be done in the generator.

@yliaog
Contributor

yliaog commented Oct 24, 2022

/assign @yliaog

@juliantaylor
Contributor Author

I assume this file needs to be changed?
https://github.com/OpenAPITools/openapi-generator/blob/master/modules/openapi-generator/src/main/resources/python-legacy/api_client.mustache

Though which version? The code here was generated with 4.3.0, and the latest is 6.2.1 if I interpret #1943 correctly. Is the version used here still updated?

What is the best way to get this fixed quickly? This issue is quite severe.

@yliaog
Contributor

yliaog commented Nov 4, 2022

Please submit the fix in the openapi-generator; that is the best way to fix it.

@juliantaylor
Contributor Author

juliantaylor commented Nov 4, 2022

To which version? (in addition to master)

@yliaog
Contributor

yliaog commented Nov 4, 2022

To the latest, then backport it to the version this repo is currently using.

@juliantaylor
Contributor Author

Which is?

juliantaylor added a commit to juliantaylor/openapi-generator that referenced this issue Nov 6, 2022
If not passed, the models create a new configuration object, which configures logging and determines the CPU count every time.
This causes extreme performance issues when deserializing larger sets of items.

See also
kubernetes-client/python#1921
spacether pushed a commit to OpenAPITools/openapi-generator that referenced this issue Nov 7, 2022
If not passed, the models create a new configuration object, which configures logging and determines the CPU count every time.
This causes extreme performance issues when deserializing larger sets of items.

See also
kubernetes-client/python#1921
@juliantaylor
Contributor Author

juliantaylor commented Nov 7, 2022

The generator has been updated on its main branch.
The generator does not support old versions: OpenAPITools/openapi-generator#13922 (comment)

I strongly recommend patching this locally for the current release.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 5, 2023
@juliantaylor
Contributor Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 5, 2023
@amelim84

@yliaog are there any plans to update the project to use the latest/fixed version of openapi-generator?
We have been facing the issue described by @juliantaylor and have locally applied a patch with their suggestion, but it would be ideal to have it in a release soon 🥺 :)

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 17, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 16, 2023
@juliantaylor
Contributor Author

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 16, 2023
@juliantaylor
Contributor Author

This problem still exists in the latest version, 28.1.0.

If you have no plans to update the generator to the fixed version, can you please apply this patch locally? This is wasting loads of CPU cycles for every user.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 22, 2024
@juliantaylor
Contributor Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 21, 2024
@juliantaylor
Contributor Author

/remove-lifecycle rotten

@gibsondan

Thanks for fixing this in the OpenAPI layer @juliantaylor - adding a +1: we have applied a gross local change to our repo to apply the same fix and would be thrilled to be able to use the stock API client.

@zhuoqun-chen

Has this problem been solved? I also hit the same issue when switching contexts using k8s.config.load_kube_config(), which can take up to 5 minutes. Any idea how to fix this at the user level?
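
Not something anyone here has confirmed, but until the generated code is fixed, one possible user-level mitigation is to monkey-patch Configuration.__init__ so the expensive default construction happens only once. A rough, untested sketch (the helper names are hypothetical); it shares the first instance's attributes, including loggers and dicts, across all default-constructed Configuration objects, so it is only suitable if you never mutate per-instance settings afterwards, and it must run before any responses are deserialized:

import kubernetes.client

_orig_init = kubernetes.client.Configuration.__init__
_prototype = {}

def _cached_init(self, *args, **kwargs):
    # anything constructed with explicit arguments still uses the real __init__
    if args or kwargs:
        _orig_init(self, *args, **kwargs)
        return
    if not _prototype:
        _orig_init(self)  # pay the full cost (logging setup, cpu_count) exactly once
        _prototype.update(self.__dict__)
        return
    # later default constructions just copy the cached attributes
    self.__dict__.update(_prototype)

kubernetes.client.Configuration.__init__ = _cached_init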

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 20, 2025
@juliantaylor
Contributor Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 20, 2025
@albertobruin albertobruin linked a pull request Feb 12, 2025 that will close this issue