Kubernetes api client doesn't pass configuration object through to the models it creates, causing excessive logging calls #2284
Ah, this may be a dupe of #1867
Unclear whether the benefits are worth the risk of breaking changes here, but this is a workaround for the significant lock contention issues with the k8s client surfaced in #23933 (reply in thread). This appears to be a known issue with the k8s python client - see kubernetes-client/python#2284. Test Plan: BK
A workaround for this appears to be to patch the ApiClient as follows and use that everywhere you might use an ApiClient (which makes me think the fix should be pretty straightforward; I'm just having some trouble ascertaining where exactly __deserialize_model is defined, since it's generated via OpenAPI)
Here's the relevant code in the openapi-generator repo: https://github.com/OpenAPITools/openapi-generator/blob/d71b1cf49e4942234c1cea4f357b40046fa569b8/modules/openapi-generator/src/main/resources/python/api_client.mustache#L632-L640
Closing since this is a duplicate.
#1921 appears to also be relevant and has a similar root cause (an old version of the openapi generator being used)
What happened (please include outputs or screenshots):
Making an API call with the kubernetes python client in an environment with a lot of multithreading and logging becomes very slow; profiling reveals that the bottleneck is each deserialized python model waiting to acquire the global python logging lock just to construct the model object.
Every time a model object is created in __deserialize_model (https://github.com/kubernetes-client/python/blob/master/kubernetes/client/api_client.py#L620-L641), it is constructed without the local_vars_configuration argument (picking a model at random, but they all appear to have this parameter: https://github.com/kubernetes-client/python/blob/master/kubernetes/client/models/events_v1_event.py#L75-L78).
The model therefore creates a fresh Configuration object, whose constructor makes several logger.setLevel calls: https://github.com/kubernetes-client/python/blob/master/kubernetes/client/configuration.py#L257-L277
With enough threads doing this at once, serious contention and slowdown can result.
I believe an easy fix for this would be to pass the Configuration object that the api client is already using through to each model constructor in __deserialize_model, so that each model object does not need to create a new Configuration object.
What you expected to happen:
No logging locks (or at least fewer logging locks) being needed to be acquired just to deserialize a k8s API response
How to reproduce it (as minimally and precisely as possible):
On a cluster with at least one k8s deployment, this script demonstrates that a couple of logger.setLevel calls are made per deserialized model within __deserialize_model:
I ran this on a cluster with 33 deployments and it told me that there were 2948 logger.setLevel calls while deserializing the API response.
Anything else we need to know?:
Environment:
- Kubernetes version (`kubectl version`):
  Client Version: v1.29.1
  Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
  Server Version: v1.29.7-eks-a18cd3a
- OS (e.g., MacOS 10.13.6):
- Python version (`python --version`): Python 3.11.7
- Python client version (`pip list | grep kubernetes`): 31.0.0