Allow RMI socket connection timeout to be configurable #279
Comments
Hello there, thanks for opening an issue and a fix proposal.
Hi there, ongoing work on the RMI socket factory: #298
As #298 got merged, I think we can close this one.
Problem
We are seeing issues where `jmxfetch` has prolonged collection cycles when monitored apps are removed or restarted. For example, note the 68-second collection cycle when this kafka broker is restarted.

During the long collection cycle, I noticed the `MetricCollectionTask` hung on the following stack trace:

Digging in a bit more, I saw that `TCPDirectSocketFactory` calls the `new Socket(host, port)` constructor, which ultimately calls `Socket.connect(endpoint)` with an infinite (0) timeout.

Solution
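For context, the root cause is visible in plain JDK code: `new Socket(host, port)` connects inside the constructor with no deadline (equivalent to calling `connect(endpoint)` with timeout 0), while an unconnected socket plus `Socket.connect(endpoint, timeout)` bounds the wait. A minimal illustration (the helper method name is ours, not jmxfetch's):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectTimeoutDemo {
    // new Socket(host, port) resolves and connects inside the constructor,
    // internally using a timeout of 0, i.e. wait forever. The two-step form
    // below lets the caller put a deadline on the connect attempt instead.
    public static Socket connectWithDeadline(String host, int port, int timeoutMs)
            throws IOException {
        Socket socket = new Socket(); // unconnected socket
        socket.connect(new InetSocketAddress(host, port), timeoutMs);
        return socket;
    }
}
```

When the remote host is unreachable, the bounded form fails with a `SocketTimeoutException` after `timeoutMs` instead of blocking the collection cycle indefinitely.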
I was able to work around this issue by forking jmxfetch and implementing a custom `RMISocketFactory` with a bounded connect timeout, and registering it at startup.
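The original snippet is not preserved on this page, so here is a sketch of such a factory under the assumption that it simply bounds the connect; the class name and any default timeout value are ours, not jmxfetch's actual implementation:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.rmi.server.RMISocketFactory;

// A sketch of a connect-timeout-aware RMI socket factory.
public class TimeoutRMISocketFactory extends RMISocketFactory {
    private final int connectTimeoutMs;

    public TimeoutRMISocketFactory(int connectTimeoutMs) {
        this.connectTimeoutMs = connectTimeoutMs;
    }

    @Override
    public Socket createSocket(String host, int port) throws IOException {
        // Connect with a bounded timeout instead of the constructor's
        // infinite wait (new Socket(host, port) has no deadline).
        Socket socket = new Socket();
        socket.connect(new InetSocketAddress(host, port), connectTimeoutMs);
        return socket;
    }

    @Override
    public ServerSocket createServerSocket(int port) throws IOException {
        return new ServerSocket(port);
    }
}
```

Such a factory would typically be registered with `RMISocketFactory.setSocketFactory(new TimeoutRMISocketFactory(20_000))`; note that `setSocketFactory` can only be called once per JVM.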
With this change, I am able to get more reasonable collection cycle times when apps are restarted out from underneath jmxfetch. For this scenario, the problematic collection cycle time was reduced from ~70 seconds to ~20 seconds.
Suggestion

Implement a custom `RMISocketFactory` in jmxfetch and make its connection timeout configurable, similar to the existing `rmi_client_timeout` param, e.g. as an `rmi_connection_timeout` param.