In our case, we are dealing with a large stream of big objects that are serialized to JSON and written to a file. After upgrading to 1.11, we noticed higher execution time, increased CPU usage, and more time spent in GC.
Below is a minimal code snippet to demonstrate the issue:
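This is roughly the shape of the pipeline (a sketch, assuming fs2-data-json with the circe AST module, inferred from the render.prettyPrint/render.pretty pipes mentioned below; the object shape, repeat count, and file path are illustrative rather than our exact code):

```scala
import cats.effect.{IO, IOApp}
import fs2.Stream
import fs2.data.json._
import fs2.data.json.circe._
import fs2.io.file.{Files, Path}
import io.circe.Json

object RenderRegression extends IOApp.Simple {

  // a synthetic "big object" standing in for the real payloads
  val bigObject: Json = Json.obj(
    (1 to 100).map(i => s"field$i" -> Json.fromString("x" * 100)): _*
  )

  def run: IO[Unit] =
    Stream
      .emit(bigObject)
      .repeatN(100000)
      .covary[IO]
      .through(ast.tokenize)    // JSON values -> token stream
      .through(render.compact)  // tokens -> rendered output
      .through(Files[IO].writeUtf8(Path("out.json")))
      .compile
      .drain
      .timed
      .flatMap { case (elapsed, _) => IO.println(s"took ${elapsed.toMillis} ms") }
}
```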
On my machine (MBP 16 2019, 2.6 GHz 6-Core Intel Core i7), the above code takes:
1.11: 81257 ms
1.10: 20909 ms
While this isn't a full-fledged analysis, we suspect that the regression is related to the new rendering in 1.11. We plan to stick with version 1.10 for now, but further insight would be appreciated.
Perhaps I'm missing something? Could this performance hit be expected under certain conditions, or is it likely a regression?
Hi. We changed the rendering in 1.11, which is probably the cause. I am considering reverting the compact rendering to what it was in 1.10; the introduced change is too impactful for this use case.
Unfortunately, it's not just the compact rendering that's been affected. In the same use case, when switching to render.prettyPrint, the process doesn't even complete after 5 minutes. It also produces numerous warnings, such as:
[WARNING] Your app's responsiveness to a new asynchronous event (such as a new connection, an upstream response, or a timer) was in excess of 100 milliseconds. Your CPU is probably starving. Consider increasing the granularity of your delays or adding more cedes. This may also be a sign that you are unintentionally running blocking I/O operations (such as File or InetAddress) without the blocking combinator.
On the other hand, using the deprecated render.pretty takes only 23 seconds and completes without any warnings at all.
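To make the comparison concrete, the only difference between these runs is which rendering pipe is plugged into the pipeline from the snippet above (again a sketch; the exact parameter lists of prettyPrint and pretty are assumed and may differ between versions):

```scala
import cats.effect.IO
import fs2.{Pipe, Stream}
import fs2.data.json.{render, Token}

// Given the token stream from the sketch above, only the renderer changes between runs.
def rendered(tokens: Stream[IO, Token], renderer: Pipe[IO, Token, String]): Stream[IO, String] =
  tokens.through(renderer)

// rendered(tokens, render.compact)        // 1.10: ~21 s, 1.11: ~81 s
// rendered(tokens, render.prettyPrint())  // 1.11: does not finish within 5 minutes, starvation warnings
// rendered(tokens, render.pretty())       // deprecated: ~23 s, no warnings
```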
I ran some benchmarks and identified several sources of slowdown. I am now working on fixing them and will open a PR as soon as I have something usable, so that you can try it with your use case.