Storage Calculations timeout with large buckets #286

Closed
novabyte opened this issue Sep 12, 2012 · 6 comments

@novabyte

A client reported an issue here: https://help.basho.com/tickets/1927

Marcel was able to reproduce the issue locally with a bucket of 800K objects at about 100KB per record. We're not sure what the minimum object count is that triggers the timeouts, but this setup should certainly produce the same error messages in the logs.

For more information, please see the ZD issue ticket.
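
For anyone trying to reproduce this locally, here is a minimal sketch of populating a bucket like the one above using an S3-compatible client (boto3); the endpoint, credentials, bucket name, and key format are illustrative placeholders, not details from the ticket:

import boto3

# Hypothetical values for illustration only; adjust the endpoint, credentials,
# bucket name, and sizes to match the actual Riak CS deployment.
ENDPOINT = "http://127.0.0.1:8080"       # assumed local Riak CS listener
BUCKET = "storage-calc-test"             # hypothetical bucket name
NUM_OBJECTS = 800_000                    # roughly the object count reported above
OBJECT_SIZE = 100 * 1024                 # ~100KB per object, as in the report

s3 = boto3.client(
    "s3",
    endpoint_url=ENDPOINT,
    aws_access_key_id="ACCESS_KEY",      # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket=BUCKET)
payload = b"x" * OBJECT_SIZE

for i in range(NUM_OBJECTS):
    s3.put_object(Bucket=BUCKET, Key="obj-%07d" % i, Body=payload)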

@ghost assigned kellymclaughlin Sep 12, 2012
@slfritchie
Contributor

The Zendesk ticket https://help.basho.com/tickets/1927 has been closed. With the recent work on riak_pipe backpressure and other fixes, is it feasible to unblock and/or resolve this ticket, @kellymclaughlin?

@kellymclaughlin
Contributor

There are some further enhancements I wanted to make, but I think it's safe to move those to a separate issue. I'll add them to the backlog and then close this.

@ksauzz
Contributor

ksauzz commented Nov 26, 2013

Another user reported the same issue.

@kuenishi
Contributor

Cf. #759, another bugfix that handles timeouts in the storage calculation.

@kuenishi
Contributor

I think it's time to close this issue, because we have done most of the work needed to keep a single timeout from spoiling the whole calculation. Further efforts could go into #696, which would be the right place for discussion. Any thoughts, @kellymclaughlin?

@kuenishi
Contributor

Storage calculation over 800K objects of about 100KB each now finishes in roughly 60 seconds.

# grep calculation /var/log/riak-cs/console.log | grep Finished | tail -n 5
2014-04-19 16:00:00.523 [info] <0.285.0>@riak_cs_storage_d:calculating:150 Finished storage calculation in 0 seconds.
2014-04-19 17:01:01.635 [info] <0.285.0>@riak_cs_storage_d:calculating:150 Finished storage calculation in 61 seconds.
2014-04-19 18:00:01.496 [info] <0.285.0>@riak_cs_storage_d:calculating:150 Finished storage calculation in 1 seconds.
2014-04-19 19:01:01.328 [info] <0.285.0>@riak_cs_storage_d:calculating:150 Finished storage calculation in 61 seconds.
2014-04-19 20:00:01.431 [info] <0.285.0>@riak_cs_storage_d:calculating:150 Finished storage calculation in 1 seconds.

This test was run with Riak CS 1.4.5 and Riak 1.4.8 on a two-node Riak cluster. Each node has 4 CPU cores, 16GB RAM, and a 256GB data partition, so these are not particularly big machines. I think this issue is largely solved, and I'm closing it (to get closer to 1.5).

Also, the timeout error is now handled after #759 (included since 1.4.4).
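
As a side note, here is a small sketch (not from the original thread) that summarizes the "Finished storage calculation in N seconds." lines in console.log, for anyone who wants the durations without grepping by hand; the script name and invocation are illustrative:

import re
import sys

# Minimal sketch: summarize "Finished storage calculation in N seconds." entries
# from a Riak CS console.log (the same lines grepped above).
PATTERN = re.compile(r"Finished storage calculation in (\d+) seconds")

def summarize(path):
    durations = []
    with open(path) as log:
        for line in log:
            match = PATTERN.search(line)
            if match:
                durations.append(int(match.group(1)))
    if not durations:
        print("no completed storage calculations found")
        return
    print("runs=%d max=%ds avg=%.1fs"
          % (len(durations), max(durations), sum(durations) / len(durations)))

if __name__ == "__main__":
    summarize(sys.argv[1])   # e.g. /var/log/riak-cs/console.log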
