Critical Exception Processing Transaction #4768
Comments
I ran into the same issue on v22.10.2. Setting --Xbonsai-use-snapshots to false and restarting the container resolved this on my end.
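For reference, the workaround above amounts to overriding the Bonsai snapshot flag at startup. A minimal sketch of what that might look like for a containerized node (the container name, image tag, and data path are illustrative assumptions, not from this thread; only --Xbonsai-use-snapshots comes from the comment above):

```shell
# Hypothetical example: stop the node, then restart it with the
# experimental Bonsai snapshot feature disabled.
docker stop besu-node
docker rm besu-node
docker run -d --name besu-node \
  -v /data/besu:/var/lib/besu \
  hyperledger/besu:22.10.2 \
  --data-path=/var/lib/besu \
  --Xbonsai-use-snapshots=false
```

If the flag lives in a Docker Compose file instead, the equivalent change is removing or setting `--Xbonsai-use-snapshots=false` in the service's `command:` list and recreating the container.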
Thanks for reporting this! Have you tried what was recommended above? We have made some changes to snapshots that are now the default as of 22.10.2 (and somewhat conflicted with the flag you had set).
Sorry, no, I started a resync on the latest release. Are my flags fine as-is on the latest version, or do you recommend changes?
We recommend removing that flag for now. It is still an experimental feature/flag, and we are continuing to tweak Bonsai snapshots.
I recently switched from erigon to besu because the former was too unreliable. Version: 23.1.0-RC1. Stacktrace from when it first happened:
Stacktrace after restarting besu:
Interestingly, after rebooting my node the exceptions stopped appearing, but now I'm getting these:
It seems you have the same issue we are working on for our 23.1.0 release, coming in a week or two. Either you can wait for that release, which will include a tool to heal existing nodes with this problem, or re-sync. @matkt, maybe we can work with this user's database to test your node-healing PR prior to the release?
I started a new sync, but I'm keeping the corrupt database around so I can help test the healing feature. |
Hi, we have a new PR with an auto-heal mechanism. It could fix your problem. Since your node is very old it will take time to heal (but it's possible; in my test it took 2 hours 27 minutes to heal a node that had been inconsistent for a week), and next time it will be very fast because it will be instant. #4972
Thanks, I've built your branch and it's currently running on the corrupt database. I'll get back to you when it's done or if something goes wrong.
Thanks for your help.
It managed to repair the database!
Update: it's now failing with another error, which appears to be related to #4784 (still running your branch)
Can you try this latest version? Thanks for your help. #5059
Have you managed to import blocks between the heal and this error?
What commit are you using, so we can find out if you have Gary's fix?
Yes, after the healing and syncing it did import new blocks successfully. Then I went to bed, and the next morning the log was full of the exceptions I posted above. The version I ran is the one from this PR, #4972 (commit 686a9cc).
I added more updates in PR #5059, if you can try it; otherwise, I think we should enable debug logs for some packages to understand the problem.
It is a regression in main; my PR was based on the main branch. We will try to fix this.
We just merged a fix for the second bug. Unfortunately there is no heal for this one, but your new node will have no problems if it contains these last two fixes.
Tracking in the Bonsai refactor, #5123, which will address this.
Hey @non-fungible-nelson @matkt, is there a Besu release which includes this fix? I couldn't find whether it was fixed in a release on GitHub.
Yes. I might update the changelog specifically to reference these issues, but the Bonsai refactor, #5123, fixes these issues in release 23.1.2.
Description
My Besu node seems to have become corrupt, possibly after a reorg?
I think this is the second time in a couple of weeks this has happened. I just re-synced a few weeks ago because of a similar if not the same bug (I didn't look into it that hard; I figured it was a fluke).
Here is the important part of my Docker-Compose:
I also have some of these in my logs from earlier: