I'm getting similar errors (also solved by a restart). In my case the nodes were still partially working, sending telemetry data and responding to health RPC calls.
In my case blocks/finalized blocks weren't synchronized with telemetry, so it was stuck.
Yes, for me it was the same: new blocks were not being received, but telemetry data was still being sent.
It was the same for me then. Do you remember the block at which that happened? I think mine was 131904.
How much memory is polkadot using? It looks like it is using a lot of your available main memory.
I don't know exactly; for me it started at 2019-08-28 19:33:56, so around block 77400.
At the moment 40% CPU and just 1 GB of RAM; I guess it happened when the CPU was at 100%.
Taking a look into this.
@marcio-diaz can you reproduce this?
@bkchr Yes, I'm getting the same log on master.
Similar report from @drewstone, using Substrate with Aura: https://gist.github.com/drewstone/314b70f2ae25a084fd520aac7023b712
Closed by #3989
I'm getting the error with 0.6.10, but not from the 'gav' folder, and I think all the other reports were, if that makes a difference...
@crsCR How long had your node been running when this happened? UPDATE: Answered in Riot: over 24 hours.
Update: still getting it with 0.6.16...
So far I could not reproduce this locally or on the test node. It would be great to catch this with additional logging:
I've finally caught this with a state trace against normal execution. It looks like this is not a caching issue after all, but rather a consensus issue within the wasm executor.
So, I looked into this and traced every call by hand; here are the last calls before the mismatch:
The code that generates the mismatch was added here: paritytech/substrate@b0bc705#diff-9be02fd5e4de6c77f22d6327fdaa09afR166 I think the trace you collected is from a node whose native runtime is not equal to the wasm blob. This is possible because we forgot to update the spec version in the latest release. So the collected trace probably does not help us with the other storage root mismatches.
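To illustrate why a forgotten spec version bump matters, here is a minimal, self-contained sketch (illustrative types, not the actual Substrate API): the client only substitutes the fast native runtime for the on-chain wasm blob when the two report matching versions, so if the version is not bumped alongside a code change, native and wasm are treated as interchangeable even though they may execute differently.

```rust
/// Illustrative stand-in for Substrate's runtime version record; the real type
/// lives in the sr-version crate and carries more fields.
#[derive(PartialEq, Eq, Debug, Clone, Copy)]
struct RuntimeVersion {
    spec_version: u32,
    authoring_version: u32,
}

/// Sketch of the decision: native execution is only used when the native
/// runtime reports the same version as the on-chain wasm blob.
fn can_use_native(native: RuntimeVersion, on_chain_wasm: RuntimeVersion) -> bool {
    native == on_chain_wasm
}

fn main() {
    let native = RuntimeVersion { spec_version: 112, authoring_version: 2 };
    let wasm = RuntimeVersion { spec_version: 112, authoring_version: 2 };
    // Same reported version, so native is chosen even if its logic actually
    // diverged from the wasm blob -- the situation described in the comment above.
    assert!(can_use_native(native, wasm));
}
```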
The two traces I diffed are from the same node. The block failed to import with a storage root mismatch, and was re-downloaded and retried a second later, this time successfully. It is not clear, then, why the executed runtime was different.
What are the left and right sides of the diff? Is the left the failed run and the right the successful one?
@bkchr Yes. The last line indicates the result.
It crashed again yesterday, same error at block 694226; check the lines around NotInFinalizedChain:
2019-11-16 16:51:24 Idle (50 peers), best: #694225 (0x534f…767c), finalized #694225 (0x534f…767c), ⬇ 596.9kiB/s ⬆ 629.6kiB/s
then the same error:
stack backtrace: Thread 'import-queue-worker-0' panicked at 'Storage root must match that calculated.', /home/sebytza05/.cargo/git/checkouts/substrate-7e08433d4c370a21/595f18e/srml/executive/src/lib.rs:271
This is a bug. Please report it at:
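For context on where that panic comes from: the check lives in srml/executive's final block checks, where the storage root computed after executing all extrinsics is compared against the state root committed in the block header. A minimal, self-contained sketch of that invariant (illustrative types, not the actual srml/executive code):

```rust
/// Illustrative stand-in for a block header; the real header type comes from
/// the runtime and carries more than just the state root.
struct Header {
    state_root: [u8; 32],
}

/// Sketch of the final check that produces the panic in the log above: after
/// all extrinsics of a block have been applied, the locally computed storage
/// root must equal the state root the block author put into the header. If
/// execution diverges (e.g. native vs. wasm disagreement), this assertion trips.
fn final_checks(header: &Header, computed_storage_root: [u8; 32]) {
    assert!(
        header.state_root == computed_storage_root,
        "Storage root must match that calculated."
    );
}

fn main() {
    let header = Header { state_root: [0u8; 32] };
    // A mismatch here would panic exactly as in the log.
    final_checks(&header, [0u8; 32]);
}
```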
Hash not equal. After a restart it worked without problems.
Nov 21 04:35:27 kusama kusama[17890]: Version: 0.6.17-0929fe2-x86_64-linux-gnu
Should be fixed in kusama 7.x releases. Please reopen if it happens again.
@arkpar Should this issue be closed?
Right.
@arkpar Are there any links to the PRs that fixed this?
Instead of `ensure` with dedicated errors, use `panic` or `assert`. See paritytech#410 for details. Closes paritytech#410. Co-authored-by: Gavin Wood <gavin@parity.io>
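A brief hedged sketch of what that kind of change looks like in general (identifiers are illustrative, not the code the referenced commit touches): a recoverable `ensure!`-style check that returns a dedicated error is replaced by an assertion that treats the condition as an invariant and panics when it is violated.

```rust
// Before: the condition is treated as a recoverable failure and surfaced as a
// dedicated error value. (Illustrative example, not the actual commit.)
fn check_balance_before(free: u64, needed: u64) -> Result<(), &'static str> {
    if free < needed {
        return Err("insufficient balance");
    }
    Ok(())
}

// After: the condition is treated as an invariant that must always hold, so a
// violation panics instead of returning an error.
fn check_balance_after(free: u64, needed: u64) {
    assert!(free >= needed, "invariant violated: insufficient balance");
}

fn main() {
    assert!(check_balance_before(10, 5).is_ok());
    check_balance_after(10, 5);
}
```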
Kusama v0.5.1, Ubuntu 18.04; after a restart it worked again.
