fix: improve sync committee updates #7456
Conversation
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##           unstable    #7456      +/-  ##
============================================
+ Coverage     50.25%    50.44%    +0.19%
============================================
  Files           602       602
  Lines         40401     40583      +182
  Branches       2204      2229       +25
============================================
+ Hits          20305     20474      +169
- Misses        20056     20069       +13
  Partials         40        40
Performance Report

🚀🚀 Significant benchmark improvement detected

Full benchmark results
Posting a benchmark from my personal Ubuntu node: it also shows a >1000x improvement for some tests, similar to my local environment.
LGTM, appreciate the thorough analysis
Added a few small things, lgtm
let i = 0;
let cachedHash: Uint8Array | null = null;
const cachedHashInput = Buffer.allocUnsafe(32 + 8);
added: reuse this buffer as the input for digest below
@@ -120,6 +120,8 @@ export function computeProposerIndex(
  const shuffledResult = new Map<number, number>();

  let i = 0;
  const cachedHashInput = Buffer.allocUnsafe(32 + 8);
added: reuse this buffer as the input for digest below
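For readers skimming the diff, here is a minimal hedged sketch of the pattern these snippets enable, not the PR's exact code: in proposer selection the hash input is the 32-byte seed followed by an 8-byte counter that only advances once per batch of candidates, so a single pre-allocated 40-byte input buffer and the last digest can be reused instead of concatenating and hashing on every iteration. The names digest, randomByteAt, and cachedWindow below are illustrative.

```ts
import {createHash} from "node:crypto";

// Stand-in for the sha256 digest helper used by the surrounding code.
function digest(data: Uint8Array): Uint8Array {
  return createHash("sha256").update(data).digest();
}

// Illustrative module-level cache, mirroring the shape of the diff above:
// a 40-byte input buffer (32-byte seed + 8-byte little-endian counter) and
// the last digest, reused while the counter does not change.
const cachedHashInput = Buffer.alloc(32 + 8); // zero-filled, so the unused high counter bytes stay 0
let cachedHash: Uint8Array | null = null;
let cachedWindow = -1;

// Pre-electra variant for simplicity: random_byte = hash(seed + uint64(i / 32))[i % 32].
function randomByteAt(seed: Uint8Array, i: number): number {
  const window = Math.floor(i / 32);
  if (cachedHash === null || window !== cachedWindow) {
    cachedHashInput.set(seed, 0);               // bytes 0..31: seed
    cachedHashInput.writeUIntLE(window, 32, 6); // bytes 32..37: counter (< 2^48), bytes 38..39 stay 0
    cachedHash = digest(cachedHashInput);
    cachedWindow = window;
  }
  // 32 consecutive candidates share one digest and only read different bytes of it.
  return cachedHash[i % 32];
}
```

Buffer.alloc (zero-filled) is used in the sketch so the two highest counter bytes are guaranteed to stay 0; allocUnsafe, as in the diff, is fine as long as every byte of the input is written before hashing.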
@@ -402,7 +408,7 @@ export function getComputeShuffledIndexFn(indexCount: number, seed: Bytes32): Co
   // bytesToBigInt(digest(Buffer.concat([_seed, intToBytes(i, 1)])).slice(0, 8)) % BigInt(indexCount)
   // );
   pivotBuffer[32] = i % 256;
-  pivot = Number(bytesToBigInt(digest(pivotBuffer).slice(0, 8)) % BigInt(indexCount));
+  pivot = Number(bytesToBigInt(digest(pivotBuffer).subarray(0, 8)) % BigInt(indexCount));
added: use subarray instead of slice
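A quick illustration of the difference (generic TypedArray behavior, not PR code): slice copies the bytes into freshly allocated memory, while subarray returns a zero-copy view over the same buffer, which is all the pivot computation needs for reading the first 8 bytes of a digest.

```ts
// slice() allocates and copies; subarray() is a zero-copy view on the same memory.
const hash = new Uint8Array(32).fill(7);

const copied = hash.slice(0, 8);    // new backing memory
const viewed = hash.subarray(0, 8); // same backing memory

hash[0] = 42;
console.log(copied[0]); // 7  -> unaffected, it was copied before the mutation
console.log(viewed[0]); // 42 -> reflects the mutation, proving it is a view
```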
🎉 This PR is included in v1.27.0 🎉
GG 👏
Motivation

processSyncCommitteeUpdates() could be >15s according to the devnet.

Description

- computeShuffledIndex: we can cache the pivot and source computation there
- bytesToInt: in order not to use BigInt (see the sketch below)

I guess if we use hashtree we can improve more, but the diff is already large and the main optimization is in computeShuffledIndex(), not the hash function. We can consider that in the future.

We can also improve pre-electra, but it has not been that bad for a long time, so this PR only focuses on electra.
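On the bytesToInt point, here is a minimal hedged sketch of one way to avoid BigInt, not necessarily the helper this PR adds: reduce the first 8 little-endian bytes of a digest modulo indexCount using two 32-bit reads, under the assumption that indexCount stays well below 2^26 so every intermediate product fits exactly in a double.

```ts
// Hypothetical helper, illustrative only: (hi * 2^32 + lo) % n computed without BigInt.
// Safe while n < 2^26, since (hi % n) * (2^32 % n) < 2^52 < Number.MAX_SAFE_INTEGER.
function bytes8ToIntMod(bytes: Uint8Array, indexCount: number): number {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const lo = view.getUint32(0, true); // bytes 0..3, little-endian
  const hi = view.getUint32(4, true); // bytes 4..7, little-endian
  const twoPow32Mod = 0x1_0000_0000 % indexCount;
  return ((((hi % indexCount) * twoPow32Mod) % indexCount) + (lo % indexCount)) % indexCount;
}
```

The actual helper in the PR may differ; the point is that the arithmetic stays inside the safe-integer range, so no BigInt values are allocated in the hot loop.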
Closes #7366
Tests

The benchmark against naiveGetNextSyncCommitteeIndices() shows a >1000x improvement on my local machine, while CI only shows a >20x difference. This is my local result: