feat(L1 follower and rollup verifier): support CodecV7 in L1 follower mode and rollup verifier #1105
Conversation
Walkthrough

This pull request introduces versioning and enhanced error handling for committed batch metadata within the rollup event processing system. It adds new fields and types (e.g., `Version`, `ChunkBlockRanges`, `LastL1MessageQueueHash`) to distinguish between legacy (V0) and new (V7) metadata. The changes update the encoding/decoding routines and test cases accordingly and modify the commit batch functions (V1 and V7) to better handle blob hashes and block timestamps. Additionally, dependency versions have been updated, and the rollup synchronization logic now validates batches with additional parent metadata.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller
    participant RCM as ReadCommittedBatchMeta
    participant DecV7 as Decoder (V7)
    participant DecV0 as Decoder (V0)
    Caller->>RCM: Request batch metadata read
    RCM->>DecV7: Attempt V7 decode
    alt V7 decode successful
        DecV7-->>RCM: Return V7 metadata
    else V7 decode fails
        RCM->>DecV0: Attempt V0 decode
        DecV0-->>RCM: Return V0 metadata or error
    end
    RCM-->>Caller: Return metadata (or error)
```
```mermaid
sequenceDiagram
    participant ES as Rollup Event Stream
    participant CBS as CalldataBlobSource
    participant Helper as getAndAppendCommitBatchDA
    ES->>CBS: New rollup event arrives
    CBS->>CBS: Check commit transaction hash
    alt Transaction changed
        CBS->>Helper: Process and append previous commit events
        Helper-->>CBS: Return aggregated commit data
    else Same transaction
        CBS->>CBS: Append commit event to current batch
    end
    CBS-->>ES: Return processed batch data
```
LGTM.
Actionable comments posted: 1
🔭 Outside diff range comments (1)
rollup/l1/abi.go (1)
246-258: Initialize BlobHashes in newCommitBatchArgsFromCommitBatchWithProof.

The function doesn't initialize the `BlobHashes` field in the returned `CommitBatchArgs` struct. This could lead to nil blob hashes even when they should be present. Apply this diff to initialize the field:

```diff
 return &CommitBatchArgs{
 	Version:                args.Version,
 	ParentBatchHeader:      args.ParentBatchHeader,
 	Chunks:                 args.Chunks,
 	SkippedL1MessageBitmap: args.SkippedL1MessageBitmap,
+	BlobHashes:             nil, // Will be set by FetchCommitTxData
 }, nil
```
🧹 Nitpick comments (11)
rollup/da_syncer/da/commitV7.go (3)
21-31: Add explanatory docstrings for CommitBatchDAV7.

While this struct is relatively straightforward, adding docstrings for each field and its usage context would improve maintainability and clarity for new contributors.
46-49: Highlight the intended time unit of the l1BlockTime parameter.

`l1BlockTime` is a `uint64`; ensure it's always in seconds (or the correct unit). A clarifying comment or doc would help readers avoid confusion.
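For illustration, a field comment along these lines would pin down the unit (the struct name here is a hypothetical stand-in; only the comment style is the point):

```go
// commitBatchV7Sketch is a hypothetical stand-in for the real struct in
// rollup/da_syncer/da/commitV7.go.
type commitBatchV7Sketch struct {
	// l1BlockTime is the timestamp of the L1 block that contains the commit
	// transaction, in seconds since the Unix epoch (the same unit as
	// types.Header.Time), never milliseconds.
	l1BlockTime uint64
}
```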
119-146: Optimize or defer memory allocation in Blocks().

The method merges L1 messages and transactions by copying them into a newly allocated slice. Consider deferring the merge or using a streaming approach if memory usage profiles become an issue.
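As a sketch of the single-allocation idea (assuming, as in Scroll blocks, that L1 messages precede regular transactions; the function name and import path are illustrative, not the repository's actual code):

```go
package da

import "github.com/scroll-tech/go-ethereum/core/types"

// mergeTxs is a sketch, not the repository's implementation: it combines L1
// message transactions with regular L2 transactions into one slice that is
// allocated exactly once, instead of growing incrementally during the merge.
func mergeTxs(l1Msgs, l2Txs []*types.Transaction) types.Transactions {
	merged := make(types.Transactions, 0, len(l1Msgs)+len(l2Txs))
	merged = append(merged, l1Msgs...)
	return append(merged, l2Txs...)
}
```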
core/rawdb/accessors_rollup_event.go (3)

31-36: Retain or remove BlobVersionedHashes in committedBatchMetaV0 based on usage.

You've kept BlobVersionedHashes for compatibility, but if it's officially deprecated/unreferenced, ensure it isn't incorrectly retained in future versions.
38-42: Add a short doc comment to committedBatchMetaV7.

Even a brief note indicating the usage and the difference from V0 aids readability.
187-216: Improve error messages in ReadCommittedBatchMeta.

When decoding fails for both V7 and V0, it would help to log or note which version attempts occurred. This helps diagnose data corruption or misalignment in the database.
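A minimal sketch of such an error, assuming the try-V7-then-fall-back-to-V0 decode order described in the walkthrough (the struct names here are illustrative stand-ins, not the repository's actual types):

```go
package rawdb

import (
	"fmt"

	"github.com/scroll-tech/go-ethereum/rlp"
)

// metaV7Sketch and metaV0Sketch stand in for committedBatchMetaV7 and
// committedBatchMetaV0; only the fallback error handling is the point.
type metaV7Sketch struct {
	Version                uint8
	LastL1MessageQueueHash []byte
}

type metaV0Sketch struct {
	Version             uint8
	BlobVersionedHashes [][]byte
}

// decodeCommittedBatchMeta tries the V7 layout first and falls back to V0.
// When both attempts fail, the error reports each attempt separately so a
// corrupted entry can be diagnosed from the message alone.
func decodeCommittedBatchMeta(batchIndex uint64, data []byte) (interface{}, error) {
	var v7 metaV7Sketch
	errV7 := rlp.DecodeBytes(data, &v7)
	if errV7 == nil {
		return &v7, nil
	}
	var v0 metaV0Sketch
	if errV0 := rlp.DecodeBytes(data, &v0); errV0 != nil {
		return nil, fmt.Errorf("failed to decode committed batch meta: batch index %d, V7 attempt: %v, V0 attempt: %w",
			batchIndex, errV7, errV0)
	}
	return &v0, nil
}
```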
rollup/da_syncer/da/calldata_blob_source.go (3)

99-119: Consider more granular scheduling for commit events.

You buffer commit events sharing the same transaction hash and flush them together. Ensure large sets of events in a single transaction won't inflate memory usage. Possibly process in smaller batches if needed.
181-186: Add logs or metrics for leftover commit events.

Cleaning up leftover commit events after the loop is crucial. A debug log or metric can help detect unexpected buffering issues.
191-268: Protect getCommitBatchDA from partial mixing of versions.

You handle the same transaction's events under a single version. If future transactions begin mixing versions (e.g., bridging partial V7 data with V8), consider adding assertions or warnings to prevent data mistakes; a sketch of such a guard follows.
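A sketch of the suggested assertion (the event type and its fields are hypothetical stand-ins for the real commit event):

```go
package da

import "fmt"

// commitEventVersionSketch stands in for the real commit event type; only
// the version field matters for this check.
type commitEventVersionSketch struct {
	BatchIndex uint64
	Version    uint8
}

// assertUniformVersion rejects a buffered set of commit events in which two
// events from the same transaction carry different codec versions.
func assertUniformVersion(events []commitEventVersionSketch) error {
	if len(events) == 0 {
		return nil
	}
	want := events[0].Version
	for _, ev := range events[1:] {
		if ev.Version != want {
			return fmt.Errorf("batch %d: mixed codec versions within one transaction: %d vs %d",
				ev.BatchIndex, want, ev.Version)
		}
	}
	return nil
}
```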
rollup/rollup_sync_service/rollup_sync_service.go (1)

382-419: Optimize repeated L1 message queue computation.

Recomputing the message queue hash by reloading all blocks might be expensive for large batches. Consider caching or incremental hashing to avoid performance bottlenecks.
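For illustration only, an incremental cache might keep the rolling hash after the last processed message and fold in just the new ones. The combining rule shown (keccak256 over the concatenation) is an assumption made for the sketch; the canonical rule is defined by the da-codec:

```go
package rollup_sync_service

import (
	"github.com/scroll-tech/go-ethereum/common"
	"github.com/scroll-tech/go-ethereum/crypto"
)

// queueHashCache sketches incremental hashing: instead of reloading and
// rehashing every block of the batch, keep the rolling hash after the last
// processed L1 message and absorb only newly seen message hashes.
type queueHashCache struct {
	last common.Hash
}

// absorb folds one L1 message hash into the rolling hash and returns the
// updated value.
func (c *queueHashCache) absorb(msgHash common.Hash) common.Hash {
	c.last = crypto.Keccak256Hash(c.last.Bytes(), msgHash.Bytes())
	return c.last
}
```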
rollup/da_syncer/da/commitV1.go (1)
45: Consider block time drift.

Relying on l1BlockTime for blob retrieval is sensible, but ensure that any skew or mismatch between the actual L1 block time and the provided timestamp does not cause retrieval of an unintended blob.
📜 Review details
⛔ Files ignored due to path filters (1)
- go.sum (excluded by !**/*.sum)
📒 Files selected for processing (10)
- core/rawdb/accessors_rollup_event.go (3 hunks)
- core/rawdb/accessors_rollup_event_test.go (5 hunks)
- go.mod (1 hunks)
- rollup/da_syncer/da/calldata_blob_source.go (3 hunks)
- rollup/da_syncer/da/commitV1.go (1 hunks)
- rollup/da_syncer/da/commitV7.go (1 hunks)
- rollup/l1/abi.go (1 hunks)
- rollup/l1/reader.go (1 hunks)
- rollup/rollup_sync_service/rollup_sync_service.go (7 hunks)
- rollup/rollup_sync_service/rollup_sync_service_test.go (13 hunks)
✅ Files skipped from review due to trivial changes (1)
- go.mod
🔇 Additional comments (20)
rollup/da_syncer/da/commitV7.go (2)
33-88: Validate potential concurrency and retries for fetching blob data.

Currently, the function synchronously fetches the blob from the blob client. If the client is used concurrently, consider adding retry or backoff logic to handle transient issues. Also confirm that large blobs won't cause memory pressure.
58-66: Acknowledging past suggestion for error message format.

This error message matches a prior suggestion about displaying both the “correct” and “fetched” hash.
core/rawdb/accessors_rollup_event.go (2)

24-29: Ensure backward compatibility with newly added fields in CommittedBatchMeta.

The added fields (Version, ChunkBlockRanges, LastL1MessageQueueHash) expand the structure. Carefully confirm that old data remains readable and consistent.
163-184: Confirm version thresholds in WriteCommittedBatchMeta.

The check `if encoding.CodecVersion(...) < encoding.CodecV7` branches to V0 storage. Make sure any future versions (e.g., V8+) are handled or tested; a sketch follows.
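One way explicit threshold handling could look (the upper-bound constant and the two write stubs are hypothetical; the real code only checks the lower bound):

```go
package rawdb

import (
	"fmt"

	"github.com/scroll-tech/da-codec/encoding"
)

// writeMetaByVersion sketches explicit version thresholds: legacy versions
// take the V0 layout, CodecV7 takes the V7 layout, and anything newer fails
// loudly rather than being written in a possibly wrong format.
func writeMetaByVersion(version encoding.CodecVersion) error {
	const maxSupported = encoding.CodecV7 // hypothetical upper bound
	switch {
	case version < encoding.CodecV7:
		return writeV0Layout()
	case version <= maxSupported:
		return writeV7Layout()
	default:
		return fmt.Errorf("unsupported codec version: %d", version)
	}
}

func writeV0Layout() error { return nil } // placeholder for the legacy RLP write
func writeV7Layout() error { return nil } // placeholder for the CodecV7 RLP write
```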
rollup/da_syncer/da/calldata_blob_source.go (2)

11-11: Validate import usage.

Importing “common” is essential for transaction-hash handling. Confirm that no additional re-exports are needed if you rely on the go-ethereum “common” package in other spots.
145-176: Revisit partial commit events before revert/finalize.

You flush any pending commit events right before a revert/finalize event. Confirm the ordering logic if multiple reverts/finalizes appear consecutively.
rollup/rollup_sync_service/rollup_sync_service.go (6)

14-14: Minor import addition.

This import for common.Hash cleanly facilitates the new hash-based workflow.
269-275: Potential nil parent metadata scenario.

Although reading parent metadata only when index > 0 is correct, consider verifying that the returned parentCommittedBatchMeta is not nil. If the parent's data has been pruned or was never written, subsequent logic could fail.
286: Check for consistent validateBatch usage.

Ensure all call sites are updated to pass the newly added parentCommittedBatchMeta; any missed spot could lead to unexpected errors or panics.
371-374: Clarify default chunk range for the genesis batch.

Returning a chunk range of [0,0] for batchIndex=0 is sensible, but verify that no logic implicitly assumes an empty array or encounters an off-by-one edge case here.
421-424: Initialize new metadata fields carefully.

The assignment of Version and LastL1MessageQueueHash appears correct. Continue ensuring that the encoded data always matches these metadata fields, as downstream components rely on accurate versioning.
451-493: Thorough handling of CodecV7 vs. legacy.

Differentiating CodecV7 from previous versions is well-structured. Test transitional scenarios (e.g., from V0 to V7) to validate that lastL1MessageQueueHash is computed and utilized correctly.
rollup/da_syncer/da/commitV1.go (1)
24-33: Function signature broadening.

Adding versionedHashes and l1BlockTime clarifies data flow. Confirm that callers do not pass invalid or inconsistent parameters, such as multiple versionedHashes or an incorrect block time.
core/rawdb/accessors_rollup_event_test.go (4)
6-7: Switching to testify/require.

Using “require” for assertions strengthens test reliability and readability.
152-223: Improved test coverage.

The expanded scenarios for Version and LastL1MessageQueueHash in TestWriteReadDeleteCommittedBatchMeta significantly increase confidence in the new fields.
229-242: Overwrite tests ensure robustness.

Testing overwriting the same batch index confirms that both old and new metadata handle collisions as expected.
271: Include LastL1MessageQueueHash in comparisons.

The updated compareCommittedBatchMeta properly checks LastL1MessageQueueHash, reducing the risk of storing incomplete metadata.

Also applies to: 285
rollup/l1/reader.go (1)
382: LGTM! Properly propagating blob hashes.

The change correctly assigns the blob hashes from the transaction to the `CommitBatchArgs` structure, which is essential for supporting `CodecV7` in L1 follower mode.
in L1 follower mode.rollup/l1/abi.go (1)
237: LGTM! Added support for blob hashes.

The new field properly extends the `CommitBatchArgs` struct to support blob hashes, which is required for `CodecV7` support.
support.rollup/rollup_sync_service/rollup_sync_service_test.go (1)
256-257: LGTM! Simplified test setup.

The changes correctly remove the `BlobVersionedHashes` field from `CommittedBatchMeta` initialization in tests, as blob hashes are now handled through `CommitBatchArgs`. The test setup is now simpler and more focused on the version field.

Also applies to: 284-285, 321-322, 348-349, 383-384, 410-411, 447-449, 474-476, 505-507, 532-534, 558-560, 584-586, 620-622, 624-626, 628-630, 632-634
Actionable comments posted: 0
🧹 Nitpick comments (1)
rollup/rollup_sync_service/rollup_sync_service_test.go (1)
661-808: Well-structured test coverage for CodecV7.

The test function provides comprehensive coverage for CodecV7 batch validation, including:
- Single and multiple batch validation
- Message queue hash progression
- Parent batch hash linking
- Block number continuity
Consider adding test cases for the following (a sketch of one such case appears after this list):
- Error scenarios (invalid parent hash, incorrect message queue hash)
- Edge cases (empty blocks, maximum message count)
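A minimal sketch of one such negative case; the helper and its signature are hypothetical stand-ins made to keep the snippet self-contained, not the repository's actual validator:

```go
package rollup_sync_service

import (
	"fmt"
	"testing"

	"github.com/scroll-tech/go-ethereum/common"
	"github.com/stretchr/testify/require"
)

// checkParentBatchHash is a stand-in for the parent-hash check inside the
// real batch validator; it exists only to make this sketch self-contained.
func checkParentBatchHash(got, want common.Hash) error {
	if got != want {
		return fmt.Errorf("parent batch hash mismatch: got %s, want %s", got, want)
	}
	return nil
}

// TestValidateBatchWrongParentHash sketches the suggested error scenario: a
// batch whose recorded parent hash disagrees with the locally computed one
// must be rejected.
func TestValidateBatchWrongParentHash(t *testing.T) {
	want := common.HexToHash("0x01")
	got := common.HexToHash("0x02")
	require.Error(t, checkParentBatchHash(got, want))
}
```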
📜 Review details
⛔ Files ignored due to path filters (1)
- go.sum (excluded by !**/*.sum)
📒 Files selected for processing (3)
- go.mod (1 hunks)
- rollup/rollup_sync_service/rollup_sync_service_test.go (15 hunks)
- rollup/rollup_sync_service/testdata/blockTrace_06.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- go.mod
🔇 Additional comments (10)
rollup/rollup_sync_service/rollup_sync_service_test.go (2)
256-257: LGTM! Consistent updates to test metadata.

The changes consistently update the `CommittedBatchMeta` instances across all test functions to use only the `Version` field, removing the `BlobVersionedHashes` field. This aligns with the PR objectives of supporting CodecV7.

Also applies to: 283-285, 320-322, 347-349, 383-385, 410-412, 447-449, 474-476, 505-507, 532-534, 558-560, 584-586, 620-622, 624-626, 628-630, 632-634
819-822: LGTM! Clean helper function implementation.

The helper function is concise and effectively serves its purpose in the CodecV7 tests.
rollup/rollup_sync_service/testdata/blockTrace_06.json (8)
1-9: Coinbase Object Structure – Looks Consistent

The Coinbase section defines the expected fields (address, nonce, balance, keccakCodeHash, poseidonCodeHash, codeSize) using a mix of numeric and hex string formats. Verify that the numeric fields (like nonce and codeSize) match the expected types in downstream processing.
10-28: Header Data Validation – Formatting and Null Field Check

The header provides key block metadata (parentHash, miner, stateRoot, transactionRoot, etc.) using consistent hexadecimal representations. Note that "baseFeePerGas" is null; please confirm that this is intentional (e.g. reflecting a pre-EIP-1559 state or a specific test scenario).
29-31: Row Consumption Field – Empty Array as Expected

The "row_consumption" field is represented as an empty array. This appears correct for test data with no row consumption output. Ensure that consuming code can correctly handle an empty list.
32-63: Transactions Array – Consistency and Duplicate txHash Review

Two transactions are provided with detailed fields (type, nonce, txHash, gas, gasPrice, from, to, chainId, value, data, isCreate, v, r, s). Note that both transactions share the same "txHash" value. Verify that duplicate txHash entries are intentional in this test scenario and that downstream logic is prepared to handle such cases if they occur.
65-118: StorageTrace Block – Proofs and Case Sensitivity Considerations

The "storageTrace" object correctly lists "rootBefore", "rootAfter", and a "proofs" dictionary keyed by addresses. The proofs arrays appear comprehensive. Observe that some address keys use mixed-case formatting (e.g. "0x1a258d17bF244C4dF02d40343a7626A9D321e105"); ensure that the system's address comparison logic accounts for potential case variations.
120-168: StorageProofs Structure – Detailed and Well-Formed

The "storageProofs" section provides nested proofs per address and includes extra nested data (e.g. an "extraData" object with a "proofList"). The structure is detailed and appears to conform to the test data schema. Double-check that all nested keys and arrays match the expected format for the rollup verifier's processing routines.
169-868: ExecutionResults – Comprehensive Trace with Detailed StructLogs

This section captures the execution outcome, including gas usage, failure indicator, sender and receiver details, post-execution account states, the contract bytecode, and an extensive "structLogs" array of EVM opcodes. In the structLogs, note that one entry (around line 835) registers an "error" with "out of gas" – confirm that this is an intentional negative test scenario. Overall, the diverse opcode logs provide a rich trace; however, given the volume, verify that the testing framework and log analyzers are optimized to handle such verbose data.
869-870: Withdraw Trie Root – Zeroed Value Verification

The "withdraw_trie_root" field is set to a string of zeros. Ensure that a zeroed root is the desired output in this test case (e.g. indicating an absence of withdrawal data) and is consistent with other parts of the system's expectations for this field.
Actionable comments posted: 0
🧹 Nitpick comments (2)
core/rawdb/accessors_rollup_event.go (2)
196-207: Enhance version validation.

While the version check exists, consider adding an upper bound check to ensure the version is not higher than the maximum supported version.

```diff
 if encoding.CodecVersion(cbm7.Version) < encoding.CodecV7 {
 	return nil, fmt.Errorf("unexpected committed batch metadata version: batch index %d, version %d", batchIndex, cbm7.Version)
 }
+if encoding.CodecVersion(cbm7.Version) > encoding.MaxCodecVersion {
+	return nil, fmt.Errorf("unsupported committed batch metadata version: batch index %d, version %d", batchIndex, cbm7.Version)
+}
```
209-219: Consider adding version validation for V0.

Similar to the V7 validation, consider adding version validation for V0 to ensure data integrity.

```diff
 if err = rlp.Decode(bytes.NewReader(data), cbm0); err != nil {
 	return nil, fmt.Errorf("failed to decode committed batch metadata: batch index %d, err: %w", batchIndex, err)
 }
+if encoding.CodecVersion(cbm0.Version) >= encoding.CodecV7 {
+	return nil, fmt.Errorf("unexpected version for V0 format: batch index %d, version %d", batchIndex, cbm0.Version)
+}
```
📜 Review details
📒 Files selected for processing (1)
- core/rawdb/accessors_rollup_event.go (3 hunks)
🔇 Additional comments (4)
core/rawdb/accessors_rollup_event.go (4)
23-29: LGTM! Well-structured metadata versioning.

The `CommittedBatchMeta` struct is well-designed with clear versioning support and backward compatibility considerations. The comment for the new field clearly indicates its version requirement.
31-36: LGTM! Good backward compatibility.

The `committedBatchMetaV0` struct maintains backward compatibility while clearly documenting the unused field with a helpful comment.
162-184: LGTM! Clean version-based encoding.

The function elegantly handles different versions of metadata while maintaining backward compatibility. The error handling is robust and the version check is properly implemented.
187-194: LGTM! Improved error handling.

Good improvement in error handling by returning errors instead of using `log.Crit`. This allows better error recovery in the calling code.
Actionable comments posted: 0
🧹 Nitpick comments (1)
rollup/da_syncer/da/calldata_blob_source.go (1)
234-259: Consider using `default` for CodecV7 and above.

Based on a previous review comment, using `default` instead of an explicit case for V7 would make it easier to handle future ZK bug fixes that might require a new verifier version without changing the batch format. Apply this diff to improve future maintainability:

```diff
-case 7:
+default: // CodecVersion 7 and above
```
📜 Review details
📒 Files selected for processing (1)
- rollup/da_syncer/da/calldata_blob_source.go (3 hunks)
🔇 Additional comments (5)
rollup/da_syncer/da/calldata_blob_source.go (5)
11: LGTM!

The addition of the `common` package import is necessary for using the `Hash` type in the new implementation.
99-121: LGTM! Well-structured implementation for batch processing.

The implementation effectively tracks and processes commit events from the same transaction together, with clear comments explaining the logic. The helper function `getAndAppendCommitBatchDA` provides a clean way to process batched events.
134-144: LGTM! Robust handling of commit events.

The code correctly identifies and processes commit events from different transactions while maintaining the order of events. The implementation aligns with the comment from the previous review about handling multiple commit batch events belonging to the same transaction.
145-186: LGTM! Consistent cleanup of pending commit events.

The code ensures that any pending commit events are processed before handling revert or finalize events, maintaining data consistency.
191-233: LGTM! Thorough validation of commit events.

The implementation includes comprehensive validation of commit events, ensuring the following (see the sketch after this list):
- Non-empty commit events list
- Matching transaction hashes
- Matching block numbers and hashes
- Sequential batch indices
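A compact sketch of these invariants (the event type and its fields are hypothetical stand-ins for the repository's commit event):

```go
package da

import (
	"fmt"

	"github.com/scroll-tech/go-ethereum/common"
)

// commitEventSketch is a stand-in for the real commit event type.
type commitEventSketch struct {
	TxHash      common.Hash
	BlockNumber uint64
	BlockHash   common.Hash
	BatchIndex  uint64
}

// validateCommitEvents sketches the checks listed above: the slice must be
// non-empty, all events must come from the same transaction and L1 block,
// and batch indices must be strictly sequential.
func validateCommitEvents(events []commitEventSketch) error {
	if len(events) == 0 {
		return fmt.Errorf("no commit events to process")
	}
	first := events[0]
	for i, ev := range events {
		if ev.TxHash != first.TxHash {
			return fmt.Errorf("unexpected tx hash: got %s, want %s", ev.TxHash, first.TxHash)
		}
		if ev.BlockNumber != first.BlockNumber || ev.BlockHash != first.BlockHash {
			return fmt.Errorf("unexpected L1 block: number %d, hash %s", ev.BlockNumber, ev.BlockHash)
		}
		if ev.BatchIndex != first.BatchIndex+uint64(i) {
			return fmt.Errorf("non-sequential batch index: got %d, want %d", ev.BatchIndex, first.BatchIndex+uint64(i))
		}
	}
	return nil
}
```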
1. Purpose or design rationale of this PR

This PR introduces `CodecV7` for the `EuclidV2` hardfork to the L1 follower mode and rollup verifier. Both components reuse the same underlying `CalldataBlobSource`. Additionally, the rollup verifier is modified to support the new batch structure when verifying batches locally vs. received from L1.

2. PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
3. Deployment tag versioning
Has the version in `params/version.go` been updated?

4. Breaking change label
Does this PR have the `breaking-change` label?

Summary by CodeRabbit
- New Features
- Refactor
- Tests
- Chore