
feat(L1 follower and rollup verifier): support CodecV7 in L1 follower mode and rollup verifier #1105

Merged
merged 48 commits into develop from jt/l1-follower-mode-update-da-codec
Feb 13, 2025

Conversation

jonastheis

@jonastheis jonastheis commented Jan 22, 2025

1. Purpose or design rationale of this PR

This PR introduces CodecV7, used by the EuclidV2 hardfork, to the L1 follower mode and the rollup verifier. Both components reuse the same underlying CalldataBlobSource. Additionally, the rollup verifier is modified to support the new batch structure when verifying batches derived locally against those received from L1.

2. PR title

Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:

  • build: Changes that affect the build system or external dependencies (example scopes: yarn, eslint, typescript)
  • ci: Changes to our CI configuration files and scripts (example scopes: vercel, github, cypress)
  • docs: Documentation-only changes
  • feat: A new feature
  • fix: A bug fix
  • perf: A code change that improves performance
  • refactor: A code change that neither fixes a bug, adds a feature, nor improves performance
  • style: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)
  • test: Adding missing tests or correcting existing tests

3. Deployment tag versioning

Has the version in params/version.go been updated?

  • This PR doesn't involve a new deployment, git tag, docker image tag, and it doesn't affect traces
  • Yes

4. Breaking change label

Does this PR have the breaking-change label?

  • This PR is not a breaking change
  • Yes

Summary by CodeRabbit

  • New Features

    • Enhanced batch processing with advanced version tracking and improved data validation for a more robust synchronization experience.
    • Introduced new structures and methods for handling committed batch metadata and rollup events more effectively.
    • Added support for blob hashes in batch commit arguments to accommodate additional data.
  • Refactor

    • Consolidated encoding and decoding logic with backward compatibility improvements for smoother operations.
    • Simplified the handling of committed batch metadata in various components.
  • Tests

    • Updated test scenarios to ensure reliability and better error reporting for revised data handling.
    • Added new test functions to validate the latest codec version and its associated logic.
  • Chore

    • Upgraded key dependencies to strengthen overall performance and stability.
    • Incremented the patch version to reflect updates in the versioning system.


coderabbitai bot commented Jan 22, 2025

Walkthrough

This pull request introduces versioning and enhanced error handling for committed batch metadata within the rollup event processing system. It adds new fields and types (e.g., Version, ChunkBlockRanges, LastL1MessageQueueHash) to distinguish between legacy (V0) and new (V7) metadata. The changes update encoding/decoding routines and test cases accordingly and modify commit batch functions (V1 and V7) to better handle blob hashes and block timestamps. Additionally, dependency versions have been updated, and rollup synchronization logic now validates batches with additional parent metadata.

Changes

File(s) Change Summary
core/rawdb/accessors_rollup_event.go
core/rawdb/accessors_rollup_event_test.go
Added new fields (Version, ChunkBlockRanges, LastL1MessageQueueHash) to the CommittedBatchMeta structure and introduced two helper types (committedBatchMetaV0 and committedBatchMetaV7); updated Read/Write functions for dual-version decoding and improved error handling; updated test cases to reflect these structural changes.
go.mod Updated the dependency version for github.com/scroll-tech/da-codec from v0.1.3-0.20241218102542-9852fa4e1be5 to v0.1.3-0.20250210041951-d028c537b995.
rollup/da_syncer/da/calldata_blob_source.go Modified the processRollupEventsToDA method to track commit transaction hashes and group events from the same transaction; added variables for commit tracking and a helper function getAndAppendCommitBatchDA to aggregate commit events.
rollup/da_syncer/da/commitV1.go Renamed NewCommitBatchDAWithBlob to NewCommitBatchDAV1; modified its signature by adding versionedHashes and l1BlockTime parameters and removed legacy logic for fetching versionedHashes and block header time.
rollup/da_syncer/da/commitV7.go Introduced a new file with the CommitBatchDAV7 struct and associated methods; implemented a constructor (NewCommitBatchDAV7) and new processing logic including the getL1MessagesV7 function, to handle DA batches with the new codec version and ensure robust error checking and consistent processing.
rollup/l1/abi.go
rollup/l1/reader.go
Added a new BlobHashes field to the CommitBatchArgs struct and updated the FetchCommitTxData method to assign BlobHashes from the transaction, thus expanding the data structure for batch commit information.
rollup/rollup_sync_service/rollup_sync_service.go
rollup/rollup_sync_service/rollup_sync_service_test.go
Updated batch validation and metadata retrieval functions to include the LastL1MessageQueueHash and parent committed batch metadata; adjusted method signatures and error handling to accommodate new CodecV7 requirements and simplified test validations by removing deprecated BlobVersionedHashes checks.
rollup/rollup_sync_service/testdata/blockTrace_06.json Introduced a new JSON structure containing detailed information about a blockchain block trace, including sections for coinbase, header, transactions, and execution results.
params/version.go Updated the VersionPatch constant from 6 to 7, indicating an increment in the patch version component of the current release.

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant RCM as ReadCommittedBatchMeta
    participant DecV7 as Decoder (V7)
    participant DecV0 as Decoder (V0)
    
    Caller->>RCM: Request batch metadata read
    RCM->>DecV7: Attempt V7 decode
    alt V7 decode successful
        DecV7-->>RCM: Return V7 metadata
    else V7 decode fails
        RCM->>DecV0: Attempt V0 decode
        DecV0-->>RCM: Return V0 metadata or error
    end
    RCM-->>Caller: Return metadata (or error)
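The try-V7-then-fall-back-to-V0 decode flow in the diagram can be sketched as follows. This is a runnable illustration only: the real implementation uses RLP decoding and the actual struct layouts in core/rawdb/accessors_rollup_event.go; JSON and the explicit version guard below are stand-ins so the sketch is self-contained.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical stand-ins for the two on-disk metadata layouts. In the real
// code these are RLP-encoded; JSON is used here only to keep the sketch
// runnable with the standard library.
type committedBatchMetaV7 struct {
	Version                uint8
	LastL1MessageQueueHash [32]byte
}

type committedBatchMetaV0 struct {
	Version             uint8
	BlobVersionedHashes [][32]byte
}

// readCommittedBatchMeta first attempts a V7 decode; if that fails (here,
// simulated by the version guard), it falls back to the legacy V0 layout.
func readCommittedBatchMeta(data []byte) (uint8, error) {
	var v7 committedBatchMetaV7
	if err := json.Unmarshal(data, &v7); err == nil && v7.Version >= 7 {
		return v7.Version, nil // V7 decode succeeded
	}
	var v0 committedBatchMetaV0
	if err := json.Unmarshal(data, &v0); err != nil {
		return 0, fmt.Errorf("failed to decode committed batch metadata: %w", err)
	}
	return v0.Version, nil // legacy V0 decode
}

func main() {
	v7Data, _ := json.Marshal(committedBatchMetaV7{Version: 7})
	v0Data, _ := json.Marshal(committedBatchMetaV0{Version: 4})
	a, _ := readCommittedBatchMeta(v7Data)
	b, _ := readCommittedBatchMeta(v0Data)
	fmt.Println(a, b) // 7 4
}
```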
sequenceDiagram
    participant ES as Rollup Event Stream
    participant CBS as CalldataBlobSource
    participant Helper as getAndAppendCommitBatchDA
    
    ES->>CBS: New rollup event arrives
    CBS->>CBS: Check commit transaction hash
    alt Transaction changed
        CBS->>Helper: Process and append previous commit events
        Helper-->>CBS: Return aggregated commit data
    else Same transaction
        CBS->>CBS: Append commit event to current batch
    end
    CBS-->>ES: Return processed batch data
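The grouping flow above, where commit events sharing a transaction hash are buffered and flushed together, can be sketched with a simplified event type (commitEvent and its fields are placeholders for the real rollup event types):

```go
package main

import "fmt"

// commitEvent is a hypothetical stand-in for the real L1 rollup commit event.
type commitEvent struct {
	TxHash     string
	BatchIndex uint64
}

// groupByCommitTx buffers consecutive commit events that share a transaction
// hash and flushes them as one group, mirroring the flow in the diagram:
// when the transaction changes, the previous group is processed, and any
// leftover events are flushed after the loop.
func groupByCommitTx(events []commitEvent) [][]commitEvent {
	var groups [][]commitEvent
	var current []commitEvent
	for _, ev := range events {
		if len(current) > 0 && current[0].TxHash != ev.TxHash {
			groups = append(groups, current) // transaction changed: flush
			current = nil
		}
		current = append(current, ev)
	}
	if len(current) > 0 {
		groups = append(groups, current) // leftover events after the loop
	}
	return groups
}

func main() {
	evs := []commitEvent{{"0xaa", 1}, {"0xaa", 2}, {"0xbb", 3}}
	fmt.Println(len(groupByCommitTx(evs))) // 2
}
```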

Possibly related PRs

  • refactor(rollup sync service): use CalldataBlobSource to retrieve data from L1 #1103: Both PRs enhance batch metadata handling around the CommittedBatchMeta structure and rely on CalldataBlobSource to retrieve data from L1.
  • refactor: use new da-codec interfaces #1068: Both PRs modify core/rawdb/accessors_rollup_event.go; the earlier PR removed functions related to batch chunk range management from that file, reflecting a shift in how batch data is handled.

Suggested reviewers

  • Thegaram
  • colinlyguo
  • 0xmountaintop

Poem

(_/)
I'm a rabbit, hopping through the code,
Leaping over bugs on every road.
Batch metadata now sings a brand new tune,
Versioned and verified beneath the coding moon.
With each skip and bound, our logic's light and true!


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3c21f4e and ca408bc.

📒 Files selected for processing (1)
  • params/version.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • params/version.go


@colinlyguo colinlyguo self-requested a review January 25, 2025 11:21
colinlyguo
colinlyguo previously approved these changes Jan 25, 2025
Member

@colinlyguo colinlyguo left a comment


LGTM.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
rollup/l1/abi.go (1)

246-258: Initialize BlobHashes in newCommitBatchArgsFromCommitBatchWithProof.

The function doesn't initialize the BlobHashes field in the returned CommitBatchArgs struct. This could lead to nil blob hashes even when they should be present.

Apply this diff to initialize the field:

 return &CommitBatchArgs{
   Version:                args.Version,
   ParentBatchHeader:      args.ParentBatchHeader,
   Chunks:                 args.Chunks,
   SkippedL1MessageBitmap: args.SkippedL1MessageBitmap,
+  BlobHashes:             nil, // Will be set by FetchCommitTxData
 }, nil
🧹 Nitpick comments (11)
rollup/da_syncer/da/commitV7.go (3)

21-31: Add explanatory docstrings for CommitBatchDAV7.
While this struct is relatively straightforward, adding docstrings for each field and its usage context would improve maintainability and clarity for new contributors.


46-49: Highlight intended time-unit in l1BlockTime parameter.
“l1BlockTime” is a uint64; ensure it’s always in seconds (or the correct unit). A clarifying comment or doc would help readers avoid confusion.


119-146: Optimize or defer memory allocation in Blocks().
The method merges L1 messages and transactions by copying them into a newly allocated slice. Consider deferring the merge or using a streaming approach if memory usage profiles become an issue.

core/rawdb/accessors_rollup_event.go (3)

31-36: Retain or remove BlobVersionedHashes in committedBatchMetaV0 based on usage.
You’ve kept BlobVersionedHashes for compatibility, but if it’s officially deprecated/unreferenced, ensure it isn’t incorrectly retained in future versions.


38-42: Add short doc comment to committedBatchMetaV7.
Even a brief note indicating the usage and difference from v0 aids readability.


187-216: Improve error messages in ReadCommittedBatchMeta.
When decoding fails for both v7 and v0, it would help to log or note which version attempts occurred. This helps diagnose data corruption or misalignment in the database.

rollup/da_syncer/da/calldata_blob_source.go (3)

99-119: Consider more granular scheduling for commit events.
You buffer commit events sharing the same transaction hash and flush them together. Ensure large sets of events in a single transaction won’t inflate memory usage. Possibly process in smaller batches if needed.


181-186: Add logs or metrics for leftover commit events.
Cleaning up leftover commit events after the loop is crucial. A debug log or metric can help detect unexpected buffering issues.


191-268: Protect getCommitBatchDA from partial mixing of versions.
You handle the same transaction’s events under a single version. If future transactions begin mixing versions (e.g., bridging partial v7 data with v8), consider adding assertions or warnings to prevent data mistakes.

rollup/rollup_sync_service/rollup_sync_service.go (1)

382-419: Optimize repeated L1 message queue computation.

Recomputing the message queue hash by reloading all blocks might be expensive for large batches. Consider caching or incremental hashing to avoid performance bottlenecks.
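A generic memoization pattern for this suggestion might look like the sketch below. Note the chained SHA-256 is only a stand-in for illustration, not Scroll's actual L1 message queue hash construction; queueHashCache and applyMessage are hypothetical names.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// applyMessage folds one L1 message into the running queue hash.
// This chained hash is a generic sketch, not the real construction.
func applyMessage(prev [32]byte, msg []byte) [32]byte {
	return sha256.Sum256(append(prev[:], msg...))
}

// queueHashCache memoizes the hash after each message index so that
// validating batch N does not reload and rehash all earlier messages.
type queueHashCache struct {
	hashes [][32]byte // hashes[i] = hash after applying messages[0..i]
}

func (c *queueHashCache) hashUpTo(msgs [][]byte, idx int) [32]byte {
	for len(c.hashes) <= idx {
		var prev [32]byte
		if n := len(c.hashes); n > 0 {
			prev = c.hashes[n-1]
		}
		c.hashes = append(c.hashes, applyMessage(prev, msgs[len(c.hashes)]))
	}
	return c.hashes[idx]
}

func main() {
	msgs := [][]byte{[]byte("m0"), []byte("m1"), []byte("m2")}
	var c queueHashCache
	h2 := c.hashUpTo(msgs, 2)    // computes and caches hashes 0..2
	again := c.hashUpTo(msgs, 2) // served from the cache, no rehashing
	fmt.Println(h2 == again)     // true
}
```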

rollup/da_syncer/da/commitV1.go (1)

45-45: Consider block time drift.

Relying on l1BlockTime for blob retrieval is sensible, but ensure that any skew or mismatch between the actual L1 block time and the provided timestamp does not cause retrieval of an unintended blob.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b56dd9f and e048f53.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (10)
  • core/rawdb/accessors_rollup_event.go (3 hunks)
  • core/rawdb/accessors_rollup_event_test.go (5 hunks)
  • go.mod (1 hunks)
  • rollup/da_syncer/da/calldata_blob_source.go (3 hunks)
  • rollup/da_syncer/da/commitV1.go (1 hunks)
  • rollup/da_syncer/da/commitV7.go (1 hunks)
  • rollup/l1/abi.go (1 hunks)
  • rollup/l1/reader.go (1 hunks)
  • rollup/rollup_sync_service/rollup_sync_service.go (7 hunks)
  • rollup/rollup_sync_service/rollup_sync_service_test.go (13 hunks)
✅ Files skipped from review due to trivial changes (1)
  • go.mod
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test
🔇 Additional comments (20)
rollup/da_syncer/da/commitV7.go (2)

33-88: Validate potential concurrency and retries for fetching blob data.
Currently, the function synchronously fetches the blob from the blob client. If the client is used concurrently, consider adding retry or backoff logic to handle transient issues. Also confirm that large blobs won't cause memory pressure.


58-66: Acknowledging past suggestion for error message format.
This error message matches a prior suggestion about displaying both the “correct” and “fetched” hash.

core/rawdb/accessors_rollup_event.go (2)

24-29: Ensure backward compatibility with newly added fields in CommittedBatchMeta.
The added fields (Version, ChunkBlockRanges, LastL1MessageQueueHash) expand the structure. Carefully confirm that old data remains readable and consistent.


163-184: Confirm version thresholds in WriteCommittedBatchMeta.
The check “if encoding.CodecVersion(...) < encoding.CodecV7” branches to v0 storage. Make sure any future versions (e.g., V8+) are handled or tested.

rollup/da_syncer/da/calldata_blob_source.go (2)

11-11: Validate import usage.
Importing “common” is essential for transaction-hash handling. Confirm that no additional re-exports are needed if you rely on the go-ethereum “common” package in other spots.


145-176: Revisit partial commit events before revert/finalize.
You flush any pending commit events right before a revert/finalize event. Confirm the ordering logic if multiple reverts/finalizes appear consecutively.

rollup/rollup_sync_service/rollup_sync_service.go (6)

14-14: Minor import addition.

This import for “common.Hash” cleanly facilitates the new hash-based workflow.


269-275: Potential nil parent metadata scenario.

Although reading parent metadata only when index > 0 is correct, consider verifying that the returned parentCommittedBatchMeta is not nil. If the parent’s data has been pruned or was never written, subsequent logic could fail.


286-286: Check for consistent validateBatch usage.

Ensure all call sites are updated to pass the newly added parentCommittedBatchMeta; any missed spot could lead to unexpected errors or panics.


371-374: Clarify default chunk range for the genesis batch.

Returning a chunk range of [0,0] for batchIndex=0 is sensible, but verify that no logic implicitly assumes an empty array or encounters an off-by-one edge case here.


421-424: Initialize new metadata fields carefully.

The assignment of Version and LastL1MessageQueueHash appears correct. Continue ensuring that the encoded data always matches these metadata fields, as downstream components rely on accurate versioning.


451-493: Thorough handling of CodecV7 vs. legacy.

Differentiating CodecV7 from previous versions is well-structured. Test transitional scenarios (e.g., from V0 to V7) to validate that lastL1MessageQueueHash is computed and utilized correctly.

rollup/da_syncer/da/commitV1.go (1)

24-33: Function signature broadening.

Adding versionedHashes and l1BlockTime clarifies data flow. Confirm that callers do not pass invalid or inconsistent parameters, such as multiple versionedHashes or an incorrect block time.

core/rawdb/accessors_rollup_event_test.go (4)

6-7: Switching to testify/require.

Using “require” for assertions strengthens test reliability and readability.


152-223: Improved test coverage.

The expanded scenarios for Version and LastL1MessageQueueHash in TestWriteReadDeleteCommittedBatchMeta significantly increase confidence in the new fields.


229-242: Overwrite tests ensure robustness.

Testing overwriting the same batch index confirms that both old and new metadata handle collisions as expected.


271-271: Include LastL1MessageQueueHash in comparisons.

The updated compareCommittedBatchMeta properly checks LastL1MessageQueueHash, reducing the risk of storing incomplete metadata.

Also applies to: 285-285

rollup/l1/reader.go (1)

382-382: LGTM! Properly propagating blob hashes.

The change correctly assigns the blob hashes from the transaction to the CommitBatchArgs structure, which is essential for supporting CodecV7 in L1 follower mode.

rollup/l1/abi.go (1)

237-237: LGTM! Added support for blob hashes.

The new field properly extends the CommitBatchArgs struct to support blob hashes, which is required for CodecV7 support.

rollup/rollup_sync_service/rollup_sync_service_test.go (1)

256-257: LGTM! Simplified test setup.

The changes correctly remove the BlobVersionedHashes field from CommittedBatchMeta initialization in tests, as blob hashes are now handled through CommitBatchArgs. The test setup is now simpler and more focused on the version field.

Also applies to: 284-285, 321-322, 348-349, 383-384, 410-411, 447-449, 474-476, 505-507, 532-534, 558-560, 584-586, 620-622, 624-626, 628-630, 632-634


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
rollup/rollup_sync_service/rollup_sync_service_test.go (1)

661-808: Well-structured test coverage for CodecV7.

The test function provides comprehensive coverage for CodecV7 batch validation, including:

  • Single and multiple batch validation
  • Message queue hash progression
  • Parent batch hash linking
  • Block number continuity

Consider adding test cases for:

  • Error scenarios (invalid parent hash, incorrect message queue hash)
  • Edge cases (empty blocks, maximum message count)
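The suggested error scenarios could slot into a table-driven test like the sketch below; validateParentLink is a hypothetical stand-in for the service's real validation, shown only to illustrate the pattern:

```go
package main

import (
	"errors"
	"fmt"
)

// validateParentLink is a hypothetical stand-in for the CodecV7 batch
// validation in rollup_sync_service.go; only the parent-hash check is
// sketched here.
func validateParentLink(parentBatchHash, claimedParentHash string) error {
	if parentBatchHash != claimedParentHash {
		return errors.New("parent batch hash mismatch")
	}
	return nil
}

func main() {
	cases := []struct {
		name          string
		parent, claim string
		wantErr       bool
	}{
		{"linked batches", "0xabc", "0xabc", false},
		{"invalid parent hash", "0xabc", "0xdef", true},
	}
	for _, tc := range cases {
		gotErr := validateParentLink(tc.parent, tc.claim) != nil
		fmt.Printf("%s: pass=%v\n", tc.name, gotErr == tc.wantErr)
	}
}
```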
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e048f53 and 80976ad.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (3)
  • go.mod (1 hunks)
  • rollup/rollup_sync_service/rollup_sync_service_test.go (15 hunks)
  • rollup/rollup_sync_service/testdata/blockTrace_06.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • go.mod
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test
🔇 Additional comments (10)
rollup/rollup_sync_service/rollup_sync_service_test.go (2)

256-257: LGTM! Consistent updates to test metadata.

The changes consistently update the CommittedBatchMeta instances across all test functions to use only the Version field, removing the BlobVersionedHashes field. This aligns with the PR objectives of supporting CodecV7.

Also applies to: 283-285, 320-322, 347-349, 383-385, 410-412, 447-449, 474-476, 505-507, 532-534, 558-560, 584-586, 620-622, 624-626, 628-630, 632-634


819-822: LGTM! Clean helper function implementation.

The helper function is concise and effectively serves its purpose in the CodecV7 tests.

rollup/rollup_sync_service/testdata/blockTrace_06.json (8)

1-9: Coinbase Object Structure – Looks Consistent
The Coinbase section defines the expected fields (address, nonce, balance, keccakCodeHash, poseidonCodeHash, codeSize) using a mix of numeric and hex string formats. Verify that the numeric fields (like nonce and codeSize) match the expected types in downstream processing.


10-28: Header Data Validation – Formatting and Null Field Check
The header provides key block metadata (parentHash, miner, stateRoot, transactionRoot, etc.) using consistent hexadecimal representations. Note that "baseFeePerGas" is null; please confirm that this is intentional (e.g. reflecting a pre-EIP-1559 state or a specific test scenario).


29-31: Row Consumption Field – Empty Array as Expected
The "row_consumption" field is represented as an empty array. This appears correct for test data with no row consumption output. Ensure that consuming code can correctly handle an empty list.


32-63: Transactions Array – Consistency and Duplicate txHash Review
Two transactions are provided with detailed fields (type, nonce, txHash, gas, gasPrice, from, to, chainId, value, data, isCreate, v, r, s). Note that both transactions share the same "txHash" value. Verify that duplicate txHash entries are intentional in this test scenario and that downstream logic is prepared to handle such cases if they occur.


65-118: StorageTrace Block – Proofs and Case Sensitivity Considerations
The "storageTrace" object correctly lists "rootBefore", "rootAfter", and a "proofs" dictionary keyed by addresses. The proofs arrays appear comprehensive. Observe that some address keys use mixed-case formatting (e.g. "0x1a258d17bF244C4dF02d40343a7626A9D321e105"); ensure that the system’s address comparison logic accounts for potential case variations.


120-168: StorageProofs Structure – Detailed and Well-Formed
The "storageProofs" section provides nested proofs per address and includes extra nested data (e.g. an "extraData" object with a "proofList"). The structure is detailed and appears to conform to the test data schema. Double-check that all nested keys and arrays match the expected format for the rollup verifier’s processing routines.


169-868: ExecutionResults – Comprehensive Trace with Detailed StructLogs
This section captures the execution outcome including gas usage, failure indicator, sender and receiver details, post-execution account states, the contract bytecode, and an extensive "structLogs" array of EVM opcodes. In the structLogs, note that one entry (around line 835) registers an "error" with “out of gas” – confirm that this is an intentional negative test scenario. Overall, the diverse opcode logs provide a rich trace; however, given the volume, verify that the testing framework and log analyzers are optimized to handle such verbose data.


869-870: Withdraw Trie Root – Zeroed Value Verification
The "withdraw_trie_root" field is set to a string of zeros. Ensure that a zeroed root is the desired output in this test case (e.g. indicating an absence of withdrawal data) and is consistent with other parts of the system’s expectations for this field.

colinlyguo
colinlyguo previously approved these changes Feb 11, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
core/rawdb/accessors_rollup_event.go (2)

196-207: Enhance version validation.

While the version check exists, consider adding an upper bound check to ensure the version is not higher than the maximum supported version.

 if encoding.CodecVersion(cbm7.Version) < encoding.CodecV7 {
     return nil, fmt.Errorf("unexpected committed batch metadata version: batch index %d, version %d", batchIndex, cbm7.Version)
 }
+if encoding.CodecVersion(cbm7.Version) > encoding.MaxCodecVersion {
+    return nil, fmt.Errorf("unsupported committed batch metadata version: batch index %d, version %d", batchIndex, cbm7.Version)
+}

209-219: Consider adding version validation for V0.

Similar to V7 validation, consider adding version validation for V0 to ensure data integrity.

 if err = rlp.Decode(bytes.NewReader(data), cbm0); err != nil {
     return nil, fmt.Errorf("failed to decode committed batch metadata: batch index %d, err: %w", batchIndex, err)
 }
+if encoding.CodecVersion(cbm0.Version) >= encoding.CodecV7 {
+    return nil, fmt.Errorf("unexpected version for V0 format: batch index %d, version %d", batchIndex, cbm0.Version)
+}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4022989 and d4cc897.

📒 Files selected for processing (1)
  • core/rawdb/accessors_rollup_event.go (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test
🔇 Additional comments (4)
core/rawdb/accessors_rollup_event.go (4)

23-29: LGTM! Well-structured metadata versioning.

The CommittedBatchMeta struct is well-designed with clear versioning support and backward compatibility considerations. The comment for the new field clearly indicates its version requirement.


31-36: LGTM! Good backward compatibility.

The committedBatchMetaV0 struct maintains backward compatibility while clearly documenting the unused field with a helpful comment.


162-184: LGTM! Clean version-based encoding.

The function elegantly handles different versions of metadata while maintaining backward compatibility. The error handling is robust and the version check is properly implemented.


187-194: LGTM! Improved error handling.

Good improvement in error handling by returning errors instead of using log.Crit. This allows better error recovery in the calling code.

colinlyguo
colinlyguo previously approved these changes Feb 12, 2025
Thegaram
Thegaram previously approved these changes Feb 12, 2025
@github-actions github-actions bot dismissed stale reviews from Thegaram and colinlyguo via 6d5af23 February 12, 2025 13:54

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
rollup/da_syncer/da/calldata_blob_source.go (1)

234-259: Consider using default for CodecV7 and above.

Based on a previous review comment, using default instead of an explicit case for V7 would make it easier to handle future ZK bug fixes that might require a new verifier version without changing the batch format.

Apply this diff to improve future maintainability:

-		case 7:
+		default: // CodecVersion 7 and above
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6d5af23 and 3c21f4e.

📒 Files selected for processing (1)
  • rollup/da_syncer/da/calldata_blob_source.go (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test
🔇 Additional comments (5)
rollup/da_syncer/da/calldata_blob_source.go (5)

11-11: LGTM!

The addition of the common package import is necessary for using the Hash type in the new implementation.


99-121: LGTM! Well-structured implementation for batch processing.

The implementation effectively tracks and processes commit events from the same transaction together, with clear comments explaining the logic. The helper function getAndAppendCommitBatchDA provides a clean way to process batched events.


134-144: LGTM! Robust handling of commit events.

The code correctly identifies and processes commit events from different transactions while maintaining the order of events. The implementation aligns with the comment from the previous review about handling multiple commit batch events belonging to the same transaction.


145-186: LGTM! Consistent cleanup of pending commit events.

The code ensures that any pending commit events are processed before handling revert or finalize events, maintaining data consistency.


191-233: LGTM! Thorough validation of commit events.

The implementation includes comprehensive validation of commit events, ensuring:

  • Non-empty commit events list
  • Matching transaction hashes
  • Matching block numbers and hashes
  • Sequential batch indices

colinlyguo
colinlyguo previously approved these changes Feb 13, 2025
Thegaram
Thegaram previously approved these changes Feb 13, 2025
@github-actions github-actions bot dismissed stale reviews from Thegaram and colinlyguo via ca408bc February 13, 2025 09:16
@Thegaram Thegaram merged commit 7de11ed into develop Feb 13, 2025
1 check passed
@Thegaram Thegaram deleted the jt/l1-follower-mode-update-da-codec branch February 13, 2025 09:28