
refactor(rollup sync service): use CalldataBlobSource to retrieve data from L1 #1103

Merged: 26 commits merged into develop from jt/rollup-verifier-use-code-from-l1-follower on Feb 7, 2025

Conversation


@jonastheis commented Dec 26, 2024

1. Purpose or design rationale of this PR

This PR refactors the rollup sync service to use CalldataBlobSource to retrieve data from L1. This removes a lot of duplicate code and makes future upgrades easier to support.

Specifically, before this PR the rollup sync service relied solely on the batch information contained in the commit transaction's calldata. However, a future upgrade will move this data from calldata to the blob, so the rollup sync service needs to be able to additionally retrieve and interpret blob data.

This functionality already exists in the L1 follower mode, which is why this PR reuses its internal component CalldataBlobSource for the job. To support a future upgrade (a new batch version), changes will only need to be made within CalldataBlobSource.
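To make the reuse concrete, here is a minimal sketch of what the consumption loop might look like. Only CalldataBlobSource, SetL1Height, da.EntryWithBlocks, and getCommittedBatchMeta appear in this PR; the NextData accessor and the surrounding control flow are assumptions for illustration.

// Hedged sketch of the rollup sync service consuming the shared DA source.
src.SetL1Height(lastSyncedL1Height) // resume from the last processed L1 block

entries, err := src.NextData() // assumed accessor: fetch and decode the next rollup events
if err != nil {
    return fmt.Errorf("failed to retrieve DA entries: %w", err)
}
for _, entry := range entries {
    committedBatch, ok := entry.(da.EntryWithBlocks)
    if !ok {
        continue // revert/finalize events carry no block data
    }
    meta, err := s.getCommittedBatchMeta(committedBatch)
    if err != nil {
        return fmt.Errorf("failed to get committed batch meta: %w", err)
    }
    // validate meta against the local chain and persist it
    _ = meta
}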

2. PR title

Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:

  • build: Changes that affect the build system or external dependencies (example scopes: yarn, eslint, typescript)
  • ci: Changes to our CI configuration files and scripts (example scopes: vercel, github, cypress)
  • docs: Documentation-only changes
  • feat: A new feature
  • fix: A bug fix
  • perf: A code change that improves performance
  • refactor: A code change that neither fixes a bug, adds a feature, nor improves performance
  • style: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)
  • test: Adding missing tests or correcting existing tests

3. Deployment tag versioning

Has the version in params/version.go been updated?

  • This PR doesn't involve a new deployment, git tag, docker image tag, and it doesn't affect traces
  • Yes

4. Breaking change label

Does this PR have the breaking-change label?

  • This PR is not a breaking change
  • Yes

Summary by CodeRabbit

  • New Features

    • Improved configuration flexibility for data availability, enabling independent endpoint settings.
    • Introduced additional state management capabilities for more robust processing of rollup events and batch data.
    • Added new methods for managing L1 height and finalized states within the calldata blob source.
    • Enhanced error handling and functionality in the rollup synchronization service.
  • Refactor

    • Streamlined event processing and batch validation logic to enhance reliability and error handling.
    • Consolidated event data structures and removed outdated components to simplify system operations.
  • Tests

    • Expanded and reorganized tests to ensure consistent and stable performance in batch and event handling.
    • Introduced new test cases focusing on batch metadata retrieval and validation.
    • Updated tests to align with new event handling structures and configurations.


coderabbitai bot commented Dec 26, 2024

Walkthrough

This change set refactors and extends multiple components related to data availability (DA) and rollup synchronization. Key modifications include updating flag configuration logic, enhancing DA struct definitions and methods, and migrating from field-based to event-based data access within commit, finalize, and revert operations. Several functions and tests have been updated accordingly, while obsolete files and test data have been removed. The overall control flow in the RollupSyncService is modified to improve error handling and batch metadata processing.

Changes

File(s) and change summary:

  • cmd/utils/flags.go: Moved API endpoint flag assignments outside the DASyncEnabledFlag conditional for independent configuration.
  • core/rawdb/accessors_rollup_event.go: Added the BlobVersionedHashes field to CommittedBatchMeta and reformatted the Version field.
  • eth/backend.go: Modified the instantiation of rollupSyncService to include config.DA in its parameters.
  • rollup/da_syncer/da/calldata_blob_source.go, rollup/da_syncer/da/commitV0.go, rollup/da_syncer/da/commitV1.go, rollup/da_syncer/da/da.go, rollup/da_syncer/da/finalize.go, rollup/da_syncer/da/revert.go: Updated DA syncer files: added new methods (e.g., SetL1Height, L1Finalized), replaced individual struct fields with an encapsulated event field, updated version types, and added interface methods for event-based access.
  • rollup/l1/abi.go, rollup/l1/reader.go, rollup/l1/types.go: Introduced a new NewFinalizeBatchEvent function, renamed FetchTxBlobHash to FetchTxBlobHashes (returning a slice), and added MockNopClient with mock implementations for testing.
  • rollup/rollup_sync_service/abi.go, rollup/rollup_sync_service/abi_test.go, rollup/rollup_sync_service/l1client.go, rollup/rollup_sync_service/l1client_test.go: Removed obsolete ABI and L1 client files previously used for rollup event handling.
  • rollup/rollup_sync_service/rollup_sync_service.go, rollup/rollup_sync_service/rollup_sync_service_test.go: Modified RollupSyncService to accept a new da_syncer.Config parameter, updated methods (e.g., fetchRollupEvents, getCommittedBatchMeta), added blockRangesFromChunks, and restructured/renamed tests for batch metadata handling.
  • rollup/rollup_sync_service/testdata/*: Deleted obsolete test data JSON files for various codec versions.
  • trie/zk_trie_test.go: Added new test functions and helper methods to compare the equivalence of zkTrie and Merkle Patricia Trie implementations.
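The "encapsulated event field" change above is the core of the refactor. Below is a hedged sketch of the pattern, reconstructed from names that appear in this PR (FinalizeBatch, NewFinalizeBatch, the Event() accessor); the exact field layout is an assumption.

// Hedged sketch of the event-centric DA entry pattern.
type FinalizeBatch struct {
    event *l1.FinalizeBatchEvent // single source of truth instead of copied fields
}

func NewFinalizeBatch(event *l1.FinalizeBatchEvent) *FinalizeBatch {
    return &FinalizeBatch{event: event}
}

// Event exposes the underlying rollup event, per the new Entry interface method.
func (f *FinalizeBatch) Event() l1.RollupEvent {
    return f.event
}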

Sequence Diagram(s)

sequenceDiagram
    participant RS as RollupSyncService
    participant CDS as CalldataBlobSource
    participant DB as Database
    participant DA as DA Syncer Config

    RS->>+CDS: Initialize with DA config
    CDS-->>RS: Blob source ready
    RS->>RS: Call fetchRollupEvents() → updateRollupEvents()
    RS->>+DB: getCommittedBatchMeta(commitedBatch)
    DB-->>RS: Return batch metadata
    RS->>RS: Validate batch using new event fields

Suggested labels

bump-version

Suggested reviewers

  • georgehao
  • 0xmountaintop
  • jonastheis

Poem

I'm a bunny with a carrot grin,
Hop-ping over code changes within.
Flags now dance outside their nest,
DA and events all feeling their best.
In fields of commits, I celebrate with a twitch and a spin!
🥕🐇


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3335654 and 8d45851.

📒 Files selected for processing (1)
  • params/version.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test
🔇 Additional comments (1)
params/version.go (1)

27-27: Verify if version increment is necessary.

The PR objectives state that no versioning updates are required, yet the patch version is being incremented from 4 to 5. Since this PR is a refactor that doesn't introduce breaking changes, please confirm if this version increment is intentional and necessary.

@jonastheis marked this pull request as ready for review December 27, 2024 03:21
@jonastheis force-pushed the jt/rollup-verifier-use-code-from-l1-follower branch from 6e06325 to da81a2e on December 27, 2024 03:27

@colinlyguo left a comment


nice work and LGTM.

colinlyguo previously approved these changes Dec 27, 2024
Base automatically changed from jt/l1-follower-mode-l1-reader to develop January 21, 2025 01:30
@jonastheis dismissed colinlyguo’s stale review January 21, 2025 01:30

The base branch was changed.


@coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (7)
rollup/l1/types.go (1)

24-60: Add blob-specific testing capabilities to support the refactoring.

Since this PR refactors the rollup sync service to use CalldataBlobSource, the mock client should include capabilities for testing blob-related scenarios.

Consider adding:

  1. Methods to simulate blob data retrieval
  2. Configuration options for different blob formats
  3. Test helpers for verifying blob-related behavior

Example additions:

type MockNopClient struct {
    // ... existing fields ...
    blobData     []byte
    blobVersion  uint
}

// SetBlobData configures mock blob data
func (m *MockNopClient) SetBlobData(data []byte, version uint) {
    m.blobData = data
    m.blobVersion = version
}

// Add test helper methods for blob verification
func (m *MockNopClient) VerifyBlobAccess() bool {
    return m.blobData != nil
}
rollup/rollup_sync_service/rollup_sync_service.go (1)

357-358: Fix typo in parameter name commitedBatch

The parameter name commitedBatch in the getCommittedBatchMeta function has a typo. It should be committedBatch to maintain consistency and avoid confusion.

Apply this diff to correct the typo:

-func (s *RollupSyncService) getCommittedBatchMeta(commitedBatch da.EntryWithBlocks) (*rawdb.CommittedBatchMeta, error) {
+func (s *RollupSyncService) getCommittedBatchMeta(committedBatch da.EntryWithBlocks) (*rawdb.CommittedBatchMeta, error) {
rollup/rollup_sync_service/rollup_sync_service_test.go (1)

164-182: Avoid panics in unimplemented mock methods

In the mockEntryWithBlocks struct, the unimplemented methods panic with "implement me". To improve test safety and prevent accidental panics, consider returning default zero values or implementing minimal functionality.

For example, modify the Type method to return a default value:

-func (m mockEntryWithBlocks) Type() da.Type {
-    panic("implement me")
+func (m mockEntryWithBlocks) Type() da.Type {
+    return da.UnknownType
}

Apply similar changes to other methods as needed.

rollup/da_syncer/da/calldata_blob_source.go (1)

113-128: Reduce code duplication in type assertions.

The type assertion pattern is repeated for each event type. Consider extracting it into a helper function to improve maintainability.

+func assertEventType[T l1.RollupEvent](event l1.RollupEvent) (T, error) {
+    typed, ok := event.(T)
+    if !ok {
+        var zero T
+        return zero, fmt.Errorf("unexpected type of rollup event: %T", event)
+    }
+    return typed, nil
+}

 case l1.RevertEventType:
-    revertEvent, ok := rollupEvent.(*l1.RevertBatchEvent)
-    if !ok {
-        return nil, fmt.Errorf("unexpected type of rollup event: %T", rollupEvent)
-    }
+    revertEvent, err := assertEventType[*l1.RevertBatchEvent](rollupEvent)
+    if err != nil {
+        return nil, err
+    }
     entry = NewRevertBatch(revertEvent)

 case l1.FinalizeEventType:
-    finalizeEvent, ok := rollupEvent.(*l1.FinalizeBatchEvent)
-    if !ok {
-        return nil, fmt.Errorf("unexpected type of rollup event: %T", rollupEvent)
-    }
+    finalizeEvent, err := assertEventType[*l1.FinalizeBatchEvent](rollupEvent)
+    if err != nil {
+        return nil, err
+    }
     entry = NewFinalizeBatch(finalizeEvent)
rollup/da_syncer/da/commitV0.go (1)

74-84: Document BlobVersionedHashes implementation.

Please add a comment explaining why BlobVersionedHashes returns nil for V0 commits. This helps clarify the expected behavior for this version.

+// BlobVersionedHashes returns nil as V0 commits do not support blob data.
 func (c *CommitBatchDAV0) BlobVersionedHashes() []common.Hash {
     return nil
 }
core/rawdb/accessors_rollup_event.go (1)

22-22: Consider documenting the versioned hash format.

While the comment indicates this field is unused and kept for compatibility, it would be helpful to document the expected format of the versioned hashes for future reference.
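For instance, the field could carry a short format note. The comment wording and JSON tag below are suggestions, not code from the PR; the hash layout is the standard EIP-4844 one.

// BlobVersionedHashes are the EIP-4844 versioned hashes of the blobs referenced
// by the batch's commit transaction (0x01 || sha256(kzg_commitment)[1:]).
// Nil for calldata-only batches. Currently unused; kept for compatibility.
BlobVersionedHashes []common.Hash `json:"blobVersionedHashes,omitempty"`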

rollup/da_syncer/da/commitV1.go (1)

38-48: Consider enhancing error handling for versioned hashes validation.

While the validation is correct, consider providing more context in the error message about the expected behavior.

-    return nil, fmt.Errorf("unexpected number of versioned hashes: %d", len(versionedHashes))
+    return nil, fmt.Errorf("expected exactly one versioned hash for batch %d, but got %d", commitEvent.BatchIndex().Uint64(), len(versionedHashes))
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e5dfc75 and d222f58.

📒 Files selected for processing (21)
  • cmd/utils/flags.go (1 hunks)
  • core/rawdb/accessors_rollup_event.go (1 hunks)
  • eth/backend.go (1 hunks)
  • rollup/da_syncer/da/calldata_blob_source.go (2 hunks)
  • rollup/da_syncer/da/commitV0.go (5 hunks)
  • rollup/da_syncer/da/commitV1.go (3 hunks)
  • rollup/da_syncer/da/da.go (2 hunks)
  • rollup/da_syncer/da/finalize.go (2 hunks)
  • rollup/da_syncer/da/revert.go (2 hunks)
  • rollup/l1/abi.go (1 hunks)
  • rollup/l1/reader.go (1 hunks)
  • rollup/l1/types.go (1 hunks)
  • rollup/rollup_sync_service/abi.go (0 hunks)
  • rollup/rollup_sync_service/abi_test.go (0 hunks)
  • rollup/rollup_sync_service/l1client.go (0 hunks)
  • rollup/rollup_sync_service/l1client_test.go (0 hunks)
  • rollup/rollup_sync_service/rollup_sync_service.go (10 hunks)
  • rollup/rollup_sync_service/rollup_sync_service_test.go (9 hunks)
  • rollup/rollup_sync_service/testdata/commitBatchWithBlobProof_input_codecv3.json (0 hunks)
  • rollup/rollup_sync_service/testdata/commitBatch_input_codecv1.json (0 hunks)
  • rollup/rollup_sync_service/testdata/commitBatch_input_codecv2.json (0 hunks)
💤 Files with no reviewable changes (7)
  • rollup/rollup_sync_service/l1client_test.go
  • rollup/rollup_sync_service/testdata/commitBatchWithBlobProof_input_codecv3.json
  • rollup/rollup_sync_service/abi_test.go
  • rollup/rollup_sync_service/abi.go
  • rollup/rollup_sync_service/testdata/commitBatch_input_codecv2.json
  • rollup/rollup_sync_service/l1client.go
  • rollup/rollup_sync_service/testdata/commitBatch_input_codecv1.json
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test
🔇 Additional comments (17)
rollup/l1/types.go (1)

24-24: LGTM! Clear and conventional mock struct naming.

The empty struct follows Go conventions for mock implementations.

rollup/da_syncer/da/finalize.go (1)

7-9: LGTM!

The refactoring simplifies the FinalizeBatch struct by encapsulating the event data, enhancing code readability and maintainability.

rollup/da_syncer/da/calldata_blob_source.go (1)

Line range hint 138-166: LGTM! Error handling is thorough and consistent.

The error handling provides good context by including batch indices and transaction hashes in error messages.

rollup/da_syncer/da/da.go (1)

32-32: LGTM! Interface changes align with blob data support.

The new methods enhance the interfaces to support blob data retrieval while maintaining backward compatibility:

  • Event() provides access to the underlying rollup event
  • Version(), Chunks(), and BlobVersionedHashes() support blob data handling

Also applies to: 38-40
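Read together, the extended interfaces might look roughly like this. The shape is inferred from the methods listed above rather than copied from the diff, and the element type encoding.DAChunkRawTx is an assumption.

// Hedged reconstruction of the extended interfaces from the methods named above.
type Entry interface {
    Type() Type
    Event() l1.RollupEvent // new: access to the underlying rollup event
}

type EntryWithBlocks interface {
    Entry
    Version() encoding.CodecVersion     // new: codec version of the committed batch
    Chunks() []*encoding.DAChunkRawTx   // new: decoded chunk data, assumed element type
    BlobVersionedHashes() []common.Hash // new: nil for calldata-only batches
}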

rollup/da_syncer/da/revert.go (1)

Line range hint 7-29: LGTM! RevertBatch refactor is clean and consistent.

The struct has been simplified to use the event-centric approach, maintaining consistency with other event types.

rollup/da_syncer/da/commitV0.go (2)

18-25: LGTM! Struct changes improve type safety.

The changes enhance type safety with encoding.CodecVersion and maintain consistency with the event-centric approach.


Line range hint 141-142: Address TODOs for block metadata.

The method contains TODOs for difficulty and extra data values. These should be replaced with proper implementations.

Would you like me to help implement proper difficulty and extra data handling or create an issue to track this task?
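For reference, resolving those TODOs could be as simple as pinning documented constants; the values below are assumptions, not decisions made in this PR.

// Hedged sketch: placeholder values for header fields the DA payload does not carry.
header.Difficulty = big.NewInt(1) // assumption: constant difficulty for derived L2 blocks
header.Extra = []byte{}           // assumption: empty extra data until specified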

core/rawdb/accessors_rollup_event.go (1)

21-22: LGTM! Field formatting is consistent with codebase style.

The Version field formatting maintains consistency with the codebase style.

rollup/l1/reader.go (2)

142-142: LGTM! Direct return of transaction data.

Simple and efficient implementation.


146-158: Improved error handling and support for multiple blob hashes.

The changes enhance error reporting and support retrieving multiple blob hashes, which aligns with the PR's objective to support future blob data format. The error messages are descriptive and include relevant context.
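A hedged usage sketch follows; FetchTxBlobHashes and its slice return come from this PR, while the receiver and exact signature are assumptions.

// Hedged sketch: fetching all blob hashes referenced by a commit transaction.
versionedHashes, err := reader.FetchTxBlobHashes(tx) // assumed signature
if err != nil {
    return nil, fmt.Errorf("failed to fetch blob hashes of tx %s: %w", tx.Hash().Hex(), err)
}
if len(versionedHashes) == 0 {
    return nil, fmt.Errorf("no blob versioned hashes found for tx %s", tx.Hash().Hex())
}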

eth/backend.go (1)

248-248: LGTM! Added DA config to RollupSyncService initialization.

The change properly integrates the data availability configuration into the rollup sync service initialization.

rollup/l1/abi.go (1)

161-179: LGTM! Well-structured constructor for FinalizeBatchEvent.

The constructor provides a clean and type-safe way to create FinalizeBatchEvent instances. Parameter ordering and naming are consistent with the struct fields.

cmd/utils/flags.go (1)

1632-1640: LGTM! Clean implementation of DA configuration flags.

The implementation follows the established pattern for flag handling and properly validates flag existence before accessing values.

rollup/da_syncer/da/commitV1.go (4)

20-21: LGTM! Well-structured field addition.

The versionedHashes field is appropriately typed and positioned in the struct.


81-89: LGTM! Clean constructor implementation.

The constructor properly initializes both v0 and v1 fields while maintaining backward compatibility.


95-98: LGTM! Clean getter implementation.

The BlobVersionedHashes method follows Go conventions for getters.


Line range hint 1-98: Verify the impact on batch processing performance.

The implementation looks solid and well-structured. However, since this changes how batch data is processed, it would be good to verify:

  1. Performance impact of fetching and validating versioned hashes
  2. Memory usage with the new blob handling
  3. Backward compatibility with existing batches

Run the following script to check for potential performance impact:

✅ Verification successful

Implementation follows performance best practices

The code follows an efficient implementation pattern:

  • Uses standard Go memory management without excessive allocations
  • Performs only necessary cryptographic validations
  • Maintains backward compatibility through clean inheritance

The added overhead from versioned hash validation is an essential security requirement that cannot be optimized further.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for potential performance bottlenecks in batch processing

# Test 1: Search for batch processing metrics
rg -A 2 "batch.*process.*time|batch.*duration"

# Test 2: Search for memory allocation patterns in blob handling
rg -A 2 "make.*\[\].*blob|new.*blob|alloc.*blob"

# Test 3: Check for backward compatibility handling
ast-grep --pattern 'if $version == "v0" || $version == "V0" {
  $$$
}'

Length of output: 10495

colinlyguo previously approved these changes Jan 25, 2025
@jonastheis changed the base branch from develop to omerfirmak/mpt January 28, 2025 03:04

@coderabbitai bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.

Actionable comments posted: 13

🧹 Nitpick comments (31)
core/state/snapshot/generate.go (1)

621-622: LGTM! Consider extracting the account struct definition.

The RLP tag changes are consistent with the other files. However, consider extracting this anonymous struct definition to a named type to improve code reusability and maintain the RLP tags in a single location.

+// snapshotAccount represents the Ethereum account structure used in snapshots
+type snapshotAccount struct {
+    Nonce            uint64
+    Balance          *big.Int
+    Root             common.Hash
+    KeccakCodeHash   []byte
+    PoseidonCodeHash []byte `rlp:"-"`
+    CodeSize         uint64 `rlp:"-"`
+}
+
 func (dl *diskLayer) generate(stats *generatorStats) {
     // ...
-    var acc struct {
-        Nonce            uint64
-        Balance          *big.Int
-        Root             common.Hash
-        KeccakCodeHash   []byte
-        PoseidonCodeHash []byte `rlp:"-"`
-        CodeSize         uint64 `rlp:"-"`
-    }
+    var acc snapshotAccount
     // ...
 }
rollup/pipeline/pipeline.go (3)

Line range hint 231-378: Consider enhancing error handling for L1 message transactions.

The error handling for L1 message transactions could be more explicit to aid in debugging issues during the transition to blob-based data retrieval.

Consider adding more context to error messages:

 if p.txs.Len() == 0 && tx.IsL1MessageTx() && err != nil {
-    // L1 message errored as the first txn, skip
+    // L1 message errored as the first transaction, skip and increment index
+    log.Debug("Skipping failed L1 message transaction",
+        "index", tx.AsL1MessageTx().QueueIndex,
+        "error", err)
     p.nextL1MsgIndex = tx.AsL1MessageTx().QueueIndex + 1
 }

Line range hint 379-486: Document the channel buffer sizing strategy.

The downstreamChCapacity function's implementation could benefit from documentation explaining the rationale behind the buffer sizing strategy.

Consider adding more detailed documentation:

 // downstreamChCapacity returns the channel capacity that should be used for downstream channels.
-// It aims to minimize stalls caused by different computational costs of different transactions
+// It aims to minimize stalls caused by different computational costs of different transactions.
+// The capacity is set to either 1 or MaxTxPerBlock (if configured) to:
+// 1. Ensure consistent memory usage
+// 2. Prevent excessive buffering that could lead to increased latency
+// 3. Maintain back-pressure between pipeline stages
 func (p *Pipeline) downstreamChCapacity() int {

Line range hint 487-535: Enhance CCC error reporting for better debugging.

The Circuit Capacity Checker (CCC) error handling could provide more detailed information to assist in troubleshooting.

Consider enhancing the error context:

 if err != nil {
-    log.Warn("failed to apply CCC", "txHash", lastTxn.Hash().Hex(), "err", err)
+    log.Warn("Circuit Capacity Check failed",
+        "txHash", lastTxn.Hash().Hex(),
+        "txType", lastTxn.Type(),
+        "isL1Message", lastTxn.IsL1MessageTx(),
+        "error", err)
     resultCh <- &Result{
         OverflowingTx:    lastTxn,
         OverflowingTrace: candidate.LastTrace,
         CCCErr:           err,
         Rows:             lastAccRows,
         FinalBlock:       lastCandidate,
     }
     return
 }
core/stateless/witness.go (3)

31-34: Clarify usage of HeaderReader interface in docs.
The interface is straightforward, but adding a more descriptive docstring (e.g., mentioning how implementations should handle unknown headers) could improve maintainability.
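For example, something along these lines; the method set is a hedged sketch inferred from how the witness walks parent headers later in this review.

// HeaderReader provides read-only access to past block headers. Implementations
// should return nil, rather than panic, when the requested header is unknown, so
// callers can detect missing ancestors and stop extending the witness.
type HeaderReader interface {
    GetHeader(hash common.Hash, number uint64) *types.Header
}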


82-88: Consider hashing code to reduce memory footprint.
Storing full bytecode in w.Codes can become memory-heavy for large code. A hashed or truncated representation might be more efficient if lookups of exact code bytes are not required.


101-114: Allow optional deep copy of the underlying chain reference.
chain is copied by reference, which is typically fine if it's a read-only or concurrency-safe object. However, if the default usage pattern requires isolating chain interactions, consider clarifying or implementing a deeper copy strategy.

trie/zk_trie_test.go (2)

290-293: Add docstring for struct dbs.
A simple docstring explaining that dbs holds references to separate DB instances for parallel or comparative testing would be beneficial.
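For instance (the field names below are assumptions):

// dbs is a hedged sketch: it bundles the two database handles used to build
// the zkTrie and the MPT side by side for equivalence testing.
type dbs struct {
    zkDb  ethdb.Database // backing store for the zkTrie
    mptDb ethdb.Database // backing store for the Merkle Patricia Trie
}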


333-339: Confirm that checkStorageEquality handles deeply nested storage.
Currently, the function compares simple values. For more complex storage structures, consider a deeper validation strategy if needed.

trie/trie.go (7)

320-324: Reexamine the trace insertion logic.
The onInsert call is placed only once for the newly branched node. If you also want to track re-insertions or expansions, you may need to add more calls or expand the logic.


339-343: Consider storing the diff-based node updates for better auditing.
Tracking only the final insertion path might be insufficient in complex scenarios. If historical changes are required, expand onInsert to store old vs. new data.


396-400: onDelete call is helpful, but ensure the child node merges are also traced.
When short node merges occur, you might want to track that event separately so the tracer has a complete picture of transformations.


413-416: Improve comment readability for short node merges.
The in-code comment references "track is deleted as well" with a missing phrase. Reword for better readability (e.g., "track the original short node as deleted as well").

- // The child shortNode is merged into its parent, track is deleted as well.
+ // The child shortNode is merged into its parent; track the original short node as deleted as well.

478-482: Tracer consistency on nested merges.
If multiple merges happen in a chain, ensure that each node removal is properly recorded. You might need a loop or repeated calls in certain scenarios.


618-619: Allow toggling tracer reset behavior.
If partial trace data is needed beyond a single set of operations, consider making t.tracer.reset() optional or delayed.


621-631: Optimize for large access lists.
Returning a map of raw node data via Witness() can become large if many nodes are accessed. Consider a lazy approach or a streaming mechanism for extremely large tries.

eth/api.go (4)

327-338: New ExecutionWitness method structure.
The error handling is precise, and the flow is clear. However, consider caching or reusing the witness data if repeatedly invoked with the same block, to reduce overhead.
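A hedged sketch of such a cache, assuming hashicorp/golang-lru/v2 as the cache dependency; the receiver type is illustrative, and only generateWitness is a name from this review.

// Hedged sketch: memoize generated witnesses by block hash so repeated requests
// for the same block skip re-execution.
var witnessCache, _ = lru.New[common.Hash, *stateless.Witness](64)

func (api *ScrollAPI) cachedExecutionWitness(ctx context.Context, hash common.Hash) (*stateless.Witness, error) {
    if w, ok := witnessCache.Get(hash); ok {
        return w, nil
    }
    w, err := api.generateWitness(ctx, hash) // assumed to match the method reviewed below
    if err != nil {
        return nil, err
    }
    witnessCache.Add(hash, w)
    return w, nil
}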


340-368: generateWitness: Check for unexpected state transitions.
The function effectively builds and processes the block in a separate state, verifying correctness via ValidateState. Make sure that exceptions (e.g., large blocks, invalid transactions) are surfaced. This method imposes extra runtime overhead; consider a caching mechanism if frequently invoked for the same block.


419-429: ToExecutionWitness: Handling large code or state sets.
Implementation is clear. For very large maps, the memory overhead can be high. If memory usage is a concern, consider streaming or chunking the data.


975-994: DiskRoot method: Avoid discarding returned errors.
The code calls rawdb.ReadDiskStateRoot and discards the second return parameter with _. If that method can fail, consider checking the error. Otherwise, this looks solid.
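A hedged sketch of the checked variant, using the ReadDiskStateRoot signature shown elsewhere in this review:

// Hedged sketch: propagate the lookup error instead of discarding it.
diskRoot, err := rawdb.ReadDiskStateRoot(db, headerRoot)
if err != nil {
    return common.Hash{}, fmt.Errorf("failed to read disk state root: %w", err)
}
if diskRoot != (common.Hash{}) {
    headerRoot = diskRoot
}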

miner/scroll_worker.go (1)

493-496: Tracer activation for UseZktrie.
Enabling vmConfig.Debug and assigning vmConfig.Tracer = cccLogger is a clear approach for in-depth logs. Be cautious about the performance overhead in production if this is left enabled inadvertently.

core/state/statedb.go (1)

311-313: GetCodeSize similarly logs code for the witness.
Implementation parallels GetCode. Minimal duplication is acceptable here, but consider factoring out the shared logic if it grows significantly.
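If the duplication grows, the shared part could be factored out roughly like this; the helper name, the witness field, and the AddCode call are assumptions.

// Hedged sketch: one helper both GetCode and GetCodeSize could call.
func (s *StateDB) witnessCode(code []byte) {
    if s.witness != nil && len(code) > 0 {
        s.witness.AddCode(code) // assumed Witness API for recording bytecode
    }
}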

core/stateless/encoding.go (2)

32-34: Consider using pre-allocated slices for better performance.

The current implementation appends to slices in loops. For better performance, consider pre-allocating the slices with the exact capacity needed.

-  ext.Codes = make([][]byte, 0, len(w.Codes))
+  ext.Codes = make([][]byte, len(w.Codes))
+  i := 0
   for code := range w.Codes {
-    ext.Codes = append(ext.Codes, []byte(code))
+    ext.Codes[i] = []byte(code)
+    i++
   }

Also applies to: 36-38


63-69: Improve error handling in DecodeRLP.

The error from fromExtWitness should be wrapped with additional context for better debugging.

 func (w *Witness) DecodeRLP(s *rlp.Stream) error {
     var ext extWitness
     if err := s.Decode(&ext); err != nil {
-        return err
+        return fmt.Errorf("failed to decode witness: %w", err)
     }
-    return w.fromExtWitness(&ext)
+    if err := w.fromExtWitness(&ext); err != nil {
+        return fmt.Errorf("failed to convert witness: %w", err)
+    }
+    return nil
 }
rollup/ccc/async_checker_test.go (1)

28-29: Add cleanup and improve test robustness.

The test should be improved to:

  1. Clean up resources after test completion
  2. Handle potential race conditions in async operations
  3. Add more test cases for edge scenarios
 func TestAsyncChecker(t *testing.T) {
+    defer func() {
+        if chain != nil {
+            chain.Stop()
+        }
+    }()
     
     // ... existing test code ...
     
     time.Sleep(time.Second)
+    
+    // Add test cases for edge scenarios
+    t.Run("concurrent operations", func(t *testing.T) {
+        // Add concurrent operation tests
+    })
+    
+    t.Run("error handling", func(t *testing.T) {
+        // Add error handling tests
+    })
 }

Also applies to: 31-31, 35-35, 39-39, 41-41

core/rawdb/accessors_state.go (1)

104-110: Add input validation and documentation.

The function should validate input parameters and include documentation about the expected behavior when the state root is not found.

+// ReadDiskStateRoot retrieves the disk state root for a given header root.
+// Returns a zero hash and an error if the state root is missing or the stored value is malformed.
 func ReadDiskStateRoot(db ethdb.KeyValueReader, headerRoot common.Hash) (common.Hash, error) {
+    if (headerRoot == common.Hash{}) {
+        return common.Hash{}, errors.New("invalid header root")
+    }
     data, err := db.Get(diskStateRootKey(headerRoot))
     if err != nil {
         return common.Hash{}, err
     }
+    if len(data) != common.HashLength {
+        return common.Hash{}, fmt.Errorf("invalid state root length: got %d, want %d", len(data), common.HashLength)
+    }
     return common.BytesToHash(data), nil
 }
trie/tracer.go (2)

95-106: Consider optimizing map allocations in copy method.

The current implementation allocates new maps with the exact capacity needed. This is a good practice for memory efficiency.

Consider adding a comment explaining why maps.Clone is used for inserts and deletes but not for accessList:

 func (t *tracer) copy() *tracer {
+    // Deep copy accessList as it contains byte slices that need to be copied
     accessList := make(map[string][]byte, len(t.accessList))
     for path, blob := range t.accessList {
         accessList[path] = common.CopyBytes(blob)
     }

108-122: Consider adding error handling for large deletions.

The deletedNodes method could potentially handle a large number of deletions. Consider adding a capacity hint to the slice allocation.

 func (t *tracer) deletedNodes() []string {
-    var paths []string
+    paths := make([]string, 0, len(t.deletes))
     for path := range t.deletes {
rollup/ccc/async_checker.go (1)

101-104: LGTM! Clear handling of unsupported blocks.

The early return for Euclid blocks is well-implemented with a clear comment explaining why these blocks are skipped.

Consider expanding the comment to provide more context:

-    // Euclid blocks use MPT and CCC doesn't support them
+    // Euclid blocks use Merkle Patricia Trie (MPT) for state storage
+    // and are not supported by the Circuit Capacity Checker (CCC)
trie/zk_trie.go (1)

262-265: Add TODO comment for unimplemented method.

The Witness method is currently unimplemented. Add a TODO comment to track this implementation task.

Apply this diff to add a TODO comment:

// Witness returns a set containing all trie nodes that have been accessed.
func (t *ZkTrie) Witness() map[string]struct{} {
+	// TODO: Implement Witness method to track accessed trie nodes
 	panic("not implemented")
}
params/config.go (1)

655-665: Consider using a more efficient deep copy method.

The current implementation of Clone uses JSON marshaling/unmarshaling for deep copying, which could be inefficient for large configurations. Consider using a direct field-by-field copy or a more efficient deep copy library.

-func (c *ChainConfig) Clone() *ChainConfig {
-	var clone ChainConfig
-	j, err := json.Marshal(c)
-	if err != nil {
-		panic(err)
-	}
-	if err = json.Unmarshal(j, &clone); err != nil {
-		panic(err)
-	}
-	return &clone
+func (c *ChainConfig) Clone() *ChainConfig {
+	clone := *c // Copy the basic fields
+	
+	// Deep copy pointer fields
+	if c.ChainID != nil {
+		chainID := new(big.Int).Set(c.ChainID)
+		clone.ChainID = chainID
+	}
+	// ... similar deep copies for other pointer fields
+	
+	return &clone
}
🛑 Comments failed to post (13)
core/stateless/witness.go (2)

116-122: 🛠️ Refactor suggestion

Handle empty Headers slice to avoid panic.
Accessing w.Headers[0] will panic if Headers is empty. Although this behavior is documented, failing gracefully up front makes a bad witness easier to diagnose.

 func (w *Witness) Root() common.Hash {
+    if len(w.Headers) == 0 {
+       // Return zero hash or a descriptive error
+       return common.Hash{}
+    }
     return w.Headers[0].Root
 }
📝 Committable suggestion


// Root returns the pre-state root from the first header.
//
// Note, this method will panic in case of a bad witness (but RLP decoding will
// sanitize it and fail before that).
func (w *Witness) Root() common.Hash {
	if len(w.Headers) == 0 {
		// Return zero hash or a descriptive error
		return common.Hash{}
	}
	return w.Headers[0].Root
}

71-80: ⚠️ Potential issue

Add nil-check after retrieving the parent header.
In this loop, you append w.chain.GetHeader(...) directly to w.Headers. If the fetched header is nil, it can propagate a panic upon future use.

 for int(w.context.Number.Uint64()-number) > len(w.Headers) {
     tail := w.Headers[len(w.Headers)-1]
+    parentHeader := w.chain.GetHeader(tail.ParentHash, tail.Number.Uint64()-1)
+    if parentHeader == nil {
+        // Return or handle error to avoid appending nil
+        return
+    }
-    w.Headers = append(w.Headers, w.chain.GetHeader(tail.ParentHash, tail.Number.Uint64()-1))
+    w.Headers = append(w.Headers, parentHeader)
 }
📝 Committable suggestion


// AddBlockHash adds a "blockhash" to the witness with the designated offset from
// chain head. Under the hood, this method actually pulls in enough headers from
// the chain to cover the block being added.
func (w *Witness) AddBlockHash(number uint64) {
	// Keep pulling in headers until this hash is populated
	for int(w.context.Number.Uint64()-number) > len(w.Headers) {
		tail := w.Headers[len(w.Headers)-1]
		parentHeader := w.chain.GetHeader(tail.ParentHash, tail.Number.Uint64()-1)
		if parentHeader == nil {
			// Return or handle error to avoid appending nil
			return
		}
		w.Headers = append(w.Headers, parentHeader)
	}
}
trie/zk_trie_test.go (3)

297-317: ⚠️ Potential issue

Check concurrency concerns in checkTrieEquality.
If multiple tests run in parallel, the global accountsLeft usage can conflict. Make accountsLeft an argument or local variable for safer concurrency.


295-295: 🛠️ Refactor suggestion

Use local or function-level variable instead of global accountsLeft.
Global variables can cause concurrency issues in parallel tests. Consider making this variable local or embedding it in a struct.

-var accountsLeft = -1
+func TestEquivalence(t *testing.T) {
+    accountsLeft := -1

Committable suggestion skipped: line range outside the PR's diff.


272-288: ⚠️ Potential issue

Avoid skipping the entire TestEquivalence or remove it if incomplete.
This test is skipped unconditionally. If it’s still in development, consider leaving a TODO comment with instructions or removing it until it's ready. Hardcoded file paths are also present, which can break in CI or on other machines.

 func TestEquivalence(t *testing.T) {
-    t.Skip()
+    // t.Skip("Unskip once local path references are updated or made parameterizable")
...
-    zkDb, err := leveldb.New("/Users/omer/Documents/go-ethereum/l2geth-datadir/geth/chaindata", 0, 0, "", true)
+    zkDb, err := leveldb.New("relative/or/configured/path/chaindata", 0, 0, "", true)
...

Committable suggestion skipped: line range outside the PR's diff.

trie/trie.go (1)

536-540: ⚠️ Potential issue

Handle db.Node errors more comprehensively.
You log an error if reading the node fails, but also call t.tracer.onRead(prefix, rlp). Ensure partial read or decoding errors are accounted for in onRead.

eth/api.go (1)

369-398: ⚠️ Potential issue

testWitness method: Potential confusion in error context.
When comparing statedb.GetRootHash() to postStateRoot, the fallback returns an error that wraps err, but err is not reassigned in that final block. This can cause confusion if the outer err is nil at that point. Suggest explicitly returning a new error message without wrapping a possibly nil error.

- return fmt.Errorf("failed to commit statelessly %d: %w", block.Number(), err)
+ return fmt.Errorf("failed to commit statelessly for block %d. Root mismatch detected", block.Number())
📝 Committable suggestion


func testWitness(blockchain *core.BlockChain, block *types.Block, witness *stateless.Witness) error {
	stateRoot := witness.Root()
	if diskRoot, _ := rawdb.ReadDiskStateRoot(blockchain.Database(), stateRoot); diskRoot != (common.Hash{}) {
		stateRoot = diskRoot
	}

	// Create and populate the state database to serve as the stateless backend
	statedb, err := state.New(stateRoot, state.NewDatabase(witness.MakeHashDB()), nil)
	if err != nil {
		return fmt.Errorf("failed to create state database: %w", err)
	}

	receipts, _, usedGas, err := blockchain.Processor().Process(block, statedb, *blockchain.GetVMConfig())
	if err != nil {
		return fmt.Errorf("failed to process block %d: %w", block.Number(), err)
	}

	if err := blockchain.Validator().ValidateState(block, statedb, receipts, usedGas); err != nil {
		return fmt.Errorf("failed to validate block %d: %w", block.Number(), err)
	}

	postStateRoot := block.Root()
	if diskRoot, _ := rawdb.ReadDiskStateRoot(blockchain.Database(), postStateRoot); diskRoot != (common.Hash{}) {
		postStateRoot = diskRoot
	}
	if statedb.GetRootHash() != postStateRoot {
		return fmt.Errorf("failed to commit statelessly for block %d. Root mismatch detected", block.Number())
	}
	return nil
}
core/stateless/encoding.go (1)

43-54: 🛠️ Refactor suggestion

Add validation for nil fields and empty data.

The fromExtWitness method should validate input data before processing. Consider adding checks for nil pointers and empty data.

 func (w *Witness) fromExtWitness(ext *extWitness) error {
+    if ext == nil {
+        return errors.New("nil witness")
+    }
     w.Headers = ext.Headers
 
     w.Codes = make(map[string]struct{}, len(ext.Codes))
     for _, code := range ext.Codes {
+        if len(code) == 0 {
+            return errors.New("empty code")
+        }
         w.Codes[string(code)] = struct{}{}
     }
     w.State = make(map[string]struct{}, len(ext.State))
     for _, node := range ext.State {
+        if len(node) == 0 {
+            return errors.New("empty state node")
+        }
         w.State[string(node)] = struct{}{}
     }
     return nil
 }
📝 Committable suggestion


func (w *Witness) fromExtWitness(ext *extWitness) error {
	if ext == nil {
		return errors.New("nil witness")
	}
	w.Headers = ext.Headers

	w.Codes = make(map[string]struct{}, len(ext.Codes))
	for _, code := range ext.Codes {
		if len(code) == 0 {
			return errors.New("empty code")
		}
		w.Codes[string(code)] = struct{}{}
	}
	w.State = make(map[string]struct{}, len(ext.State))
	for _, node := range ext.State {
		if len(node) == 0 {
			return errors.New("empty state node")
		}
		w.State[string(node)] = struct{}{}
	}
	return nil
}
core/stateless/database.go (1)

36-67: 🛠️ Refactor suggestion

Optimize memory usage and add error handling.

The current implementation could be improved in several ways:

  1. The hasher is reused but could be moved to a pool for concurrent usage
  2. Missing error handling for database operations
  3. No batch operations for better performance
 func (w *Witness) MakeHashDB() ethdb.Database {
     var (
         memdb  = rawdb.NewMemoryDatabase()
         hasher = crypto.NewKeccakState()
         hash   = make([]byte, 32)
+        batch  = memdb.NewBatch()
     )
+    defer hasher.Reset()
 
     // Inject all the "block hashes" (i.e. headers) into the ephemeral database
     for _, header := range w.Headers {
-        rawdb.WriteHeader(memdb, header)
+        rawdb.WriteHeader(batch, header)
     }
 
     // Inject all the bytecodes into the ephemeral database
     for code := range w.Codes {
         blob := []byte(code)
 
         hasher.Reset()
         hasher.Write(blob)
         hasher.Read(hash)
 
-        rawdb.WriteCode(memdb, common.BytesToHash(hash), blob)
+        rawdb.WriteCode(batch, common.BytesToHash(hash), blob)
     }
 
     // Inject all the MPT trie nodes into the ephemeral database
     for node := range w.State {
         blob := []byte(node)
 
         hasher.Reset()
         hasher.Write(blob)
         hasher.Read(hash)
 
-        rawdb.WriteTrieNode(memdb, common.BytesToHash(hash), blob)
+        rawdb.WriteTrieNode(batch, common.BytesToHash(hash), blob)
     }
+
+    if err := batch.Write(); err != nil {
+        log.Crit("Failed to write witness data to database", "err", err)
+    }
     return memdb
 }

Committable suggestion skipped: line range outside the PR's diff.

core/rawdb/accessors_state.go (1)

98-102: 🛠️ Refactor suggestion

Consider using batch operations and improve error handling.

The current implementation uses critical logging for errors, which might not be appropriate for all scenarios. Consider using batch operations for better performance and returning errors instead of logging them critically.

-func WriteDiskStateRoot(db ethdb.KeyValueWriter, headerRoot, diskRoot common.Hash) {
+func WriteDiskStateRoot(db ethdb.KeyValueWriter, headerRoot, diskRoot common.Hash) error {
     if err := db.Put(diskStateRootKey(headerRoot), diskRoot.Bytes()); err != nil {
-        log.Crit("Failed to store disk state root", "err", err)
+        return fmt.Errorf("failed to store disk state root: %w", err)
     }
+    return nil
 }
📝 Committable suggestion


func WriteDiskStateRoot(db ethdb.KeyValueWriter, headerRoot, diskRoot common.Hash) error {
	if err := db.Put(diskStateRootKey(headerRoot), diskRoot.Bytes()); err != nil {
		return fmt.Errorf("failed to store disk state root: %w", err)
	}
	return nil
}
light/trie.go (1)

184-188: 💡 Codebase verification

⚠️ Potential issue

Critical: Witness() method requires proper implementation

The Witness() method is part of the core Trie interface contract and is actively used in state management operations. The current panic implementation in odrTrie will cause runtime failures when:

  • Collecting state witnesses in StateDB
  • Processing state operations that track accessed trie nodes

Evidence of usage:

  • Called in core/state/statedb.go for witness collection
  • Required by the Trie interface in core/state/database.go
  • Other implementations (e.g., SecureTrie) properly implement this method
🔗 Analysis chain

Replace panic with proper error handling.

The Witness method's implementation using panic is not production-ready and could cause runtime issues.

Consider implementing proper error handling:

 func (t *odrTrie) Witness() map[string]struct{} {
-    panic("not implemented")
+    // TODO: Implement witness functionality for odrTrie
+    return nil
 }

Let's check if this method is called anywhere in the codebase:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for any calls to Witness() on odrTrie
rg -l "odrTrie.*\.Witness\(\)"

Length of output: 32


Script:

#!/bin/bash
# Search for interface definitions containing Witness method
echo "=== Interfaces with Witness method ==="
ast-grep --pattern 'type $_ interface {
  $$$
  Witness() map[string]struct{}
  $$$
}'

# Search for any usage of odrTrie type
echo -e "\n=== odrTrie type usage ==="
rg "odrTrie" -A 2 -B 2

# Search for any Witness() calls
echo -e "\n=== Any Witness() calls ==="
rg "\.Witness\(\)" -A 2 -B 2

Length of output: 8437

trie/zk_trie.go (2)

237-243: 🛠️ Refactor suggestion

Consider using error handling instead of panic.

The CountLeaves method currently panics if it cannot get the root. Consider returning an error instead to allow better error handling by the caller.

Apply this diff to improve error handling:

-func (t *ZkTrie) CountLeaves() uint64 {
+func (t *ZkTrie) CountLeaves() (uint64, error) {
 	root, err := t.ZkTrie.Tree().Root()
 	if err != nil {
-		panic("CountLeaves cannot get root")
+		return 0, fmt.Errorf("failed to get root: %w", err)
 	}
-	return t.countLeaves(root)
+	return t.countLeaves(root), nil
}
📝 Committable suggestion


func (t *ZkTrie) CountLeaves() (uint64, error) {
	root, err := t.ZkTrie.Tree().Root()
	if err != nil {
		return 0, fmt.Errorf("failed to get root: %w", err)
	}
	return t.countLeaves(root), nil
}

245-260: 🛠️ Refactor suggestion

Improve error handling in recursive method.

Similar to CountLeaves, the countLeaves method should handle errors gracefully instead of panicking.

Apply this diff to improve error handling:

-func (t *ZkTrie) countLeaves(root *zkt.Hash) uint64 {
+func (t *ZkTrie) countLeaves(root *zkt.Hash) (uint64, error) {
 	if root == nil {
-		return 0
+		return 0, nil
 	}
 
 	rootNode, err := t.ZkTrie.Tree().GetNode(root)
 	if err != nil {
-		panic("countLeaves cannot get rootNode")
+		return 0, fmt.Errorf("failed to get node: %w", err)
 	}
 
 	if rootNode.Type == zktrie.NodeTypeLeaf_New {
-		return 1
+		return 1, nil
 	} else {
-		return t.countLeaves(rootNode.ChildL) + t.countLeaves(rootNode.ChildR)
+		leftCount, err := t.countLeaves(rootNode.ChildL)
+		if err != nil {
+			return 0, err
+		}
+		rightCount, err := t.countLeaves(rootNode.ChildR)
+		if err != nil {
+			return 0, err
+		}
+		return leftCount + rightCount, nil
 	}
}
📝 Committable suggestion


func (t *ZkTrie) countLeaves(root *zkt.Hash) (uint64, error) {
	if root == nil {
		return 0, nil
	}

	rootNode, err := t.ZkTrie.Tree().GetNode(root)
	if err != nil {
		return 0, fmt.Errorf("failed to get node: %w", err)
	}

	if rootNode.Type == zktrie.NodeTypeLeaf_New {
		return 1, nil
	} else {
		leftCount, err := t.countLeaves(rootNode.ChildL)
		if err != nil {
			return 0, err
		}
		rightCount, err := t.countLeaves(rootNode.ChildR)
		if err != nil {
			return 0, err
		}
		return leftCount + rightCount, nil
	}
}

Base automatically changed from omerfirmak/mpt to develop February 5, 2025 13:57
…-use-code-from-l1-follower

 Conflicts:
	eth/api.go
	trie/zk_trie.go

@coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
cmd/utils/flags.go (1)

1642-1654: LGTM! Consider adding documentation.

The implementation correctly sets up the DA syncing configuration and API endpoints. The endpoints are set independently of the DA sync enabled flag, which provides good flexibility.

Consider adding documentation to explain:

  • The purpose of each API endpoint
  • The relationship between these endpoints and the DA syncing feature
  • Any requirements or constraints for the endpoint URLs

Example:

 func setDA(ctx *cli.Context, cfg *ethconfig.Config) {
+	// Configure Data Availability (DA) syncing and related API endpoints.
+	// The API endpoints are used to retrieve blob data from L1 and can be set
+	// independently of whether DA syncing is enabled.
 	if ctx.IsSet(DASyncEnabledFlag.Name) {
 		cfg.EnableDASyncing = ctx.Bool(DASyncEnabledFlag.Name)
 	}
+	// BlobScan API endpoint for retrieving blob data
 	if ctx.IsSet(DABlobScanAPIEndpointFlag.Name) {
 		cfg.DA.BlobScanAPIEndpoint = ctx.String(DABlobScanAPIEndpointFlag.Name)
 	}
+	// BlockNative API endpoint for retrieving block data
 	if ctx.IsSet(DABlockNativeAPIEndpointFlag.Name) {
 		cfg.DA.BlockNativeAPIEndpoint = ctx.String(DABlockNativeAPIEndpointFlag.Name)
 	}
+	// Beacon node API endpoint for retrieving beacon chain data
 	if ctx.IsSet(DABeaconNodeAPIEndpointFlag.Name) {
 		cfg.DA.BeaconNodeAPIEndpoint = ctx.String(DABeaconNodeAPIEndpointFlag.Name)
 	}
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 94c0ad5 and ab3bedf.

📒 Files selected for processing (1)
  • cmd/utils/flags.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test

colinlyguo previously approved these changes Feb 6, 2025
@Thegaram merged commit b56dd9f into develop Feb 7, 2025
9 checks passed
@Thegaram deleted the jt/rollup-verifier-use-code-from-l1-follower branch February 7, 2025 10:43