refactor(rollup sync service): use CalldataBlobSource to retrieve data from L1 #1103
Conversation
…retrieve data from L1
Walkthrough

This change set refactors and extends multiple components related to data availability (DA) and rollup synchronization. Key modifications include updating flag configuration logic, enhancing DA struct definitions and methods, and migrating from field-based to event-based data access within commit, finalize, and revert operations. Several functions and tests have been updated accordingly, while obsolete files and test data have been removed. The overall control flow in the RollupSyncService is modified to improve error handling and batch metadata processing.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant RS as RollupSyncService
    participant CDS as CalldataBlobSource
    participant DB as Database
    participant DA as DA Syncer Config
    RS->>+CDS: Initialize with DA config
    CDS-->>RS: Blob source ready
    RS->>RS: Call fetchRollupEvents() → updateRollupEvents()
    RS->>+DB: getCommittedBatchMeta(commitedBatch)
    DB-->>RS: Return batch metadata
    RS->>RS: Validate batch using new event fields
```
6e06325 to da81a2e (Compare)
nice work and LGTM.
…-use-code-from-l1-follower
Actionable comments posted: 2
🧹 Nitpick comments (7)
rollup/l1/types.go (1)
24-60: Add blob-specific testing capabilities to support the refactoring.

Since this PR refactors the rollup sync service to use CalldataBlobSource, the mock client should include capabilities for testing blob-related scenarios. Consider adding:
- Methods to simulate blob data retrieval
- Configuration options for different blob formats
- Test helpers for verifying blob-related behavior

Example additions:

```go
type MockNopClient struct {
	// ... existing fields ...
	blobData    []byte
	blobVersion uint
}

// SetBlobData configures mock blob data
func (m *MockNopClient) SetBlobData(data []byte, version uint) {
	m.blobData = data
	m.blobVersion = version
}

// Add test helper methods for blob verification
func (m *MockNopClient) VerifyBlobAccess() bool {
	return m.blobData != nil
}
```

rollup/rollup_sync_service/rollup_sync_service.go (1)
357-358: Fix typo in parameter name commitedBatch.

The parameter name commitedBatch in the getCommittedBatchMeta function has a typo. It should be committedBatch to maintain consistency and avoid confusion.

Apply this diff to correct the typo:

```diff
-func (s *RollupSyncService) getCommittedBatchMeta(commitedBatch da.EntryWithBlocks) (*rawdb.CommittedBatchMeta, error) {
+func (s *RollupSyncService) getCommittedBatchMeta(committedBatch da.EntryWithBlocks) (*rawdb.CommittedBatchMeta, error) {
```

rollup/rollup_sync_service/rollup_sync_service_test.go (1)
164-182: Avoid panics in unimplemented mock methods.

In the mockEntryWithBlocks struct, the unimplemented methods panic with "implement me". To improve test safety and prevent accidental panics, consider returning default zero values or implementing minimal functionality.

For example, modify the Type method to return a default value:

```diff
-func (m mockEntryWithBlocks) Type() da.Type {
-	panic("implement me")
-}
+func (m mockEntryWithBlocks) Type() da.Type {
+	return da.UnknownType
+}
```

Apply similar changes to other methods as needed.
rollup/da_syncer/da/calldata_blob_source.go (1)
113-128: Reduce code duplication in type assertions.

The type assertion pattern is repeated for each event type. Consider extracting it into a helper function to improve maintainability.

```diff
+func assertEventType[T l1.RollupEvent](event l1.RollupEvent) (T, error) {
+	typed, ok := event.(T)
+	if !ok {
+		var zero T
+		return zero, fmt.Errorf("unexpected type of rollup event: %T", event)
+	}
+	return typed, nil
+}

 case l1.RevertEventType:
-	revertEvent, ok := rollupEvent.(*l1.RevertBatchEvent)
-	if !ok {
-		return nil, fmt.Errorf("unexpected type of rollup event: %T", rollupEvent)
-	}
+	revertEvent, err := assertEventType[*l1.RevertBatchEvent](rollupEvent)
+	if err != nil {
+		return nil, err
+	}
 	entry = NewRevertBatch(revertEvent)
 case l1.FinalizeEventType:
-	finalizeEvent, ok := rollupEvent.(*l1.FinalizeBatchEvent)
-	if !ok {
-		return nil, fmt.Errorf("unexpected type of rollup event: %T", rollupEvent)
-	}
+	finalizeEvent, err := assertEventType[*l1.FinalizeBatchEvent](rollupEvent)
+	if err != nil {
+		return nil, err
+	}
 	entry = NewFinalizeBatch(finalizeEvent)
```

rollup/da_syncer/da/commitV0.go (1)
74-84: Document the BlobVersionedHashes implementation.

Please add a comment explaining why BlobVersionedHashes returns nil for V0 commits. This helps clarify the expected behavior for this version.

```diff
+// BlobVersionedHashes returns nil as V0 commits do not support blob data.
 func (c *CommitBatchDAV0) BlobVersionedHashes() []common.Hash {
 	return nil
 }
```
core/rawdb/accessors_rollup_event.go (1)
22-22: Consider documenting the versioned hash format.

While the comment indicates this field is unused and kept for compatibility, it would be helpful to document the expected format of the versioned hashes for future reference.
rollup/da_syncer/da/commitV1.go (1)
38-48: Consider enhancing error handling for versioned hashes validation.

While the validation is correct, consider providing more context in the error message about the expected behavior.

```diff
-	return nil, fmt.Errorf("unexpected number of versioned hashes: %d", len(versionedHashes))
+	return nil, fmt.Errorf("expected exactly one versioned hash for batch %d, but got %d", commitEvent.BatchIndex().Uint64(), len(versionedHashes))
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (21)
- cmd/utils/flags.go (1 hunks)
- core/rawdb/accessors_rollup_event.go (1 hunks)
- eth/backend.go (1 hunks)
- rollup/da_syncer/da/calldata_blob_source.go (2 hunks)
- rollup/da_syncer/da/commitV0.go (5 hunks)
- rollup/da_syncer/da/commitV1.go (3 hunks)
- rollup/da_syncer/da/da.go (2 hunks)
- rollup/da_syncer/da/finalize.go (2 hunks)
- rollup/da_syncer/da/revert.go (2 hunks)
- rollup/l1/abi.go (1 hunks)
- rollup/l1/reader.go (1 hunks)
- rollup/l1/types.go (1 hunks)
- rollup/rollup_sync_service/abi.go (0 hunks)
- rollup/rollup_sync_service/abi_test.go (0 hunks)
- rollup/rollup_sync_service/l1client.go (0 hunks)
- rollup/rollup_sync_service/l1client_test.go (0 hunks)
- rollup/rollup_sync_service/rollup_sync_service.go (10 hunks)
- rollup/rollup_sync_service/rollup_sync_service_test.go (9 hunks)
- rollup/rollup_sync_service/testdata/commitBatchWithBlobProof_input_codecv3.json (0 hunks)
- rollup/rollup_sync_service/testdata/commitBatch_input_codecv1.json (0 hunks)
- rollup/rollup_sync_service/testdata/commitBatch_input_codecv2.json (0 hunks)
💤 Files with no reviewable changes (7)
- rollup/rollup_sync_service/l1client_test.go
- rollup/rollup_sync_service/testdata/commitBatchWithBlobProof_input_codecv3.json
- rollup/rollup_sync_service/abi_test.go
- rollup/rollup_sync_service/abi.go
- rollup/rollup_sync_service/testdata/commitBatch_input_codecv2.json
- rollup/rollup_sync_service/l1client.go
- rollup/rollup_sync_service/testdata/commitBatch_input_codecv1.json
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test
🔇 Additional comments (17)
rollup/l1/types.go (1)
24-24: LGTM! Clear and conventional mock struct naming.

The empty struct follows Go conventions for mock implementations.
rollup/da_syncer/da/finalize.go (1)
7-9: LGTM!

The refactoring simplifies the FinalizeBatch struct by encapsulating the event data, enhancing code readability and maintainability.

rollup/da_syncer/da/calldata_blob_source.go (1)
Line range hint 138-166: LGTM! Error handling is thorough and consistent.

The error handling provides good context by including batch indices and transaction hashes in error messages.
rollup/da_syncer/da/da.go (1)
32-32: LGTM! Interface changes align with blob data support.

The new methods enhance the interfaces to support blob data retrieval while maintaining backward compatibility:
- Event() provides access to the underlying rollup event
- Version(), Chunks(), and BlobVersionedHashes() support blob data handling (see the sketch below)

Also applies to: 38-40
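For orientation, here is a hedged sketch of what these interface additions might look like. Only the method names come from this review; the package name, import paths, the Type placeholder, and all signatures are assumptions — the authoritative definitions live in rollup/da_syncer/da/da.go.

```go
package sketch

import (
	"github.com/scroll-tech/da-codec/encoding"
	"github.com/scroll-tech/go-ethereum/common"
	"github.com/scroll-tech/go-ethereum/rollup/l1"
)

// Type is a placeholder for the real da.Type enum.
type Type int

// Entry sketches the event-centric accessors named above.
type Entry interface {
	Type() Type
	Event() l1.RollupEvent // underlying L1 rollup event
}

// EntryWithBlocks sketches the blob-aware extension of Entry.
type EntryWithBlocks interface {
	Entry
	Version() encoding.CodecVersion     // batch codec version
	Chunks() []*encoding.Chunk          // assumed chunk type
	BlobVersionedHashes() []common.Hash // hashes of blobs referenced by the commit tx
}
```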
rollup/da_syncer/da/revert.go (1)
Line range hint 7-29: LGTM! RevertBatch refactor is clean and consistent.

The struct has been simplified to use the event-centric approach, maintaining consistency with other event types.
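As a concrete illustration of the event-centric pattern, a wrapper of roughly this shape is plausible; the actual fields and helpers in rollup/da_syncer/da/revert.go may differ, and the BatchIndex() accessor returning *big.Int is an assumption.

```go
package sketch

import "github.com/scroll-tech/go-ethereum/rollup/l1"

// RevertBatch keeps only the L1 event and derives everything else from it.
type RevertBatch struct {
	event *l1.RevertBatchEvent
}

func NewRevertBatch(event *l1.RevertBatchEvent) *RevertBatch {
	return &RevertBatch{event: event}
}

// Event exposes the wrapped rollup event, mirroring the Event() accessor above.
func (r *RevertBatch) Event() l1.RollupEvent {
	return r.event
}

// BatchIndex is derived from the event rather than stored as a separate field.
func (r *RevertBatch) BatchIndex() uint64 {
	return r.event.BatchIndex().Uint64()
}
```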
rollup/da_syncer/da/commitV0.go (2)
18-25: LGTM! Struct changes improve type safety.

The changes enhance type safety with encoding.CodecVersion and maintain consistency with the event-centric approach.
Line range hint 141-142: Address TODOs for block metadata.

The method contains TODOs for difficulty and extra data values. These should be replaced with proper implementations.
Would you like me to help implement proper difficulty and extra data handling or create an issue to track this task?
core/rawdb/accessors_rollup_event.go (1)
21-22: LGTM! Field formatting is consistent with codebase style.

The Version field formatting maintains consistency with the codebase style.

rollup/l1/reader.go (2)
142-142: LGTM! Direct return of transaction data.

Simple and efficient implementation.

146-158: Improved error handling and support for multiple blob hashes.

The changes enhance error reporting and support retrieving multiple blob hashes, which aligns with the PR's objective of supporting the future blob data format. The error messages are descriptive and include relevant context.
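For illustration only, retrieving the versioned hashes of a commit transaction could look roughly like this. This is not the reader.go implementation; it assumes the upstream go-ethereum ethclient and the BlobHashes() accessor available on blob transactions in recent releases.

```go
package l1util

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

// commitBlobHashes fetches a commit transaction and returns its blob versioned hashes.
// For calldata-only commits the returned slice is empty, so callers can fall back to calldata.
func commitBlobHashes(ctx context.Context, client *ethclient.Client, txHash common.Hash) ([]common.Hash, error) {
	tx, _, err := client.TransactionByHash(ctx, txHash)
	if err != nil {
		return nil, fmt.Errorf("failed to fetch commit transaction %s: %w", txHash.Hex(), err)
	}
	return tx.BlobHashes(), nil
}
```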
eth/backend.go (1)
248-248
: LGTM! Added DA config to RollupSyncService initialization.The change properly integrates the data availability configuration into the rollup sync service initialization.
rollup/l1/abi.go (1)
161-179: LGTM! Well-structured constructor for FinalizeBatchEvent.

The constructor provides a clean and type-safe way to create FinalizeBatchEvent instances. Parameter ordering and naming are consistent with the struct fields.

cmd/utils/flags.go (1)
1632-1640: LGTM! Clean implementation of DA configuration flags.

The implementation follows the established pattern for flag handling and properly validates flag existence before accessing values.
rollup/da_syncer/da/commitV1.go (4)
20-21: LGTM! Well-structured field addition.

The versionedHashes field is appropriately typed and positioned in the struct.
81-89: LGTM! Clean constructor implementation.

The constructor properly initializes both v0 and v1 fields while maintaining backward compatibility.
95-98: LGTM! Clean getter implementation.

The BlobVersionedHashes method follows Go conventions for getters.
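Taken together, the three comments above suggest a V1 commit type shaped roughly like the sketch below. The embedded V0 stand-in, field names, and constructor signature are assumptions rather than the actual commitV1.go code; the single-hash validation mirrors the error discussed earlier in this review.

```go
package sketch

import (
	"fmt"

	"github.com/scroll-tech/go-ethereum/common"
)

// CommitBatchDAV0 stands in for the existing V0 implementation in this sketch.
type CommitBatchDAV0 struct{}

// BlobVersionedHashes returns nil because V0 commits carry no blob data.
func (c *CommitBatchDAV0) BlobVersionedHashes() []common.Hash { return nil }

// CommitBatchDAV1 reuses the V0 behaviour and adds the blob's versioned hash.
type CommitBatchDAV1 struct {
	CommitBatchDAV0
	versionedHashes []common.Hash
}

// NewCommitBatchDAV1 expects exactly one versioned hash per commit transaction.
func NewCommitBatchDAV1(v0 CommitBatchDAV0, versionedHashes []common.Hash) (*CommitBatchDAV1, error) {
	if len(versionedHashes) != 1 {
		return nil, fmt.Errorf("unexpected number of versioned hashes: %d", len(versionedHashes))
	}
	return &CommitBatchDAV1{CommitBatchDAV0: v0, versionedHashes: versionedHashes}, nil
}

// BlobVersionedHashes overrides the V0 getter.
func (c *CommitBatchDAV1) BlobVersionedHashes() []common.Hash { return c.versionedHashes }
```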
Line range hint 1-98: Verify the impact on batch processing performance.

The implementation looks solid and well-structured. However, since this changes how batch data is processed, it would be good to verify:
- Performance impact of fetching and validating versioned hashes
- Memory usage with the new blob handling
- Backward compatibility with existing batches
Run the following script to check for potential performance impact:
✅ Verification successful
Implementation follows performance best practices
The code follows an efficient implementation pattern:
- Uses standard Go memory management without excessive allocations
- Performs only necessary cryptographic validations
- Maintains backward compatibility through clean inheritance
The added overhead from versioned hash validation is an essential security requirement that cannot be optimized further.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check for potential performance bottlenecks in batch processing

# Test 1: Search for batch processing metrics
rg -A 2 "batch.*process.*time|batch.*duration"

# Test 2: Search for memory allocation patterns in blob handling
rg -A 2 "make.*\[\].*blob|new.*blob|alloc.*blob"

# Test 3: Check for backward compatibility handling
ast-grep --pattern 'if $version == "v0" || $version == "V0" { $$$ }'
```

Length of output: 10495
…erifier-use-code-from-l1-follower
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.
Actionable comments posted: 13
🧹 Nitpick comments (31)
core/state/snapshot/generate.go (1)
621-622: LGTM! Consider extracting the account struct definition.

The RLP tag changes are consistent with the other files. However, consider extracting this anonymous struct definition to a named type to improve code reusability and maintain the RLP tags in a single location.

```diff
+// snapshotAccount represents the Ethereum account structure used in snapshots
+type snapshotAccount struct {
+	Nonce            uint64
+	Balance          *big.Int
+	Root             common.Hash
+	KeccakCodeHash   []byte
+	PoseidonCodeHash []byte `rlp:"-"`
+	CodeSize         uint64 `rlp:"-"`
+}
+
 func (dl *diskLayer) generate(stats *generatorStats) {
 	// ...
-	var acc struct {
-		Nonce            uint64
-		Balance          *big.Int
-		Root             common.Hash
-		KeccakCodeHash   []byte
-		PoseidonCodeHash []byte `rlp:"-"`
-		CodeSize         uint64 `rlp:"-"`
-	}
+	var acc snapshotAccount
 	// ...
 }
```

rollup/pipeline/pipeline.go (3)
Line range hint 231-378: Consider enhancing error handling for L1 message transactions.

The error handling for L1 message transactions could be more explicit to aid in debugging issues during the transition to blob-based data retrieval.

Consider adding more context to error messages:

```diff
 if p.txs.Len() == 0 && tx.IsL1MessageTx() && err != nil {
-	// L1 message errored as the first txn, skip
+	// L1 message errored as the first transaction, skip and increment index
+	log.Debug("Skipping failed L1 message transaction",
+		"index", tx.AsL1MessageTx().QueueIndex,
+		"error", err)
 	p.nextL1MsgIndex = tx.AsL1MessageTx().QueueIndex + 1
 }
```
Line range hint 379-486: Document the channel buffer sizing strategy.

The downstreamChCapacity function's implementation could benefit from documentation explaining the rationale behind the buffer sizing strategy.

Consider adding more detailed documentation:

```diff
 // downstreamChCapacity returns the channel capacity that should be used for downstream channels.
-// It aims to minimize stalls caused by different computational costs of different transactions
+// It aims to minimize stalls caused by different computational costs of different transactions.
+// The capacity is set to either 1 or MaxTxPerBlock (if configured) to:
+// 1. Ensure consistent memory usage
+// 2. Prevent excessive buffering that could lead to increased latency
+// 3. Maintain back-pressure between pipeline stages
 func (p *Pipeline) downstreamChCapacity() int {
```
Line range hint 487-535: Enhance CCC error reporting for better debugging.

The Circuit Capacity Checker (CCC) error handling could provide more detailed information to assist in troubleshooting.

Consider enhancing the error context:

```diff
 if err != nil {
-	log.Warn("failed to apply CCC", "txHash", lastTxn.Hash().Hex(), "err", err)
+	log.Warn("Circuit Capacity Check failed",
+		"txHash", lastTxn.Hash().Hex(),
+		"txType", lastTxn.Type(),
+		"isL1Message", lastTxn.IsL1MessageTx(),
+		"error", err)
 	resultCh <- &Result{
 		OverflowingTx:    lastTxn,
 		OverflowingTrace: candidate.LastTrace,
 		CCCErr:           err,
 		Rows:             lastAccRows,
 		FinalBlock:       lastCandidate,
 	}
 	return
 }
```

core/stateless/witness.go (3)
31-34: Clarify usage of the HeaderReader interface in docs.

The interface is straightforward, but adding a more descriptive docstring (e.g., mentioning how implementations should handle unknown headers) could improve maintainability.
82-88: Consider hashing code to reduce memory footprint.

Storing full bytecode in w.Codes can become memory-heavy for large code. A hashed or truncated representation might be more efficient if lookups of exact code bytes are not required; a sketch of the hash-keyed variant follows this comment.
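A minimal, self-contained sketch of the hash-keyed alternative suggested above; the real Witness type lives in core/stateless/witness.go and its field names may differ.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// codeSet keeps only the Keccak hash of each contract's bytecode, so the witness
// does not hold full code blobs in memory.
type codeSet map[common.Hash]struct{}

func (s codeSet) add(code []byte) {
	s[crypto.Keccak256Hash(code)] = struct{}{}
}

func (s codeSet) contains(code []byte) bool {
	_, ok := s[crypto.Keccak256Hash(code)]
	return ok
}

func main() {
	s := make(codeSet)
	s.add([]byte{0x60, 0x00})                   // PUSH1 0x00
	fmt.Println(s.contains([]byte{0x60, 0x00})) // true
}
```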
101-114: Allow optional deep copy of the underlying chain reference.

chain is copied by reference, which is typically fine if it's a read-only or concurrency-safe object. However, if the default usage pattern requires isolating chain interactions, consider clarifying or implementing a deeper copy strategy.

trie/zk_trie_test.go (2)
290-293: Add docstring for struct dbs.

A simple docstring explaining that dbs holds references to separate DB instances for parallel or comparative testing would be beneficial.
333-339: Confirm that checkStorageEquality handles deeply nested storage.

Currently, the function compares simple values. For more complex storage structures, consider a deeper validation strategy if needed.

trie/trie.go (7)
320-324
: Reexamine the trace insertion logic.
TheonInsert
call is placed only once for the newly branched node. If you also want to track re-insertions or expansions, you may need to add more calls or expand the logic.
339-343
: Consider storing the diff-based node updates for better auditing.
Tracking only the final insertion path might be insufficient in complex scenarios. If historical changes are required, expandonInsert
to store old vs. new data.
396-400
:onDelete
call is helpful, but ensure the child node merges are also traced.
When short node merges occur, you might want to track that event separately so the tracer has a complete picture of transformations.
413-416: Improve comment readability for short node merges.

The in-code comment references "track is deleted as well" with a missing phrase. Reword for better readability (e.g., "track the original short node as deleted as well").

```diff
-	// The child shortNode is merged into its parent, track is deleted as well.
+	// The child shortNode is merged into its parent; track the original short node as deleted as well.
```
478-482
: Tracer consistency on nested merges.
If multiple merges happen in a chain, ensure that each node removal is properly recorded. You might need a loop or repeated calls in certain scenarios.
618-619
: Allow toggling tracer reset behavior.
If partial trace data is needed beyond a single set of operations, consider makingt.tracer.reset()
optional or delayed.
621-631: Optimize for large access lists.

Returning a map of raw node data via Witness() can become large if many nodes are accessed. Consider a lazy approach or a streaming mechanism for extremely large tries.

eth/api.go (4)
327-338
: NewExecutionWitness
method structure.
The error handling is precise, and the flow is clear. However, consider caching or reusing the witness data if repeatedly invoked with the same block, to reduce overhead.
340-368
:generateWitness
: Check for unexpected state transitions.
The function effectively builds and processes the block in a separate state, verifying correctness viaValidateState
. Make sure that exceptions (e.g., large blocks, invalid transactions) are surfaced. This method imposes extra runtime overhead; consider a caching mechanism if frequently invoked for the same block.
419-429
:ToExecutionWitness
: Handling large code or state sets.
Implementation is clear. For very large maps, the memory overhead can be high. If memory usage is a concern, consider streaming or chunking the data.
975-994: DiskRoot method: avoid discarding returned errors.

The code calls rawdb.ReadDiskStateRoot and discards the second return parameter with _. If that method can fail, consider checking the error. Otherwise, this looks solid.

miner/scroll_worker.go (1)
493-496: Tracer activation for UseZktrie.

Enabling vmConfig.Debug and assigning vmConfig.Tracer = cccLogger is a clear approach for in-depth logs. Caution about performance overhead in production if left enabled inadvertently.

core/state/statedb.go (1)
311-313: GetCodeSize similarly logs code for the witness.

Implementation parallels GetCode. Minimal duplication is acceptable here, but consider factoring out the shared logic if it grows significantly.

core/stateless/encoding.go (2)
32-34: Consider using pre-allocated slices for better performance.

The current implementation appends to slices in loops. For better performance, consider pre-allocating the slices with the exact capacity needed.

```diff
-	ext.Codes = make([][]byte, 0, len(w.Codes))
+	ext.Codes = make([][]byte, len(w.Codes))
+	i := 0
 	for code := range w.Codes {
-		ext.Codes = append(ext.Codes, []byte(code))
+		ext.Codes[i] = []byte(code)
+		i++
 	}
```

Also applies to: 36-38
63-69: Improve error handling in DecodeRLP.

The error from fromExtWitness should be wrapped with additional context for better debugging.

```diff
 func (w *Witness) DecodeRLP(s *rlp.Stream) error {
 	var ext extWitness
 	if err := s.Decode(&ext); err != nil {
-		return err
+		return fmt.Errorf("failed to decode witness: %w", err)
 	}
-	return w.fromExtWitness(&ext)
+	if err := w.fromExtWitness(&ext); err != nil {
+		return fmt.Errorf("failed to convert witness: %w", err)
+	}
+	return nil
 }
```
28-29
: Add cleanup and improve test robustness.The test should be improved to:
- Clean up resources after test completion
- Handle potential race conditions in async operations
- Add more test cases for edge scenarios
func TestAsyncChecker(t *testing.T) { + defer func() { + if chain != nil { + chain.Stop() + } + }() // ... existing test code ... time.Sleep(time.Second) + + // Add test cases for edge scenarios + t.Run("concurrent operations", func(t *testing.T) { + // Add concurrent operation tests + }) + + t.Run("error handling", func(t *testing.T) { + // Add error handling tests + }) }Also applies to: 31-31, 35-35, 39-39, 41-41
core/rawdb/accessors_state.go (1)
104-110
: Add input validation and documentation.The function should validate input parameters and include documentation about the expected behavior when the state root is not found.
+// ReadDiskStateRoot retrieves the disk state root for a given header root. +// Returns (common.Hash{}, ethereum.NotFound) if the state root doesn't exist. func ReadDiskStateRoot(db ethdb.KeyValueReader, headerRoot common.Hash) (common.Hash, error) { + if (headerRoot == common.Hash{}) { + return common.Hash{}, errors.New("invalid header root") + } data, err := db.Get(diskStateRootKey(headerRoot)) if err != nil { return common.Hash{}, err } + if len(data) != common.HashLength { + return common.Hash{}, fmt.Errorf("invalid state root length: got %d, want %d", len(data), common.HashLength) + } return common.BytesToHash(data), nil }trie/tracer.go (2)
95-106
: Consider optimizing map allocations in copy method.The current implementation allocates new maps with the exact capacity needed. This is a good practice for memory efficiency.
Consider adding a comment explaining why
maps.Clone
is used forinserts
anddeletes
but not foraccessList
:func (t *tracer) copy() *tracer { + // Deep copy accessList as it contains byte slices that need to be copied accessList := make(map[string][]byte, len(t.accessList)) for path, blob := range t.accessList { accessList[path] = common.CopyBytes(blob) }
108-122
: Consider adding error handling for large deletions.The
deletedNodes
method could potentially handle a large number of deletions. Consider adding a capacity hint to the slice allocation.func (t *tracer) deletedNodes() []string { - var paths []string + paths := make([]string, 0, len(t.deletes)) for path := range t.deletes {rollup/ccc/async_checker.go (1)
101-104
: LGTM! Clear handling of unsupported blocks.The early return for Euclid blocks is well-implemented with a clear comment explaining why these blocks are skipped.
Consider expanding the comment to provide more context:
- // Euclid blocks use MPT and CCC doesn't support them + // Euclid blocks use Merkle Patricia Trie (MPT) for state storage + // and are not supported by Chain Consensus Check (CCC)trie/zk_trie.go (1)
262-265
: Add TODO comment for unimplemented method.The
Witness
method is currently unimplemented. Add a TODO comment to track this implementation task.Apply this diff to add a TODO comment:
// Witness returns a set containing all trie nodes that have been accessed. func (t *ZkTrie) Witness() map[string]struct{} { + // TODO: Implement Witness method to track accessed trie nodes panic("not implemented") }
params/config.go (1)
655-665
: Consider using a more efficient deep copy method.The current implementation of
Clone
uses JSON marshaling/unmarshaling for deep copying, which could be inefficient for large configurations. Consider using a direct field-by-field copy or a more efficient deep copy library.-func (c *ChainConfig) Clone() *ChainConfig { - var clone ChainConfig - j, err := json.Marshal(c) - if err != nil { - panic(err) - } - if err = json.Unmarshal(j, &clone); err != nil { - panic(err) - } - return &clone +func (c *ChainConfig) Clone() *ChainConfig { + clone := *c // Copy the basic fields + + // Deep copy pointer fields + if c.ChainID != nil { + chainID := new(big.Int).Set(c.ChainID) + clone.ChainID = chainID + } + // ... similar deep copies for other pointer fields + + return &clone }
🛑 Comments failed to post (13)
core/stateless/witness.go (2)
116-122: 🛠️ Refactor suggestion
Handle empty
Headers
slice to avoid panic.
Accessingw.Headers[0]
will panic ifHeaders
is empty. Although documented, you could fail gracefully early to help diagnose a bad witness.func (w *Witness) Root() common.Hash { + if len(w.Headers) == 0 { + // Return zero hash or a descriptive error + return common.Hash{} + } return w.Headers[0].Root }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.// Root returns the pre-state root from the first header. // // Note, this method will panic in case of a bad witness (but RLP decoding will // sanitize it and fail before that). func (w *Witness) Root() common.Hash { if len(w.Headers) == 0 { // Return zero hash or a descriptive error return common.Hash{} } return w.Headers[0].Root }
71-80:
⚠️ Potential issueAdd nil-check after retrieving the parent header.
In this loop, you appendw.chain.GetHeader(...)
directly tow.Headers
. If the fetched header isnil
, it can propagate a panic upon future use.for int(w.context.Number.Uint64()-number) > len(w.Headers) { tail := w.Headers[len(w.Headers)-1] + parentHeader := w.chain.GetHeader(tail.ParentHash, tail.Number.Uint64()-1) + if parentHeader == nil { + // Return or handle error to avoid appending nil + return + } - w.Headers = append(w.Headers, w.chain.GetHeader(tail.ParentHash, tail.Number.Uint64()-1)) + w.Headers = append(w.Headers, parentHeader) }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.// AddBlockHash adds a "blockhash" to the witness with the designated offset from // chain head. Under the hood, this method actually pulls in enough headers from // the chain to cover the block being added. func (w *Witness) AddBlockHash(number uint64) { // Keep pulling in headers until this hash is populated for int(w.context.Number.Uint64()-number) > len(w.Headers) { tail := w.Headers[len(w.Headers)-1] parentHeader := w.chain.GetHeader(tail.ParentHash, tail.Number.Uint64()-1) if parentHeader == nil { // Return or handle error to avoid appending nil return } w.Headers = append(w.Headers, parentHeader) } }
trie/zk_trie_test.go (3)
297-317:
⚠️ Potential issueCheck concurrency concerns in
checkTrieEquality
.
If multiple tests run in parallel, the globalaccountsLeft
usage can conflict. MakeaccountsLeft
an argument or local variable for safer concurrency.
295-295: 🛠️ Refactor suggestion
Use local or function-level variable instead of global
accountsLeft
.
Global variables can cause concurrency issues in parallel tests. Consider making this variable local or embedding it in a struct.-var accountsLeft = -1 +func TestEquivalence(t *testing.T) { + accountsLeft := -1Committable suggestion skipped: line range outside the PR's diff.
272-288:
⚠️ Potential issueAvoid skipping the entire
TestEquivalence
or remove it if incomplete.
This test is skipped unconditionally. If it’s still in development, consider leaving a TODO comment with instructions or remove it until ready. Hardcoded file paths are also present, which can break in CI or other machines.func TestEquivalence(t *testing.T) { - t.Skip() + // t.Skip("Unskip once local path references are updated or made parameterizable") ... - zkDb, err := leveldb.New("/Users/omer/Documents/go-ethereum/l2geth-datadir/geth/chaindata", 0, 0, "", true) + zkDb, err := leveldb.New("relative/or/configured/path/chaindata", 0, 0, "", true) ...Committable suggestion skipped: line range outside the PR's diff.
trie/trie.go (1)
536-540:
⚠️ Potential issueHandle
db.Node
errors more comprehensively.
You log an error if reading the node fails, but also callt.tracer.onRead(prefix, rlp)
. Ensure partial read or decoding errors are accounted for inonRead
.eth/api.go (1)
369-398:
⚠️ Potential issue
testWitness
method: Potential confusion in error context.
When comparingstatedb.GetRootHash()
topostStateRoot
, the fallback returns an error that wrapserr
, buterr
is not reassigned in that final block. This can cause confusion if the outererr
is nil at that point. Suggest explicitly returning a new error message without wrapping a possibly nil error.- return fmt.Errorf("failed to commit statelessly %d: %w", block.Number(), err) + return fmt.Errorf("failed to commit statelessly for block %d. Root mismatch detected", block.Number())📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.func testWitness(blockchain *core.BlockChain, block *types.Block, witness *stateless.Witness) error { stateRoot := witness.Root() if diskRoot, _ := rawdb.ReadDiskStateRoot(blockchain.Database(), stateRoot); diskRoot != (common.Hash{}) { stateRoot = diskRoot } // Create and populate the state database to serve as the stateless backend statedb, err := state.New(stateRoot, state.NewDatabase(witness.MakeHashDB()), nil) if err != nil { return fmt.Errorf("failed to create state database: %w", err) } receipts, _, usedGas, err := blockchain.Processor().Process(block, statedb, *blockchain.GetVMConfig()) if err != nil { return fmt.Errorf("failed to process block %d: %w", block.Number(), err) } if err := blockchain.Validator().ValidateState(block, statedb, receipts, usedGas); err != nil { return fmt.Errorf("failed to validate block %d: %w", block.Number(), err) } postStateRoot := block.Root() if diskRoot, _ := rawdb.ReadDiskStateRoot(blockchain.Database(), postStateRoot); diskRoot != (common.Hash{}) { postStateRoot = diskRoot } if statedb.GetRootHash() != postStateRoot { return fmt.Errorf("failed to commit statelessly for block %d. Root mismatch detected", block.Number()) } return nil }
core/stateless/encoding.go (1)
43-54: 🛠️ Refactor suggestion
Add validation for nil fields and empty data.
The
fromExtWitness
method should validate input data before processing. Consider adding checks for nil pointers and empty data.func (w *Witness) fromExtWitness(ext *extWitness) error { + if ext == nil { + return errors.New("nil witness") + } w.Headers = ext.Headers w.Codes = make(map[string]struct{}, len(ext.Codes)) for _, code := range ext.Codes { + if len(code) == 0 { + return errors.New("empty code") + } w.Codes[string(code)] = struct{}{} } w.State = make(map[string]struct{}, len(ext.State)) for _, node := range ext.State { + if len(node) == 0 { + return errors.New("empty state node") + } w.State[string(node)] = struct{}{} } return nil }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.func (w *Witness) fromExtWitness(ext *extWitness) error { if ext == nil { return errors.New("nil witness") } w.Headers = ext.Headers w.Codes = make(map[string]struct{}, len(ext.Codes)) for _, code := range ext.Codes { if len(code) == 0 { return errors.New("empty code") } w.Codes[string(code)] = struct{}{} } w.State = make(map[string]struct{}, len(ext.State)) for _, node := range ext.State { if len(node) == 0 { return errors.New("empty state node") } w.State[string(node)] = struct{}{} } return nil }
core/stateless/database.go (1)
36-67: 🛠️ Refactor suggestion
Optimize memory usage and add error handling.
The current implementation could be improved in several ways:
- The hasher is reused but could be moved to a pool for concurrent usage
- Missing error handling for database operations
- No batch operations for better performance
func (w *Witness) MakeHashDB() ethdb.Database { var ( memdb = rawdb.NewMemoryDatabase() hasher = crypto.NewKeccakState() hash = make([]byte, 32) + batch = memdb.NewBatch() ) + defer hasher.Reset() // Inject all the "block hashes" (i.e. headers) into the ephemeral database for _, header := range w.Headers { - rawdb.WriteHeader(memdb, header) + rawdb.WriteHeader(batch, header) } // Inject all the bytecodes into the ephemeral database for code := range w.Codes { blob := []byte(code) hasher.Reset() hasher.Write(blob) hasher.Read(hash) - rawdb.WriteCode(memdb, common.BytesToHash(hash), blob) + rawdb.WriteCode(batch, common.BytesToHash(hash), blob) } // Inject all the MPT trie nodes into the ephemeral database for node := range w.State { blob := []byte(node) hasher.Reset() hasher.Write(blob) hasher.Read(hash) - rawdb.WriteTrieNode(memdb, common.BytesToHash(hash), blob) + rawdb.WriteTrieNode(batch, common.BytesToHash(hash), blob) } + + if err := batch.Write(); err != nil { + log.Crit("Failed to write witness data to database", "err", err) + } return memdb }Committable suggestion skipped: line range outside the PR's diff.
core/rawdb/accessors_state.go (1)
98-102: 🛠️ Refactor suggestion
Consider using batch operations and improve error handling.
The current implementation uses critical logging for errors, which might not be appropriate for all scenarios. Consider using batch operations for better performance and returning errors instead of logging them critically.
-func WriteDiskStateRoot(db ethdb.KeyValueWriter, headerRoot, diskRoot common.Hash) { +func WriteDiskStateRoot(db ethdb.KeyValueWriter, headerRoot, diskRoot common.Hash) error { if err := db.Put(diskStateRootKey(headerRoot), diskRoot.Bytes()); err != nil { - log.Crit("Failed to store disk state root", "err", err) + return fmt.Errorf("failed to store disk state root: %w", err) } + return nil }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.func WriteDiskStateRoot(db ethdb.KeyValueWriter, headerRoot, diskRoot common.Hash) error { if err := db.Put(diskStateRootKey(headerRoot), diskRoot.Bytes()); err != nil { return fmt.Errorf("failed to store disk state root: %w", err) } return nil }
light/trie.go (1)
184-188: 💡 Codebase verification
⚠️ Potential issueCritical:
Witness()
method requires proper implementationThe
Witness()
method is part of the coreTrie
interface contract and is actively used in state management operations. The currentpanic
implementation inodrTrie
will cause runtime failures when:
- Collecting state witnesses in
StateDB
- Processing state operations that track accessed trie nodes
Evidence of usage:
- Called in
core/state/statedb.go
for witness collection- Required by the
Trie
interface incore/state/database.go
- Other implementations (e.g.,
SecureTrie
) properly implement this method🔗 Analysis chain
Replace panic with proper error handling.
The
Witness
method's implementation usingpanic
is not production-ready and could cause runtime issues.Consider implementing proper error handling:
func (t *odrTrie) Witness() map[string]struct{} { - panic("not implemented") + // TODO: Implement witness functionality for odrTrie + return nil }Let's check if this method is called anywhere in the codebase:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for any calls to Witness() on odrTrie rg -l "odrTrie.*\.Witness\(\)"Length of output: 32
Script:
#!/bin/bash # Search for interface definitions containing Witness method echo "=== Interfaces with Witness method ===" ast-grep --pattern 'type $_ interface { $$$ Witness() map[string]struct{} $$$ }' # Search for any usage of odrTrie type echo -e "\n=== odrTrie type usage ===" rg "odrTrie" -A 2 -B 2 # Search for any Witness() calls echo -e "\n=== Any Witness() calls ===" rg "\.Witness\(\)" -A 2 -B 2Length of output: 8437
trie/zk_trie.go (2)
237-243: 🛠️ Refactor suggestion
Consider using error handling instead of panic.
The
CountLeaves
method currently panics if it cannot get the root. Consider returning an error instead to allow better error handling by the caller.Apply this diff to improve error handling:
-func (t *ZkTrie) CountLeaves() uint64 { +func (t *ZkTrie) CountLeaves() (uint64, error) { root, err := t.ZkTrie.Tree().Root() if err != nil { - panic("CountLeaves cannot get root") + return 0, fmt.Errorf("failed to get root: %w", err) } - return t.countLeaves(root) + return t.countLeaves(root), nil }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.func (t *ZkTrie) CountLeaves() (uint64, error) { root, err := t.ZkTrie.Tree().Root() if err != nil { return 0, fmt.Errorf("failed to get root: %w", err) } return t.countLeaves(root), nil }
245-260: 🛠️ Refactor suggestion
Improve error handling in recursive method.
Similar to
CountLeaves
, thecountLeaves
method should handle errors gracefully instead of panicking.Apply this diff to improve error handling:
-func (t *ZkTrie) countLeaves(root *zkt.Hash) uint64 { +func (t *ZkTrie) countLeaves(root *zkt.Hash) (uint64, error) { if root == nil { - return 0 + return 0, nil } rootNode, err := t.ZkTrie.Tree().GetNode(root) if err != nil { - panic("countLeaves cannot get rootNode") + return 0, fmt.Errorf("failed to get node: %w", err) } if rootNode.Type == zktrie.NodeTypeLeaf_New { - return 1 + return 1, nil } else { - return t.countLeaves(rootNode.ChildL) + t.countLeaves(rootNode.ChildR) + leftCount, err := t.countLeaves(rootNode.ChildL) + if err != nil { + return 0, err + } + rightCount, err := t.countLeaves(rootNode.ChildR) + if err != nil { + return 0, err + } + return leftCount + rightCount, nil } }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.func (t *ZkTrie) countLeaves(root *zkt.Hash) (uint64, error) { if root == nil { return 0, nil } rootNode, err := t.ZkTrie.Tree().GetNode(root) if err != nil { return 0, fmt.Errorf("failed to get node: %w", err) } if rootNode.Type == zktrie.NodeTypeLeaf_New { return 1, nil } else { leftCount, err := t.countLeaves(rootNode.ChildL) if err != nil { return 0, err } rightCount, err := t.countLeaves(rootNode.ChildR) if err != nil { return 0, err } return leftCount + rightCount, nil } }
…-use-code-from-l1-follower Conflicts: eth/api.go trie/zk_trie.go
Actionable comments posted: 0
🧹 Nitpick comments (1)
cmd/utils/flags.go (1)
1642-1654: LGTM! Consider adding documentation.

The implementation correctly sets up the DA syncing configuration and API endpoints. The endpoints are set independently of the DA sync enabled flag, which provides good flexibility.

Consider adding documentation to explain:
- The purpose of each API endpoint
- The relationship between these endpoints and the DA syncing feature
- Any requirements or constraints for the endpoint URLs

Example:

```diff
 func setDA(ctx *cli.Context, cfg *ethconfig.Config) {
+	// Configure Data Availability (DA) syncing and related API endpoints.
+	// The API endpoints are used to retrieve blob data from L1 and can be set
+	// independently of whether DA syncing is enabled.
 	if ctx.IsSet(DASyncEnabledFlag.Name) {
 		cfg.EnableDASyncing = ctx.Bool(DASyncEnabledFlag.Name)
 	}
+	// BlobScan API endpoint for retrieving blob data
 	if ctx.IsSet(DABlobScanAPIEndpointFlag.Name) {
 		cfg.DA.BlobScanAPIEndpoint = ctx.String(DABlobScanAPIEndpointFlag.Name)
 	}
+	// BlockNative API endpoint for retrieving block data
 	if ctx.IsSet(DABlockNativeAPIEndpointFlag.Name) {
 		cfg.DA.BlockNativeAPIEndpoint = ctx.String(DABlockNativeAPIEndpointFlag.Name)
 	}
+	// Beacon node API endpoint for retrieving beacon chain data
 	if ctx.IsSet(DABeaconNodeAPIEndpointFlag.Name) {
 		cfg.DA.BeaconNodeAPIEndpoint = ctx.String(DABeaconNodeAPIEndpointFlag.Name)
 	}
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cmd/utils/flags.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test
…-use-code-from-l1-follower
1. Purpose or design rationale of this PR

This PR refactors the rollup sync service to use CalldataBlobSource to retrieve data from L1. This removes a lot of duplicate code and makes future upgrades easier to support.

Specifically, before this PR the rollup sync service relied solely on the batch information contained in the commit transaction's calldata. However, with a future upgrade we will move this calldata to the blob. Therefore, the rollup sync service additionally needs to be able to retrieve and interpret the blob data.

This functionality is already provided by the L1 follower mode, which is why this PR reuses its internal component CalldataBlobSource to do this job. To support another upgrade (a new batch version), changes need to be made only within CalldataBlobSource (see the sketch below).
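To make the reuse concrete, here is a minimal, self-contained sketch of the control flow described above. dataSource is a stand-in for CalldataBlobSource, and the NextData/L1Height method names are assumptions based on this description rather than the actual API; updateRollupEvents corresponds to the handler mentioned in the walkthrough.

```go
package sketch

import (
	"context"
	"fmt"
)

// dataSource captures the small surface of CalldataBlobSource that the sync
// service needs; method names are assumptions.
type dataSource interface {
	NextData() ([]Entry, error) // next batch of DA entries (commit / revert / finalize)
	L1Height() uint64           // highest L1 block processed so far
}

// Entry stands in for da.Entry in this sketch.
type Entry interface{ BatchIndex() uint64 }

// syncRollupEvents drains the source until it has covered L1 blocks up to 'to',
// handing every entry to the caller-provided handler (e.g. updateRollupEvents).
func syncRollupEvents(ctx context.Context, src dataSource, to uint64, handle func(Entry) error) error {
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}
		entries, err := src.NextData()
		if err != nil {
			return fmt.Errorf("failed to retrieve DA entries: %w", err)
		}
		for _, e := range entries {
			if err := handle(e); err != nil {
				return fmt.Errorf("failed to process batch %d: %w", e.BatchIndex(), err)
			}
		}
		if src.L1Height() >= to {
			return nil
		}
	}
}
```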
2. PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
3. Deployment tag versioning

Has the version in params/version.go been updated?

4. Breaking change label

Does this PR have the breaking-change label?

Summary by CodeRabbit
New Features
Refactor
Tests