feat(katana): dedup fork request #2001
Conversation
Codecov Report

Attention: Patch coverage is

```
@@            Coverage Diff             @@
##             main    #2001      +/-   ##
==========================================
- Coverage   57.41%   55.91%   -1.50%
==========================================
  Files         406      439      +33
  Lines       51330    56147    +4817
==========================================
+ Hits        29470    31395    +1925
- Misses      21860    24752    +2892
==========================================
```

View full report in Codecov by Sentry.
@cwkang1998 sorry for the delay on this one. A lot of changes have happened since. @kariy any recommendation around this issue and how it should be tackled?
Hey, sorry for being so slow on this as well! I think I am still missing some implementation details, so I will get an update to this PR within the week, but if there are any changes I need to make please do tell me and I will try my best to implement them asap.
nope, nothing changed from what we discussed before. this PR is merely for optimization purposes, and is very isolated from changes made in other parts of the codebase and isn't tied to the system's architecture itself. so the current changes are already good; we're only missing the part where we wait for the response of the initial request and reuse it for the deduped ones.
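The missing piece described here — waiting on the initial request's response and fanning it out to every deduplicated caller — can be sketched with std channels. Everything below (`Dedup`, `RequestKey`, the string response) is an illustrative stand-in for Katana's actual types, not the PR's code:

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Sender};

// Hypothetical key/response types standing in for Katana's
// BackendRequestIdentifier and BackendResponse.
type RequestKey = (&'static str, u64);
type Response = String;

struct Dedup {
    // All callers waiting on the same in-flight request share one entry.
    waiting: HashMap<RequestKey, Vec<Sender<Response>>>,
}

impl Dedup {
    fn new() -> Self {
        Self { waiting: HashMap::new() }
    }

    // Returns true if this is the first request for the key,
    // i.e. the caller should actually dispatch the RPC call.
    fn register(&mut self, key: RequestKey, tx: Sender<Response>) -> bool {
        let entry = self.waiting.entry(key).or_default();
        entry.push(tx);
        entry.len() == 1
    }

    // Fan the single response out to every waiting sender, then
    // remove the entry so the key can be requested again later.
    fn complete(&mut self, key: RequestKey, res: Response) {
        if let Some(senders) = self.waiting.remove(&key) {
            for tx in senders {
                let _ = tx.send(res.clone());
            }
        }
    }
}

fn main() {
    let mut dedup = Dedup::new();
    let (tx1, rx1) = channel();
    let (tx2, rx2) = channel();
    let key = ("nonce", 0x1);

    assert!(dedup.register(key, tx1)); // first request: dispatch it
    assert!(!dedup.register(key, tx2)); // duplicate: just wait

    dedup.complete(key, "0x5".to_string());
    assert_eq!(rx1.recv().unwrap(), "0x5");
    assert_eq!(rx2.recv().unwrap(), "0x5");
}
```

The cleanup in `complete` matters: removing the entry after fan-out is what lets a later identical request trigger a fresh RPC call instead of waiting forever.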
Force-pushed from 259f197 to 2458c37 (compare).
@cwkang1998 no pressure but do you think you can continue this PR? otherwise I can take it over as necessary.
Just added the changes required to make the result return as expected. Sorry for such a long delay. Will attempt to add more tests for this.
Walkthrough

Ohayo, sensei! The pull request introduces significant enhancements to the

Changes

Assessment against linked issues

Possibly related PRs
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (5)
crates/katana/storage/provider/src/providers/fork/backend.rs (5)
**138-140**: Effective use of data structures for request management

The addition of `request_dedup_map` and `pending_requests` to the `Backend` struct appropriately manages deduplicated requests and pending futures. Ensure that these structures are managed efficiently to handle high loads without performance degradation.

Consider monitoring the size of these collections and implementing cleanup strategies if necessary to prevent potential memory bloat.
**203-306**: Sensei, refactor duplicated code in `handle_requests` to improve maintainability

The deduplication logic in the `handle_requests` method is repeated for each request type, leading to code duplication. This can make future maintenance and updates more error-prone.

Consider abstracting the common deduplication logic into a helper function or macro that can be reused for all request types. This will reduce code repetition and simplify future modifications.
**357-370**: Ensure efficient cloning of `BackendResponse` to optimize performance

Cloning `BackendResponse` for each waiting sender may introduce performance overhead, especially if the response contains large data.

Consider wrapping the `BackendResponse` in an `Arc` to allow for cheap cloning and shared ownership:

```diff
- sender.send(res.clone()).expect(
+ let shared_res = Arc::new(res);
+ sender.send(Arc::clone(&shared_res)).expect(
```

This change reduces the cost of cloning responses and improves performance when dealing with multiple senders.
**363-366**: Handle potential errors when sending responses to prevent panics

Using `.expect` when sending responses can cause the application to panic if a sender is dropped or encounters an error.

Modify the code to handle send errors gracefully:

```diff
- sender.send(res.clone()).expect(
+ if let Err(e) = sender.send(res.clone()) {
+     error!(target: LOG_TARGET, "Failed to send response for {:?}: {:?}", fut_key, e);
+ }
```

This logs the error without panicking, ensuring the backend remains robust even if some receivers have disconnected.
**842-874**: Improve test reliability by synchronizing threads instead of using `thread::sleep`

In the test `get_nonce_request_should_be_deduplicated`, `thread::sleep(Duration::from_secs(1))` is used to wait for requests to complete. This approach can lead to flaky tests due to timing issues.

Consider using synchronization primitives like channels or `Arc<AtomicUsize>` counters to synchronize the completion of threads:

```rust
let counter = Arc::new(AtomicUsize::new(0));

let h1 = handle.clone();
let c1 = counter.clone();
thread::spawn(move || {
    h1.get_nonce(felt!("0x1").into()).expect(ERROR_SEND_REQUEST);
    c1.fetch_add(1, Ordering::SeqCst);
});

// Repeat for other threads...

// Wait until counter reaches the expected value
while counter.load(Ordering::SeqCst) < EXPECTED_COUNT {
    std::thread::yield_now();
}
```

This ensures the test waits for all spawned threads to complete without relying on arbitrary sleep durations.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
crates/katana/storage/provider/src/providers/fork/backend.rs
(19 hunks)
🔇 Additional comments (5)
crates/katana/storage/provider/src/providers/fork/backend.rs (5)
**38-45**: Ohayo, sensei! Well-structured `BackendResponse` enum enhances response handling

The introduction of the `BackendResponse` enum provides a unified structure for handling various backend responses. This improves code maintainability and clarity by standardizing the response types.
**54-60**: Consistency in error handling using `Arc<anyhow::Error>`

Wrapping errors with `Arc<anyhow::Error>` in the `BackendError` enum ensures thread-safe error propagation across asynchronous contexts. This is crucial for concurrent environments to prevent data races and maintain error integrity.
**Line range hint 84-105**: Updated method signatures align with unified response handling

The changes in the method signatures of `BackendRequest` variants to return `(BackendRequest, OneshotReceiver<BackendResponse>)` align with the new unified `BackendResponse` enum. This streamlines response handling and reduces code complexity.
**117-127**: Introduction of `BackendRequestIdentifier` for efficient deduplication

Using the `BackendRequestIdentifier` enum to uniquely identify requests enables efficient deduplication of identical requests. This optimization can significantly reduce redundant network calls and improve overall performance.
**703-708**: Comprehensive handling of provider errors in `handle_not_found_err`

The `handle_not_found_err` function converts specific `StarknetError` variants to `Option::None`, following the provider API conventions.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (3)
crates/katana/storage/provider/src/providers/fork/backend.rs (3)
**47-60**: Consider adding more context to error messages

While the error handling is generally good, some error messages could be more descriptive to aid in debugging:

```diff
 #[derive(Debug, thiserror::Error, Clone)]
 pub enum BackendError {
-    #[error("failed to send request to backend: {0}")]
+    #[error("failed to send request to backend (channel might be full or closed): {0}")]
     FailedSendRequest(#[from] SendError),
-    #[error("failed to receive result from backend: {0}")]
+    #[error("failed to receive result from backend (sender might have been dropped): {0}")]
     FailedReceiveResult(#[from] RecvError),
```
**369-388**: Improve error handling for sender failures

The current implementation uses `unwrap_or_else` with error logging for sender failures. While this works, we could improve the robustness by:

- Tracking failed sends
- Cleaning up resources for failed senders

Consider refactoring the response handling:

```diff
-sender_vec.iter().for_each(|sender| {
-    sender.send(res.clone()).unwrap_or_else(|_| error!(target: LOG_TARGET, "failed to send result of request {:?} to sender {:?}", fut_key, sender));
-});
+let (successful_senders, failed_senders): (Vec<_>, Vec<_>) = sender_vec
+    .iter()
+    .partition(|sender| sender.send(res.clone()).is_ok());
+
+if !failed_senders.is_empty() {
+    error!(
+        target: LOG_TARGET,
+        "Failed to send results to {} senders for request {:?}",
+        failed_senders.len(),
+        fut_key
+    );
+}
```
**Line range hint 878-1209**: Ohayo! Great test coverage with some potential improvements

The test suite is comprehensive and well-structured. A few suggestions to make it even better:

- Add test cases for concurrent request failures
- Consider parameterized tests to reduce code duplication
- Add timing assertions to verify deduplication performance

Example of a parameterized test:

```rust
#[test]
fn test_request_deduplication() {
    let test_cases = vec![
        ("nonce", vec![
            Box::new(|h: &BackendHandle| h.get_nonce(felt!("0x1").into())),
            Box::new(|h: &BackendHandle| h.get_nonce(felt!("0x1").into())),
            Box::new(|h: &BackendHandle| h.get_nonce(felt!("0x2").into())),
        ], 2),
        ("class", vec![
            Box::new(|h: &BackendHandle| h.get_class_at(felt!("0x1"))),
            Box::new(|h: &BackendHandle| h.get_class_at(felt!("0x1"))),
            Box::new(|h: &BackendHandle| h.get_class_at(felt!("0x2"))),
        ], 2),
        // Add more cases
    ];

    for (name, requests, expected_count) in test_cases {
        // Your existing test logic here
    }
}
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
crates/katana/storage/provider/src/providers/fork/backend.rs
(20 hunks)
🔇 Additional comments (2)
crates/katana/storage/provider/src/providers/fork/backend.rs (2)
**38-45**: Ohayo, sensei! The `BackendResponse` enum looks good!

The enum provides a clean and type-safe way to handle different response types. The implementation is well-structured and follows Rust best practices.
**119-127**: Ohayo! Well-designed identifier enum for deduplication

The `BackendRequestIdentifier` enum is well-structured for request deduplication:

- Derives appropriate traits for HashMap key usage
- Covers all request types
- Uses efficient memory layout with `Copy`
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
crates/katana/storage/provider/src/providers/fork/backend.rs (2)
**339-347**: Enhance error handling for response distribution

The error handling when sending responses to multiple senders could be improved. Currently, errors are just logged, which might hide issues.

Consider collecting failed sends and handling them more explicitly:

```diff
- sender_vec.iter().for_each(|sender| {
-     sender.send(res.clone()).unwrap_or_else(|_| error!(target: LOG_TARGET, "failed to send result of request {:?} to sender {:?}", fut_key, sender));
- });
+ let failed_sends: Vec<_> = sender_vec.iter()
+     .filter_map(|sender| {
+         sender.send(res.clone())
+             .err()
+             .map(|_| sender)
+     })
+     .collect();
+
+ if !failed_sends.is_empty() {
+     error!(
+         target: LOG_TARGET,
+         "Failed to send results to {} senders for request {:?}",
+         failed_sends.len(),
+         fut_key
+     );
+ }
```
**736-774**: Add more detailed documentation for mock RPC server

While the implementation is solid, the documentation could be enhanced to better explain:

- The purpose of the response timing control
- The format requirements for the response string
- Usage examples

Consider adding documentation like this:

```rust
/// Helper function to start a TCP server that returns predefined JSON-RPC responses.
///
/// # Arguments
///
/// * `addr` - The address to bind the server to (e.g., "127.0.0.1:8080")
/// * `response` - A JSON-RPC response string that will be returned for all requests
///
/// # Returns
///
/// A `SyncSender` that can be used to control when the server sends its response.
/// This allows for testing race conditions and request deduplication timing.
///
/// # Example
///
/// ```
/// let response = r#"{"jsonrpc":"2.0","result":"0x123","id":1}"#;
/// let sender = start_mock_rpc_server("127.0.0.1:8090".to_string(), response.to_string());
///
/// // Wait for client requests to arrive
/// thread::sleep(Duration::from_secs(1));
///
/// // Signal server to send response
/// sender.send(()).unwrap();
/// ```
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
crates/katana/storage/provider/src/providers/fork/backend.rs
(21 hunks)
🔇 Additional comments (5)
crates/katana/storage/provider/src/providers/fork/backend.rs (5)
**38-45**: Ohayo, sensei! Well-structured response type implementation

The `BackendResponse` enum effectively standardizes the response types across different backend requests, making the code more maintainable and type-safe.
**54-60**: Good use of `Arc` for error handling

Using `Arc` for error types allows for efficient shared ownership of error instances, which is particularly important in a concurrent context.
**119-127**: Well-designed request deduplication identifier

The `BackendRequestIdentifier` enum provides a clean and type-safe way to identify and deduplicate requests. The derive macros for `Eq`, `Hash`, and `PartialEq` are appropriately used for HashMap keys.
**Line range hint 840-1171**: Ohayo, sensei! Excellent test coverage for deduplication

The test suite thoroughly covers various deduplication scenarios:

- Basic request deduplication
- Cross-function deduplication
- Same address with different keys
- Response consistency verification

The use of mock servers and controlled response timing is particularly well done.
**272-297**: 🛠️ Refactor suggestion — Consider enhancing error handling in deduplication logic

The error handling in the `dedup_request` function could be improved. Currently, it logs an error and does nothing when the sender vector is not found, which should never happen according to the comment.

Consider either:

- Using `expect` since this is a programming error that should never happen
- Or returning a `Result` to properly handle the error case

```diff
- match self.request_dedup_map.get_mut(&req_key) {
-     Some(sender_vec) => {
-         sender_vec.push(sender);
-     }
-     None => {
-         // Log this and do nothing here, as this should never happen.
-         // If this does happen it is an unexpected bug.
-         error!(target: LOG_TARGET, "failed to get current request dedup vector");
-     }
- }
+ // This unwrap is safe because we just checked the entry exists
+ self.request_dedup_map.get_mut(&req_key)
+     .expect("request dedup vector disappeared")
+     .push(sender);
```

Likely invalid or redundant comment.
Hey @cwkang1998. Awesome work on this! Nice that you wrote tests for all the request types. I just have a few minor comments, but overall the implementation looks fine.
```rust
    rpc_call_future: F,
) where
    F: FnOnce() -> BoxFuture<'static, BackendResponse>,
```
is there a reason why this needs to be a `FnOnce` and not just `BoxFuture<'static, BackendResponse>`?
Now that I think about it, not really, going to change it to just a BoxFuture.
```rust
) where
    F: FnOnce() -> BoxFuture<'static, BackendResponse>,
{
    if let std::collections::hash_map::Entry::Vacant(e) = self.request_dedup_map.entry(req_key)
```
there shouldn't be a conflict so prefer to do it this way:

```diff
- if let std::collections::hash_map::Entry::Vacant(e) = self.request_dedup_map.entry(req_key)
+ if let Entry::Vacant(e) = self.request_dedup_map.entry(req_key)
```
```rust
sender_vec.iter().for_each(|sender| {
    sender.send(res.clone()).unwrap_or_else(|_| error!(target: LOG_TARGET, "failed to send result of request {:?} to sender {:?}", fut_key, sender));
});
```
let's use structured fields:

```diff
- sender_vec.iter().for_each(|sender| {
-     sender.send(res.clone()).unwrap_or_else(|_| error!(target: LOG_TARGET, "failed to send result of request {:?} to sender {:?}", fut_key, sender));
- });
+ sender.send(res.clone()).unwrap_or_else(|error| {
+     error!(target: LOG_TARGET, key = ?fut_key, %error, "Failed to send result.")
+ });
```
```rust
// send requests to the backend
let h1 = handle.clone();
thread::spawn(move || {
    h1.get_nonce(felt!("0x1").into()).expect(ERROR_SEND_REQUEST);
});
let h2 = handle.clone();
thread::spawn(move || {
    h2.get_nonce(felt!("0x1").into()).expect(ERROR_SEND_REQUEST);
});
// Different request, should be counted
let h3 = handle.clone();
thread::spawn(move || {
    h3.get_nonce(felt!("0x2").into()).expect(ERROR_SEND_REQUEST);
});

// wait for the requests to be handled
thread::sleep(Duration::from_secs(1));

// check request are handled
let stats = handle.stats().expect(ERROR_STATS);
assert_eq!(stats, 2, "Backend should only have 2 ongoing requests.")
}
```
Let's do a check before sending the third request to make sure the requests actually get dedup'd. Having it only at the end can be a bit vague because it doesn't indicate whether the 2 ongoing ones are exactly what we're expecting.
Suggested change:

```rust
// send requests to the backend
let h1 = handle.clone();
thread::spawn(move || {
    h1.get_nonce(felt!("0x1").into()).expect(ERROR_SEND_REQUEST);
});
let h2 = handle.clone();
thread::spawn(move || {
    h2.get_nonce(felt!("0x1").into()).expect(ERROR_SEND_REQUEST);
});

// wait for the requests to be handled
thread::sleep(Duration::from_secs(1));

// check request are handled
let stats = handle.stats().expect(ERROR_STATS);
assert_eq!(stats, 1, "Backend should only have 1 ongoing requests.");

// Different request, should be counted
let h3 = handle.clone();
thread::spawn(move || {
    h3.get_nonce(felt!("0x2").into()).expect(ERROR_SEND_REQUEST);
});

// wait for the requests to be handled
thread::sleep(Duration::from_secs(1));

// check request are handled
let stats = handle.stats().expect(ERROR_STATS);
assert_eq!(stats, 2, "Backend should only have 2 ongoing requests.")
}
```
Please run
Actionable comments posted: 1
🧹 Nitpick comments (3)
crates/katana/storage/provider/src/providers/fork/backend.rs (3)
**120-128**: Consider adding more detailed documentation for `BackendRequestIdentifier`

The enum implementation is solid, but it would benefit from documentation explaining how each variant is used in the deduplication process.

Add documentation like this:

```diff
 // Identifier for pending requests.
 // This is used for request deduplication.
+/// Enum representing unique identifiers for different types of backend requests.
+/// Used as keys in the deduplication HashMap to track and combine duplicate requests.
 #[derive(Eq, Hash, PartialEq, Clone, Copy, Debug)]
 enum BackendRequestIdentifier {
+    /// Identifies a nonce request for a specific contract address
     Nonce(ContractAddress),
+    /// Identifies a class request for a specific class hash
     Class(ClassHash),
     // ... similar for other variants
 }
```
**844-1131**: Enhance test assertions for better clarity

The deduplication tests are comprehensive, but some assertions could be more descriptive.

Consider improving the assertion messages:

```diff
- assert_eq!(stats, 1, "Backend should have 1 ongoing requests.");
+ assert_eq!(
+     stats,
+     1,
+     "Backend should have exactly 1 ongoing request after deduplicating identical requests"
+ );
```

Also, consider adding test cases for:

- Concurrent requests with different timing
- Error cases in deduplication
- Edge cases with request cancellation
**743-778**: Add error handling for mock RPC server

While the implementation works well for testing, it could benefit from more robust error handling.

Consider these improvements:

```diff
 pub fn start_mock_rpc_server(addr: String, response: String) -> SyncSender<()> {
     use tokio::runtime::Builder;
     let (tx, rx) = sync_channel::<()>(1);
     thread::spawn(move || {
         Builder::new_current_thread().enable_all().build().unwrap().block_on(async move {
-            let listener = TcpListener::bind(addr).await.unwrap();
+            let listener = TcpListener::bind(&addr).await.expect("Failed to bind mock server");
             loop {
-                let (mut socket, _) = listener.accept().await.unwrap();
+                let (mut socket, _) = match listener.accept().await {
+                    Ok(socket) => socket,
+                    Err(e) => {
+                        error!(target: LOG_TARGET, %addr, %e, "Mock server accept failed");
+                        continue;
+                    }
+                };
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
crates/katana/storage/provider/src/providers/fork/backend.rs
(22 hunks)
🔇 Additional comments (4)
crates/katana/storage/provider/src/providers/fork/backend.rs (4)
**39-46, 55-61**: Ohayo, sensei! The `BackendResponse` and `BackendError` changes look good!

The new `BackendResponse` enum provides a clean abstraction for different response types, and the `Arc` wrapping in `BackendError` enables efficient thread-safe error sharing.
**336-357**: Ohayo, sensei! The response handling implementation is well done!

The code effectively manages multiple senders, uses structured logging for errors, and properly cleans up the deduplication map after processing.
**1164-1209**: Ohayo, sensei! Excellent test for result consistency!

The test effectively verifies that all deduplicated requests receive the same response, using proper synchronization and clear assertions.
**690-695**: Clean error handling implementation!

The function effectively converts provider-specific errors into the expected `None` responses, following Katana's provider API conventions.
```rust
fn dedup_request(
    &mut self,
    req_key: BackendRequestIdentifier,
    sender: OneshotSender<BackendResponse>,
    rpc_call_future: BoxFuture<'static, BackendResponse>,
) {
    if let Entry::Vacant(e) = self.request_dedup_map.entry(req_key) {
        self.pending_requests.push((req_key, rpc_call_future));
        e.insert(vec![sender]);
    } else {
        match self.request_dedup_map.get_mut(&req_key) {
            Some(sender_vec) => {
                sender_vec.push(sender);
            }
            None => {
                // Log this and do nothing here, as this should never happen.
                // If this does happen it is an unexpected bug.
                error!(target: LOG_TARGET, "failed to get current request dedup vector");
            }
        }
    }
}
```
🛠️ Refactor suggestion

Ohayo, sensei! Improve error handling in `dedup_request`

The deduplication logic is solid, but the error handling in the `None` case could be more robust.

Consider this improvement:

```diff
- None => {
-     // Log this and do nothing here, as this should never happen.
-     // If this does happen it is an unexpected bug.
-     error!(target: LOG_TARGET, "failed to get current request dedup vector");
- }
+ None => {
+     // This should never happen as we just checked the key exists
+     error!(
+         target: LOG_TARGET,
+         key = ?req_key,
+         "Bug: Failed to get request dedup vector for existing key"
+     );
+     // Recover by treating it as a new request
+     self.pending_requests.push((req_key, rpc_call_future));
+     self.request_dedup_map.insert(req_key, vec![sender]);
+ }
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```rust
fn dedup_request(
    &mut self,
    req_key: BackendRequestIdentifier,
    sender: OneshotSender<BackendResponse>,
    rpc_call_future: BoxFuture<'static, BackendResponse>,
) {
    if let Entry::Vacant(e) = self.request_dedup_map.entry(req_key) {
        self.pending_requests.push((req_key, rpc_call_future));
        e.insert(vec![sender]);
    } else {
        match self.request_dedup_map.get_mut(&req_key) {
            Some(sender_vec) => {
                sender_vec.push(sender);
            }
            None => {
                // This should never happen as we just checked the key exists
                error!(
                    target: LOG_TARGET,
                    key = ?req_key,
                    "Bug: Failed to get request dedup vector for existing key"
                );
                // Recover by treating it as a new request
                self.pending_requests.push((req_key, rpc_call_future));
                self.request_dedup_map.insert(req_key, vec![sender]);
            }
        }
    }
}
```
Description
Implements a request deduplication process via the use of a `HashSet` on the requests. This also requires adding a key to the `pending_requests` vector so that any pending request that was popped can be added back to the queue.

The key for request deduplication was originally implemented as a `String`, but was later changed to an enum to enforce the possible types.

Considered `IndexMap` to reduce the need for an extra data structure, but since that requires adding a package and this achieves the same result, kept this implementation.

Related issue
Closes #1465.
Tests
The tests for the changes I've made pass. However, some seemingly unrelated tests are failing and I am not sure what the exact issue is. Hopefully it's a setup issue and it's only failing on my machine.
Added to documentation?
Not sure docs are needed for this in the dojo book, but if needed I can add to it. For now only comments are added for explanation.
Checklist

- Formatting scripts run (`scripts/prettier.sh`, `scripts/rust_fmt.sh`, `scripts/cairo_fmt.sh`)
- Lint and docs scripts run (`scripts/clippy.sh`, `scripts/docs.sh`)

Summary by CodeRabbit
New Features
Bug Fixes
Tests