Migrate table service to pipeline architecture #352
Conversation
This is a good start, though we'll want to run this against actual services to make sure things work. I have a few points I'd like to be addressed first, and I'd also like someone more familiar with table storage to take a look.
@@ -0,0 +1,4 @@
#[derive(Debug, Default, Clone, PartialEq, Eq)]
pub struct TableContext {}
What is this for?
In the creation of PipelineContext, TableContext is the second parameter. At the moment I have no data that needs to be passed between pipeline policies. Is there a way to create PipelineContext without any Content: C?
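To illustrate the question, here is a minimal sketch; the PipelineContext below is a hypothetical stand-in, not the actual azure_core type. It shows how an empty marker struct (or the unit type) can fill the generic content parameter when no data needs to flow between policies:

// Hypothetical stand-in for a pipeline context generic over user content C;
// not the real azure_core PipelineContext API.
struct PipelineContext<C> {
    contents: C,
}

impl<C> PipelineContext<C> {
    fn new(contents: C) -> Self {
        Self { contents }
    }
}

// An empty marker struct keeps the door open for adding fields later...
#[derive(Debug, Default, Clone, PartialEq, Eq)]
struct TableContext {}

fn main() {
    let _with_marker = PipelineContext::new(TableContext::default());
    // ...while the unit type avoids defining a dedicated type at all.
    let _with_unit: PipelineContext<()> = PipelineContext::new(());
}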
@ShayMusachanov-dev we are actively discussing how to handle pipeline options in #474. The current solution leaves much to be desired as you have noticed here.
I would not fret too much about this now, as it's bound to change soon.
One particular point I think should be discussed is whether we should pass the AuthorizationPolicy down the pipeline instead of embedding it in the policy (as you do here, and as it's currently done in CosmosDB). While the current approach is certainly easier, it prevents the user from changing the authorization for a specific call, if need be.
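To make the trade-off concrete, here is a hedged sketch using hypothetical types (none of these match the crate's actual API) of what carrying authorization in the per-call context could enable:

// Hypothetical types sketching a per-call authorization override;
// not the actual azure_core or table service API.
#[derive(Clone)]
struct AuthorizationToken(String);

struct AuthorizationPolicy {
    // Token embedded at client construction time (the current approach).
    default_token: AuthorizationToken,
}

struct Context {
    // Optionally set by the caller for a single request.
    token_override: Option<AuthorizationToken>,
}

impl AuthorizationPolicy {
    // Prefer a per-call token from the context, falling back to the default.
    fn effective_token<'a>(&'a self, ctx: &'a Context) -> &'a AuthorizationToken {
        ctx.token_override.as_ref().unwrap_or(&self.default_token)
    }
}

fn main() {
    let policy = AuthorizationPolicy {
        default_token: AuthorizationToken("account-key".into()),
    };
    let ctx = Context {
        token_override: Some(AuthorizationToken("sas-token".into())),
    };
    // The override wins for this call only.
    assert_eq!(policy.effective_token(&ctx).0, "sas-token");
}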
use crate::{authorization::AuthorizationToken, table_context::TableContext};

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct AuthorizationPolicy {
I'm unfortunately not familiar with authorization for the storage APIs. Can any of this be shared with the authorization implementation in Cosmos?
I've never worked with Cosmos, so I don't want to say anything based on a quick read. Of course, if there is shared code we should use the same implementation.
Create example for TableClient, start implementing EntityClient, remove old implementation, change the expected status code in create table and entity, implement Insert Entity including Deserialize and Serialize, add 'next_table_name' header value in the response object of ListTable, remove the empty impl of TableContext.
…e review of rylev
…t without implementation.
@ShayMusachanov-dev, thank you for your work on this. Is this something that we want to merge before azure_storage is published? If so, could you resolve conflicts? @rylev, what do you think?
If we resolve conflicts, I think we can try to land this. Thanks for your patience!
@cataggar, I was thrilled to see your comment. I performed a merge from the main branch. The current implementation does not support streaming of entities for a given OData query (pagination using a continuation token). Is there a generic implementation in the core crate that I can use, or guidelines for a similar implementation?
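For reference, here is a minimal, self-contained sketch of the continuation-token pattern in question; the fetch function below is a hypothetical stand-in for a real ListTables call, and only the next_table_name field mirrors the header added in this PR:

// Stand-in response for one page of a ListTables-style query.
struct ListTablesResponse {
    tables: Vec<String>,
    // Continuation token taken from the response headers; None on the last page.
    next_table_name: Option<String>,
}

// Hypothetical stand-in for an HTTP call; a real client would forward the
// token (e.g. as a NextTableName query parameter) to fetch the next page.
fn fetch_page(continuation: Option<&str>) -> ListTablesResponse {
    match continuation {
        None => ListTablesResponse {
            tables: vec!["users".into()],
            next_table_name: Some("videos".into()),
        },
        Some(_) => ListTablesResponse {
            tables: vec!["videos".into()],
            next_table_name: None,
        },
    }
}

fn main() {
    let mut all_tables = Vec::new();
    let mut continuation: Option<String> = None;
    loop {
        let page = fetch_page(continuation.as_deref());
        all_tables.extend(page.tables);
        match page.next_table_name {
            Some(next) => continuation = Some(next), // more pages to fetch
            None => break,                           // last page reached
        }
    }
    println!("{:?}", all_tables);
}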
…n DeleteTableResponse
Just some nits from me. Overall it looks good.
Please note that very large changes like this one are hard to review and land. It would be helpful to try to land smaller changes in the future 😊
#[tokio::main]
async fn main() -> Result<(), Error> {
    // in the follwing example we will interact with the user table. first let's create the table if it doesn't exists;
// in the follwing example we will interact with the user table. first let's create the table if it doesn't exists;
// in the following example we will interact with the user table. first let's create the table if it doesn't exists;
@heaths @MindFlavor if you have the chance to review this as well that would be very helpful! Thank you 🧡
Fix typo and add more console prints to the table example. Remove the comment for the timeout parameter in the operations Option and append the timeout to the request Uri. Add more documentation on each table and entity operation. Update the prelude file and use import statements correctly.
The execution of the test was causing the workflow to fail. As a temporary solution, I commented out the test until I change it to use the mock_transport_framework.
@rylev, I have committed changes according to your comments.
Excellent work, thank you!
I love it ❤️!
I've tested it against both Azure and the emulator and it works.
There are some minor changes that I suggest, but after these I think this is good to go.
@@ -112,6 +118,16 @@ impl Request {
    pub fn clone_body(&self) -> Body {
        self.body.clone()
    }

    /// Get a mutable reference to the request's body.
I think this is not needed. You can use set_body instead.
let md5 = base64::encode(&md5::compute(bytes.as_ref())[..]);
headers.append("Content-MD5", HeaderValue::from_str(md5.as_str()).unwrap());

*request.body_mut() = bytes.into();
*request.body_mut() = bytes.into();
request.set_body(bytes.into());
];

let table_name = "users";
let table_client = TableClient::emulator(TableOptions::default());
I'd rather give the ability to test against Azure as well. In the SDK we have an unwritten convention of passing the storage account and master key via env variables.
Something like:
// Read the storage account and master key from the environment, if present.
let account = std::env::vars()
    .find(|i| i.0 == "STORAGE_ACCOUNT")
    .map(|i| i.1);
let key = std::env::vars()
    .find(|i| i.0 == "STORAGE_MASTER_KEY")
    .map(|i| i.1);

// Build a shared-key token only when both variables are set.
let auth_token = match (account.as_ref(), key) {
    (Some(account), Some(key)) => Some(AuthorizationToken::SharedKeyToken {
        account: account.to_owned(),
        key,
    }),
    _ => None,
};

let table_name = "users";

// Test against Azure when credentials are available; otherwise fall back
// to the emulator.
let table_client = auth_token
    .as_ref()
    .map(|auth_token| {
        TableClient::new(
            account.clone().unwrap(),
            auth_token.clone(),
            TableOptions::default(),
        )
    })
    .unwrap_or_else(|| TableClient::emulator(TableOptions::default()));
partition_key_client: Arc<PartitionKeyClient>,
row_key: String,
url: Url,
table_name: Cow<'static, str>,
I would prefer a simpler String. Table names should be relatively small anyway.
self.partition_key_client.http_client()
/// Creates a new EntityClient for the given table_name.
/// Consuming table_client in the process.
pub fn new<NAME: Into<Cow<'static, str>>>(table_client: TableClient, table_name: NAME) -> Self {
pub fn new<NAME: Into<Cow<'static, str>>>(table_client: TableClient, table_name: NAME) -> Self {
pub fn new(table_client: TableClient, table_name: impl Into<String>) -> Self {
async fn test_insert() {
let table_client = get_emulator_client();
/// Creates an EntityClient for a given table, consuming Self in the process.
pub fn into_entity_client<S: Into<Cow<'static, str>>>(self, table_name: S) -> EntityClient {
pub fn into_entity_client<S: Into<Cow<'static, str>>>(self, table_name: S) -> EntityClient {
pub fn into_entity_client(self, table_name: impl Into<String>) -> EntityClient {
Try to read the storage account name and key from the environment, else create an emulator client. Use the set_body fn instead of body_mut. Replace Cow smart pointer with From<String> in table and entity clients.
Thank you very much! 👍