As of #2418, uploading an image is done by POSTing the whole image in 512 KiB chunks, keyed by offset. That's done in the CLI in oxidecomputer/oxide.rs#109. I am working on doing something similar in the web console: oxidecomputer/console#1453.
The responsibility for doing the chunking and making all those calls is offloaded to the client. While making 500 or 1000 network round trips from the browser or CLI is not ideal, this is a reasonable strategy and we have gone for make-the-client-do-it in other contexts as well.
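To make the current strategy concrete, here's a minimal sketch of the client-side chunking. The `upload_chunk` function is a hypothetical stand-in for the real bulk-write POST; the 512 KiB chunk size is the one described above.

```rust
// Sketch of the client-side upload strategy: split the image into
// 512 KiB chunks and send each one keyed by its byte offset.
const CHUNK_SIZE: usize = 512 * 1024;

// Hypothetical stand-in for the real bulk-write API call. In the CLI
// and console this is a POST carrying the offset and the chunk data.
fn upload_chunk(offset: usize, chunk: &[u8]) {
    println!("POST offset={} len={}", offset, chunk.len());
}

// Walks the image in CHUNK_SIZE pieces and returns the number of
// requests made -- i.e., the number of network round trips.
fn upload_image(image: &[u8]) -> usize {
    let mut requests = 0;
    for (i, chunk) in image.chunks(CHUNK_SIZE).enumerate() {
        upload_chunk(i * CHUNK_SIZE, chunk);
        requests += 1;
    }
    requests
}

fn main() {
    // 3 full chunks plus a 100-byte tail: 4 round trips.
    let image = vec![0u8; 3 * CHUNK_SIZE + 100];
    assert_eq!(upload_image(&image), 4);
}
```

A 1 GiB image at 512 KiB per chunk works out to 2048 round trips, which is where the "500 or 1000 network round trips" concern comes from.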
However, the `StreamingBody` extractor added in oxidecomputer/dropshot#617 seems like a really good fit for this. It takes a single streaming body and does precisely the chunking that we're doing on the client side. The client would make a single streaming request with the file, and Nexus could loop through the chunks and call `disk_manual_import` on each. See the `BufList` chunking example in Dropshot.
```rust
/// Bulk write some bytes into a disk that's in state ImportingFromBulkWrites
pub async fn disk_manual_import(
```
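The server-side loop would look roughly like the following. This is a simplified, synchronous sketch, not dropshot's actual `StreamingBody` API: `import_chunk` is a hypothetical stand-in for `disk_manual_import`, which in reality is async and fallible.

```rust
// Hypothetical stand-in for disk_manual_import: write one chunk at
// the given byte offset.
fn import_chunk(offset: u64, chunk: &[u8]) {
    println!("import offset={} len={}", offset, chunk.len());
}

// Sketch of the handler loop: consume the incoming chunks in order,
// tracking a running offset so the client no longer needs to key each
// request by offset itself. Returns the total number of bytes imported.
fn import_stream<I>(chunks: I) -> u64
where
    I: IntoIterator<Item = Vec<u8>>,
{
    let mut offset = 0u64;
    for chunk in chunks {
        import_chunk(offset, &chunk);
        offset += chunk.len() as u64;
    }
    offset
}

fn main() {
    let total = import_stream(vec![vec![0u8; 512 * 1024], vec![0u8; 100]]);
    assert_eq!(total, 512 * 1024 + 100);
}
```

The key point is that the offset bookkeeping moves from the client into Nexus, and the whole upload becomes one request instead of hundreds.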
## Obstacles

### Max request size
Streaming bodies are subject to our globally configured max request body size, which will be far too small for images, whose sizes you can expect to be measured in GiB. @sunshowers has oxidecomputer/dropshot#618 to allow per-endpoint configuration of this maximum, but hasn't been able to get it over the line due to more urgent work. Bumping the global max to a huge number is not a great interim solution.
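For scale, the interim workaround would mean something like the following in the dropshot server config (treating the exact key name as approximate; dropshot's cap has been a single server-wide setting, `request_body_max_bytes`):

```toml
# Raising the global request body cap so a multi-GiB streaming image
# upload fits. Note this raises the cap for *every* endpoint, which is
# why it's not a great interim solution.
request_body_max_bytes = 4294967296  # 4 GiB
```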
### This is new work and we already have something that works
I don't want to push this too hard for that reason. I just want to record my findings and have a good plan for follow-up work. I'm going to start by implementing the console side using the many POSTs and we'll see how it goes.
Bad news! Browser support for streaming request bodies with `fetch` is not there in Firefox or Safari and doesn't appear to be coming soon. I'd say that more or less rules this idea out for the console. Welp. We could still add it alongside the existing endpoint for use in the CLI, but at least in the short term there's no way it's a priority to do that, because it duplicates an existing working thing.