std::process::Command no way to handle command-line length limits #40384
See this. While I link the Python documentation here, the idea is just as applicable to Rust. In this case it is more sensible to check for argument length by inspecting the error side of the `Result`.
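For concreteness, a minimal sketch of that idiom under a couple of assumptions: it targets Unix, where an over-long argument list surfaces as `E2BIG` from exec, it relies on the external `libc` crate for the error constant, and `echo` stands in for a real target program.

```rust
// Sketch (not from the thread) of the "inspect the error side of the Result"
// idiom: try the full argument list and split it in half when the OS rejects
// it. Assumes Unix, where the failure surfaces as E2BIG.
use std::process::Command;

fn spawn_all(args: &[String]) -> std::io::Result<()> {
    match Command::new("echo").args(args).status() {
        Ok(_status) => Ok(()),
        Err(e) if e.raw_os_error() == Some(libc::E2BIG) && args.len() > 1 => {
            // Argument list too long: split into two halves and retry each.
            let (left, right) = args.split_at(args.len() / 2);
            spawn_all(left)?;
            spawn_all(right)
        }
        Err(e) => Err(e),
    }
}
```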
@nagisa I'm aware of that idiom and of the "Time of check to time of use" errors that it avoids, but it doesn't really apply in this case:
The only way I can think of to achieve this is …
The command line for Windows is flattened into one big UTF-16 string, the length of which, including the null terminator, is limited to 32,768 UTF-16 code units.
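As a rough illustration of that accounting, the following Windows-only sketch counts UTF-16 code units for a flattened command line; the quoting overhead is approximated (real escaping rules for embedded quotes and backslashes can add more units), so treat it as a lower-bound estimate rather than the exact length the OS will see.

```rust
// Rough, Windows-only estimate of the flattened command line's length in
// UTF-16 code units, to compare against the 32,768-unit limit. Quoting is
// approximated by always adding surrounding quotes.
#[cfg(windows)]
fn estimated_cmdline_units(program: &std::ffi::OsStr, args: &[&std::ffi::OsStr]) -> usize {
    use std::os::windows::ffi::OsStrExt;

    let mut units = program.encode_wide().count() + 2; // surrounding quotes
    for arg in args {
        units += 1 + arg.encode_wide().count() + 2; // separating space + quotes
    }
    units + 1 // trailing NUL terminator
}

#[cfg(windows)]
const CMDLINE_LIMIT_UNITS: usize = 32_768;
```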
Erm, would it be safe to behave like `xargs -n 100 <...>`, that is, defaulting to a batch size of 100 or so arguments, which most OS's can reasonably be expected to handle?
It wouldn't be 100% safe: I could always come up with an unlikely scenario where any number of arguments would result in a too-long command line. More importantly, the lower the `-n` value becomes, the safer it becomes, but at the cost of lower performance. I wouldn't want to guess what the best compromise between performance and safety is: adding a `try_add_argument`-type method means that I could easily be both safe and maximally performant.

Mark
I agree that xargs-style batching needs to be supported somehow. I would be open to considering a well-tested cross-platform implementation of this in a PR.
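One way such batching could be layered on today's API is sketched below; the 64 KiB budget and the one-byte per-argument overhead are conservative placeholders, not values queried from the OS.

```rust
// Sketch of xargs-style batching over std::process::Command: greedily fill a
// batch until a conservative size budget is reached, then spawn one child per
// batch. BUDGET_BYTES is an arbitrary "safe on common platforms" guess.
use std::ffi::OsString;
use std::process::Command;

const BUDGET_BYTES: usize = 64 * 1024;

fn run_batched(program: &str, fixed: &[OsString], items: &[OsString]) -> std::io::Result<()> {
    let base: usize = fixed.iter().map(|a| a.len() + 1).sum();
    let mut batch: Vec<&OsString> = Vec::new();
    let mut used = base;

    for item in items {
        let cost = item.len() + 1; // rough per-argument overhead
        if !batch.is_empty() && used + cost > BUDGET_BYTES {
            Command::new(program).args(fixed).args(batch.drain(..)).status()?;
            used = base;
        }
        used += cost;
        batch.push(item);
    }
    if !batch.is_empty() {
        Command::new(program).args(fixed).args(batch.drain(..)).status()?;
    }
    Ok(())
}
```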
We can start by calculating the size of argv and envp on Unix, and the size of the command line on Windows. Then we can have methods that make use of this is-it-full information.
On Linux at least, you'll need to calculate the length of each string (including the NUL byte) plus the size of the actual arrays (e.g. …).
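A sketch of that Unix-side accounting follows; kernels differ in exactly what they charge against `ARG_MAX`, so this should be read as an approximation rather than a guarantee.

```rust
// Approximate the space an exec on Unix needs for its arguments and
// environment: each string costs its bytes plus a NUL terminator, each
// environment entry is "KEY=VALUE\0", and the argv/envp pointer arrays cost
// one pointer slot per entry plus a terminating null pointer.
use std::ffi::OsStr;
use std::mem::size_of;
use std::os::raw::c_char;
use std::os::unix::ffi::OsStrExt;

fn unix_exec_size(argv: &[&OsStr], envp: &[(&OsStr, &OsStr)]) -> usize {
    let arg_bytes: usize = argv.iter().map(|a| a.as_bytes().len() + 1).sum();
    let env_bytes: usize = envp
        .iter()
        .map(|(k, v)| k.as_bytes().len() + 1 + v.as_bytes().len() + 1) // '=' and NUL
        .sum();
    // Each array carries one extra slot for its terminating null pointer.
    let pointer_bytes = (argv.len() + 1 + envp.len() + 1) * size_of::<*const c_char>();
    arg_bytes + env_bytes + pointer_bytes
}
```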
Cross-referenced Clippy PR, "Run rustfmt on batches of multiple files" (changelog: none): this gives `cargo dev fmt` a nice speed boost, down from 90s (because old) on my laptop and 120s (because Windows) on my desktop to ~5s on both. Batching 250 files at a time was chosen to give Windows a good amount of headroom (it failed at ~800 files, rust-lang/rust#40384). It also adds rustfmt to the toolchain file and has the clippy_dev workflow test using the pinned version, as a follow-up to #7963.
The `arg` method doesn't track the total resulting command-line length and has no way of indicating to clients that the resulting command-line length would exceed the OS's underlying maximum length. This is fine for launching subprocesses with dozens of arguments, but renders it impossible to implement `xargs` or similar functionality.
Can I suggest a new method

`fn try_add_arg<S: AsRef<OsStr>>(&mut self, arg: S) -> Option<S>`

with documentation saying that it's only preferable to `arg` when you want multi-kilobyte command lines? `try_add_arg` would keep track of the number of args already added, and if the number of args or the resulting length would exceed the OS limits, then it ignores the argument and returns it back to the client. Otherwise it acts like the normal `arg` method and returns `None`.