A soundness bug in std::fs #32670
These methods are stable; they cannot be marked `unsafe` (almost the entire ecosystem would break). I guess things like …
According to the stability document:
It's the safety holes part that justifies this. This is a safety hole. It's essentially the same as the scoped thread API, only the scoped thread API was removed from Rust before it was stabilized.
I wouldn't consider this a safety hole. The fact that harmless file system operations can change arbitrary memory is unfortunate, but there will always be ways to somehow circumvent Rust's safety guarantees with external "help". The scoped thread API was a process-internal API provided not by the OS but by the Rust standard library alone, and it was only unsafe because of wrong assumptions made while it was designed.
Marking a stable function as unsafe would probably be covered by the quoted wording even if the function was widely used, but that doesn't mean it's a reasonable interpretation in this case. Making opening a file unsafe is so baldly ridiculous that I am asking myself if this is an April Fools' joke. Surely it is possible to get around the symlink thing and avoid opening …
It proposes to make …
You can also use safe …
What's worse, you could call unsafe code from safe code! This can only lead to security holes! Let's require …
The obvious solution is of course not to mark the methods as unsafe, but to drop Linux and Windows support and only support Redox (it's written in Rust so it must be safe) ;)
That won't do. Redox uses … FWIW, I've long harbored thoughts that "safe" Rust does not go far enough. Removing … aw crud. That requires a …
Except that deleting everything is not unsafe, while poking around arbitrarily in memory is.
That is unfortunate, short-sighted corner-cutting.
Won't …
Nope. I get permission denied when I try to …
Since this seems to be that April Fools' thing, and the thing's over, closing. If my conjectures are incorrect, complain 😊
tbh if the code above actually works, maybe it should have an issue for real. I think it may be considered a soundness hole, although it may not be worth fixing. It shouldn't be this issue, though; the silliness will detract from serious discussion.
I guess this is similar to creating a safe wrapper for a C library: even if an unusual input is given and it triggers a bug/memory error in the library, the interface should remain safe.
The example works. On Sat, Apr 2, 2016, 16:06 Marcell Pardavi notifications@github.com wrote: …
I guess I’ll nominate this for libs team to see (since it is technically a way to circumvent memory safety), but I doubt the outcome will be any different. Sorry for the premature close!
It doesn’t really matter IME, because the core of the issue is still here.
I think triggering memory unsafety from outside (using the filesystem, another process, etc.) is not in scope of Rust's safety guarantee.

Imagine implementing a library for controlling a repair robot. It has a long, flexible manipulator which can grab things, solder, connect to pins and do JTAG, etc. Should functions like … be `unsafe`? But one can program the RepairBot to loop the manipulator back to the robot's own back, open the lid, solder to the JTAG pins and then trigger overwriting of memory, hence causing memory unsafety in Rust terms.

A more down-to-earth example: imagine implementing a debugger like … When triggering the unsafety involves some external component like the filesystem, it is out of scope for Rust's safe/unsafe. You can't reliably detect a Strange Loop.
It's not like it's going to be fixed, whatever happens.
It is the responsibility of the one running the program to provide a safe interface to the operating system and hardware. Rust cannot guard against your computer literally blowing itself up because your program executed some booby-trapped instruction.
Are you saying that play.rust-lang.org, not to mention basically every Linux distribution in existence, is misconfigured?
Your program should be in a sandbox denying access to any file it does not need, if you don't want this to happen.
@vks, undefined behaviour, on the other hand, can lead to malware code execution (which, for example, sends private data from …
I do not buy the suggestion that memory unsafety is okay if it's caused by interacting badly with the operating system or specific hardware. Certainly it is unacceptable for a safe Rust library to, say, invoke the …

The point of a safe Rust interface is to offer precisely that: a memory-safe interface, an API that Rust code can use without worry of memory unsafety, in all circumstances, whatever silly mistake the programmer might make. The point is not to assign blame but to isolate the unsafety and fight it, so that the end result is safer, more reliable programs. Therefore, functions that are not marked …

Now, clearly, Rust cannot take care of everything. It can't prevent side-channel attacks. It can't prevent authentication code from containing a logical bug that gives every user full privileges. It can't foresee a hardware error in future generations of CPUs that will cause it to write to address …

If a safe function in the Rust library is called, and it ends up overwriting memory of the running process, then that is a memory safety issue. It doesn't matter one bit if it goes through the OS, specifically, through the mock …
That's cool and all, but there's a reason I closed this bug: this particular safety hole is infeasible to fix. Other "memory-safe" languages (Python, Haskell, Java) share this hole, so there is probably no easy way to fix it, and the hard ways of fixing it would get in the way far more than they help. (Marking the file-open APIs unsafe would just be stupid.)
@notriddle It probably wasn't clear from my rant, but I am standing by my earlier position of "this may very well be not worth fixing". Like you, I am skeptical that it can be reasonably prevented. But several recent comments in this thread seem to veer too far in another direction, sounding like "apologists" for memory unsafety, so to speak. I am strongly of the opinion that leaving such a hole open must come from a pragmatist "We'd like to fix it but it's just not practical" place, not from a "It's hard to trigger and not our fault so whatevs 🤷" direction.
@notriddle: can’t …
@sanmai-NL, what if …? Also imagine an operating system that has a socket analogue of … The main issue is that OSes allow memory introspection, and it should not be considered Rust-unsafe.

An even more intricate example: imagine a robotics library for an electronics repair robot that allows user code to control special manipulators that move around using motors and connect to pins (e.g. JTAG) of various boards around it. But what if we command it to connect to the JTAG of the same board that is controlling the robot ("self-repair mode")? Now we can read/write any memory, including memory mapped into the Rust process. Does that make the motor-controlling or digital pin input/output functions Rust-unsafe?
There are also limitations/defects in common hardware, such as those exploited by https://en.wikipedia.org/wiki/Row_hammer to modify arbitrary memory. Does rowhammer imply that all memory accesses should be considered unsafe? XD
@Error1000 TBH, in that light you can consider any Rust code unsafe. A big chunk of the stdlib depends on libc, and many low-level crates that are pervasive throughout the ecosystem use `unsafe`. I don't think you can write any non-trivial Rust program that does not have `unsafe` in its dependencies.

As for the problem at hand, I don't agree it's just outside the scope of Rust. From an OS perspective, a program is allowed to do pretty much what it wants with its own memory. It's the Rust paradigm to put limitations on that and require the `unsafe` keyword. libc is one interface to the OS which allows a ton of things for which Rust requires `unsafe`, and here it turns out that …

I also don't see a clear argumentation as to why this can't be fixed, but then I'm not a Linux filesystem expert. Sure, … The question then becomes whether it is possible to close this hole and whether the perf overhead in these functions is acceptable. It would also require unsafe equivalents of these functions to allow opening files like …
I think panicking whenever the mem file is opened would be a viable solution to this problem.
@botahamec How would you do that? Note that it's possible to create symlinks, as mentioned in the original issue description.
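To make the symlink point concrete, here is a small Linux-only sketch (the alias name `mem_alias` is made up for illustration): a deny-list on the path string never sees `/proc`, yet opening the alias reaches the very same file.

```rust
use std::fs::OpenOptions;
use std::os::unix::fs::symlink;

fn main() -> std::io::Result<()> {
    let alias = std::env::temp_dir().join("mem_alias"); // hypothetical name for this sketch
    let _ = std::fs::remove_file(&alias); // ignore error if it doesn't exist yet

    // The alias's own path contains no "/proc", so a naive path-string check passes...
    symlink("/proc/self/mem", &alias)?;
    assert!(!alias.to_string_lossy().contains("/proc"));

    // ...yet opening it follows the symlink straight to /proc/self/mem.
    let _f = OpenOptions::new().write(true).open(&alias)?;

    std::fs::remove_file(&alias)?;
    Ok(())
}
```

Any check would therefore have to inspect what was actually opened (e.g. via the resulting file descriptor), not the path string handed to `open`.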
For anyone following along here, I've now posted #97837 to propose documentation for Rust's stance on /proc/self/mem.

@najamelan Comparing inode numbers isn't trivial, because procfs can be mounted in multiple places, and when it is, each instance has its own inode numbers. Also, self/mem isn't the only dangerous file; there's also self/task/<tid>/mem, each with its own inode number, and threads can come and go dynamically. With enough cleverness, it may ultimately be possible to design a system which reliably detects whether a …
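A naive version of such a check (a sketch only; `looks_like_self_mem` is a made-up helper, not a proposed API) might compare device and inode numbers via `MetadataExt`, which illustrates exactly the limitation described above: it only matches the one `/proc/self/mem` instance, not other procfs mounts or the per-thread `task/<tid>/mem` files.

```rust
use std::fs::File;
use std::os::unix::fs::MetadataExt;

// Naive detector sketch: does `f` refer to the same file as /proc/self/mem?
// Incomplete by design: other procfs mounts and /proc/self/task/<tid>/mem
// have different (dev, ino) pairs and would slip past this check.
fn looks_like_self_mem(f: &File) -> std::io::Result<bool> {
    let target = std::fs::metadata("/proc/self/mem")?; // follows the /proc/self symlink
    let meta = f.metadata()?;                          // fstat on the already-open fd
    Ok(meta.dev() == target.dev() && meta.ino() == target.ino())
}

fn main() -> std::io::Result<()> {
    let f = File::open("/proc/self/mem")?; // read-only open is enough to compare identity
    println!("looks like self/mem: {}", looks_like_self_mem(&f)?);
    Ok(())
}
```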
This program writes to arbitrary memory, violating Rust's safety guarantees, despite using no unsafe code:
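The original program is not preserved in this copy of the issue; what follows is a minimal reconstruction of the well-known /proc/self/mem trick in the same spirit (Linux-only, safe Rust throughout; behavior shown assumes an unoptimized build, where the compiler reads `x` back from the stack).

```rust
use std::fs::OpenOptions;
use std::io::{Seek, SeekFrom, Write};

// Overwrite a local variable through the kernel's /proc/self/mem file,
// using no `unsafe` blocks at all.
fn overwrite_local() -> u8 {
    let x: u8 = 0;
    // Taking and casting a raw pointer is safe; only dereferencing is unsafe.
    let addr = &x as *const u8 as u64;

    let mut mem = OpenOptions::new()
        .write(true)
        .open("/proc/self/mem")
        .expect("open /proc/self/mem");
    // Seeking to a virtual address and writing patches our own memory.
    mem.seek(SeekFrom::Start(addr)).expect("seek to address of x");
    mem.write_all(&[42]).expect("write through /proc/self/mem");

    x // observed as 42, even though `x` was never mutated in Rust's eyes
}

fn main() {
    println!("x = {}", overwrite_local());
}
```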
Because the filesystem APIs cannot be made safe (blocking `/proc` paths specifically will not work, because symlinks can be created to it), `File::create`, `File::open`, and `OpenOptions::open` should be marked unsafe. I am working on an RFC for that right now.