stdenvAdapters.useLibsFrom: consider reverting or changing the interface #283517
Comments
Silly question: why not use the default compiler? I'm asking to get an idea of the intended use case of swapping a C++ implementation out from under the compiler. In theory the implementation could be swapped out. In practice I see a bunch of constraints that will probably require quite a bit of engineering:
I'd say both nuts are tough to crack in …
The intent is to use the old compiler, but omit the old libstdc++ from the closure and from the runpaths. Keeping only the newest libstdc++ at runtime is important for shared libraries, e.g. for native python extensions. There are two use-cases: 1) intermittent breakages like e.g. tensorflow and openvino right now (I didn't scan for more, but historically there have been many) which are usually fixed with a bunch of includes, but it's a burden for the maintainer to find those and prepare the patches; 2) variants of packages that consistently require the older toolchain, e.g. using cuda's compiler driver.
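The two-stdenv setup described above can be sketched roughly as follows. This is a hedged sketch, not a definitive recipe: it assumes the `useLibsFrom = stdenvForLibs: stdenvForCC: ...` argument order, assumes a `pkgs` in scope, and `someNativeExtension` is a made-up placeholder name, not a real attribute:

```nix
# Sketch: compile with the older gcc 12 toolchain, but take the "standard
# libraries" (chiefly libstdc++) from the default stdenv, so that only the
# newest libstdc++ ends up in the closure and in the runpaths.
let
  inherit (pkgs) stdenv gcc12Stdenv stdenvAdapters;
  # libs from the default stdenv, compiler and tools from gcc12Stdenv
  oldCcNewLibs = stdenvAdapters.useLibsFrom stdenv gcc12Stdenv;
in
  # someNativeExtension stands in for e.g. a native python extension
  someNativeExtension.override { stdenv = oldCcNewLibs; }
```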
👍🏻 This is exactly the intended usage. The issue is that if we keep an "adapter", it should do something sensible outside those intended use-cases...
My understanding is that …
I think it's still in early stages and shouldn't have metastasized 😆
From the process standpoint I would say that the most prominent broken users should have been handled as part of the compiler update, and that should be a prerequisite for the merge. Otherwise it makes sense to hold the package back on a previous compiler version (if we had a mechanism for that today).
Blocking nixpkgs-unstable is probably reasonable, but blocking python-updates/staging/master is somewhat unrealistic: it's too many constraints, and way too few maintainers on the python side of things (often just one, if we're honest). Blocking the unstable branches sounds reasonable; I'll check if we do.
To sum up, cuda-stuff requiring old toolchains is no different from tensorflow requiring old toolchains.
Yes. My suggestion is that we introduce a mechanism for that (…)
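For illustration, holding a package back on an older toolchain can already be approximated ad hoc with an override. This is only a sketch: it assumes the package's derivation accepts a `stdenv` argument (true for many packages but not all, and in particular not guaranteed for python packages built with `buildPythonPackage`); `gcc12Stdenv` is an existing nixpkgs attribute:

```nix
# Overlay pinning tensorflow's build to the gcc 12 stdenv. Note this keeps
# the old libstdc++ in the closure and the runpaths, which is exactly what
# useLibsFrom tries to avoid; hence the desire for a first-class mechanism.
final: prev: {
  tensorflow = prev.tensorflow.override { stdenv = prev.gcc12Stdenv; };
}
```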
An alternative question to ask is: is it reasonable for anyone to rely on …
So maybe instead of attempting the generalization of …
@SomeoneSerge raised an excellent point in #281371 (comment):
Is there a way to track this issue as something which should be addressed prior to 24.05?
Issue description
The PR #275947 introduced a new attribute in `stdenvAdapters`, the experimental adapter `useLibsFrom = stdenvForLibs: stdenvForCC: ...`, which constructs a stdenv with the former argument's "standard libraries" (chiefly libstdc++) and the latter's compilers and tools. The intended use case is using older toolchains (`gcc12Stdenv`) to build shared libraries to be loaded in processes that may require a newer libstdc++ (one from the "default" `stdenv`, e.g. gcc13). The particular form of the adapter is the result of a brief exchange on Matrix, not the product of thoughtful consideration or any reasonable review process.

Impossible promises
As explicated in #282356, for this to actually work in non-trivial settings we'd need to be able to compose multiple cc-wrappers (in particular, through their `cc` and `gccForLibs` arguments). This may not actually be trivial to implement, in which case the new adapter is dangerous (it's promising something we couldn't provide).

Leaky interfaces
Additionally, the "adapter" interface is expressed in terms of wrapped values: it doesn't accept e.g. `ccForLibs` or a `cxxStdlib` (a made-up name, meant to refer to either `libcxx` or `gcc.cc.lib`) as is, but only boxed inside a stdenv. The same applies to the `cc-wrapper` itself: it takes two different inputs, `libcxx` and `gccForLibs`, that affect the same aspect of the wrapper and may be specified individually or simultaneously. In the latter case, inferring the priorities requires consulting the documentation.

It may be a revealing symptom that downstream expressions are sometimes tempted to inspect the `cc-wrapper` for their needs, e.g. to infer which implementation of the C++ standard library is being used, or whether the shared libraries' location is expected to be prefixed with the platform's "triple". A more frequent and innocuous need is to infer whether the wrapper contains a gcc or a clang. We should perhaps set (document) explicit boundaries so that `cc-wrapper` may be treated as a black box, and one with a minimal surface. It's likely OK to ask `cc.isClang`. On the other hand, a downstream package trying to directly access the C++ standard library might signal that we need to expose a more constrained tool (possibly `useLibsFrom` is one).

Some possibly motivating examples:
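For instance, a downstream expression inspecting the wrapper might look like the following. This is a sketch: `stdenv.cc.isClang` is an existing cc-wrapper attribute, and the point is that anything beyond such simple flags starts coupling packages to wrapper internals:

```nix
{ stdenv }:
# Asking which compiler family the wrapper carries is probably fine...
if stdenv.cc.isClang
then "clang (typically paired with libc++)"
# ...but inferring standard-library paths or layout from this answer is
# where the abstraction starts to leak.
else "gcc (typically paired with libstdc++)"
```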
Rigid interfaces
The current interface might be too rigid. Example: the current form of `useLibsFrom` only allows specifying which library to use at both build and run time. We couldn't express e.g. a requirement like "use the oldest available libc/libstdc++ with a slightly less old gcc, but then put the newest possible libstdc++ in the runpaths". I don't know if this is a valid use case (why not also use the oldest possible gcc?), but if it were, we'd fail to accommodate it.

Notify the maintainers
I've no experience with the wrapper or with stdenv, so I'm hoping for some advice here. @RaitoBezarius had dropped some comments on the matter before, even though this probably isn't his main area of interest. I understand @Ericson2314, @trofi, @aszlig, and @a-n-n-a-l-e-e have been deeply involved with these subsystems; if any of you have the time? Naturally, ideas from @amjoseph-nixpkgs would be very welcome.