v0.4.0+ breaks compatibility with Nvidia Deep Learning Containers #892
Comments
Odd, why would the PyTorch API define `torch.backends.mps` in some builds but not others? #849 suggests diffusers is PyTorch 1.13 compatible; does it work if you use the newest version of the NGC container instead?
Odd indeed. Tested this on the newest container as well. Interestingly, the v1.12 docs for `torch.backends.mps` do document the module.
As far as I knew, the check worked across supported torch versions. Feel free to open a PR to make it a bit more robust :)
Ran a few tests:

```shell
# Pull image & run container
docker run -it --platform linux/amd64 nvcr.io/nvidia/pytorch:<version>-py3 bash

# Inside container, install & import diffusers
pip install diffusers
python -c 'import diffusers'
```

My results:
Will open a PR with the above workaround shortly.
Describe the bug
Nvidia's deep learning containers are a popular way to run machine learning workloads on top of Docker.
With diffusers 0.4.0+, I'm unable to import `diffusers` inside of this container because `torch.backends.mps` doesn't exist. Culprit appears to be this line in `src/diffusers/utils/testing_utils.py`:

The torch installation in the container doesn't have MPS, so it raises the following error:
Reproduction
(assuming you have Docker installed & configured)
Start the deep learning container
Inside the container:
Workaround
Adding another condition to the MPS check seems to fix at least the import issue for me:
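As a sketch, the extra condition I have in mind is a `hasattr` guard (assumption: this mirrors the workaround rather than quoting the exact patch, and it is exercised here against stand-in namespaces instead of a real torch install):

```python
import types

def mps_is_available(backends):
    # Short-circuit when the `mps` submodule is missing (older torch
    # builds, e.g. 1.12 in the NGC container) instead of raising.
    return hasattr(backends, "mps") and backends.mps.is_available()

# Stand-in for torch.backends without MPS (assumption for the demo).
old_backends = types.SimpleNamespace()
print(mps_is_available(old_backends))  # → False

# Stand-in for torch.backends where MPS exists but is unavailable.
new_backends = types.SimpleNamespace(
    mps=types.SimpleNamespace(is_available=lambda: False)
)
print(mps_is_available(new_backends))  # → False
```

On a real install the guard would read `hasattr(torch.backends, "mps") and torch.backends.mps.is_available()`, which degrades gracefully to `False` on builds that lack the submodule.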
Happy to contribute this as a PR if appropriate.
System Info
I've only tested this with version `22.04` of the deep learning container from nvidia because it's the latest one that comes with torch==1.12.0.

Output from running `diffusers-cli env` inside the container:

diffusers version: 0.4.0 (also tested with 0.5.1)