[Bug]: LDSR is broken after adding SD2 support #5087
Comments
The same issue.
Can confirm here as well. Same error.
Confirmed, the same issue.
Same issue here.
Does not work:
What fixed it for me locally was copying the contents of ...
I did it and it's still not fixed; hope we can get a fix soon.
Same.
Same.
Same issue.
I've created a new PR to repair this functionality. Can someone please give #5216 a test?
Heartbroken. Just want her back, bros.
The trouble is that Stability AI removed all references to VQ from their repo, leaving only KL, and LDSR depends on VQ. My PR will get it working again at the cost of a significant VRAM usage increase. Sometimes I think that maybe we should give LDSR up and put effort into getting the SD 2.0 4x upscaler working instead, seeing as the SD 2.0 4x upscaler is the spiritual successor to LDSR. Having said that, it looks like there's something wrong with the current version of the SD 2.0 4x upscaler, and it has excessive VRAM requirements.
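For anyone who wants to confirm this locally, here is a minimal diagnostic sketch (not part of the webui) that mirrors the attribute lookup `get_obj_from_str` performs in the traceback further down. It assumes the `ldm` package from the checked-out Stability AI repo is importable (e.g. run from the webui root with the repositories folder on `sys.path`):

```python
# Diagnostic sketch (not webui code): check which autoencoder classes the
# bundled ldm package actually ships. On the SD 2.0 repo, the VQ classes
# report "missing" while AutoencoderKL is still present.
import importlib

autoencoder = importlib.import_module("ldm.models.autoencoder")
for name in ("VQModelInterface", "VQModel", "AutoencoderKL"):
    status = "present" if hasattr(autoencoder, name) else "missing"
    print(f"ldm.models.autoencoder.{name}: {status}")
```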
The PR was merged a few hours ago, but on my 2080 Ti (11GB) I can't use even 2x LDSR anymore due to "out of memory".
Same here (3080 Ti).
Same.
LDSR is also broken for me. The code works, but as of a few commits ago I cannot use it due to an OOM error. This used to work fine with --medvram enabled.
Sad day :( Same issue.
The PR is only to make it possible again, so that the next lot of work can be carried out. The next part is to re-implement all the VQ-specific logic that Stability AI took out... and that'll take a while!
Thanks for your effort. I could "make it work" only by scaling to 512x512 😅. Otherwise, it's: (GPU 0; 12.00 GiB total capacity; 10.80 GiB already allocated; 0 bytes free; 10.85 GiB reserved in total by PyTorch)
I can't even do a 512x512 upscale; I'm running out of VRAM on a 3090 Ti. The maximum is 256x256, even with xformers enabled.
There are 3 jobs remaining to get it fully functional (and better than before).
We could do with some help from an actual ML engineer rather than a regular dev with only surface-level understanding like myself, so if you can help, please chip in! I tried 1 & 2 but couldn't get them to work.
I've created PR #5415 to apply point 3 above.
On my setup the VRAM usage has now gone back down to 5GB from 17GB. Can someone give it a test please?
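The exact changes in that PR aren't shown in this thread, but the traceback below already passes a `half_attention` flag into `load_model_from_config`, and the usual way to get this kind of VRAM reduction in PyTorch is to keep the model in half precision and run inference without gradients. A rough, hypothetical sketch of that idea (`load_half` is an illustrative name, not a webui function):

```python
import torch

def load_half(model: torch.nn.Module, device: str = "cuda") -> torch.nn.Module:
    """Illustrative only: fp16 weights take ~2 bytes per parameter instead of 4."""
    model = model.half().to(device)
    model.eval()  # inference only; no optimizer state or gradient buffers
    return model

# At inference time, torch.no_grad() (or torch.inference_mode()) stops PyTorch
# from keeping activations around for a backward pass, which also saves memory.
```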
I also got it to apply Xformers optimization (through ...). Anyone got any ideas?
Just to get in line: same here! GeForce 3090 Ti with 24GB of VRAM, and it still reports out of memory. What's going on?
@wywywywy
Thanks for testing. Is the total time taken roughly the same as it was in the past?
The whole process took about 20 minutes in total. Unfortunately I can't compare with the past, since I've only just started using it and it hadn't worked until now.
I think it's probably about right. Even on my 3090, upscaling a 512x512 by 4x takes a while. The next PR will have optimisations (like Xformers) enabled, and that might help you a bit.
The above PR #5415 has now been merged, so the memory usage should go back down to its previous working level. I've also created a new PR #5586 to further improve it: it allows caching, optimization (e.g. Xformers), and the Channels Last memory format. Please give it a test if you have time. I could not get ...
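For anyone unfamiliar with the Channels Last option mentioned above: it is PyTorch's NHWC memory layout for 4D tensors, which can speed up convolution-heavy models on recent NVIDIA GPUs. A minimal, hypothetical sketch of the general idea (not the PR's actual code; assumes a CUDA GPU):

```python
import torch

# Toy stand-in for a conv-heavy upscaler block, in fp16 on the GPU.
model = torch.nn.Conv2d(3, 64, kernel_size=3, padding=1).cuda().half()
model = model.to(memory_format=torch.channels_last)  # NHWC weight layout

x = torch.randn(1, 3, 512, 512, device="cuda", dtype=torch.half)
x = x.contiguous(memory_format=torch.channels_last)  # match the input layout

with torch.no_grad():
    y = model(x)  # same result numerically; cuDNN can pick faster NHWC kernels
```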
The PR has now been merged. LDSR should now be a viable option again.
Yeah, confirmed. Upscaling to 4x with two levels of LDSR takes a good 6 minutes on my 3090 Ti.
Oh and, yeah, it's working again. |
It works! Thank you so much for fixing it!👍👍 |
I confirm, it works, thanks! |
Is there an existing issue for this?
What happened?
LDSR support is broken in the webui.
Steps to reproduce the problem
What should have happened?
LDSR working.
Commit where the problem happens
b5050ad
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
Microsoft Edge
Command Line Arguments
Additional information, context and logs
Loading model from C:\diffusion\stable-diffusion-webui\models\LDSR\model.ckpt
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 113.62 M params.
Keeping EMAs of 308.
Error completing request
Arguments: (0, 0, <PIL.Image.Image image mode=RGB size=768x768 at 0x1BC7E25BDF0>, None, '', '', True, 0, 0, 0, 2, 512, 512, True, 3, 0, 0, False) {}
Traceback (most recent call last):
File "C:\diffusion\stable-diffusion-webui\modules\ui.py", line 185, in f
res = list(func(*args, **kwargs))
File "C:\diffusion\stable-diffusion-webui\webui.py", line 56, in f
res = func(*args, **kwargs)
File "C:\diffusion\stable-diffusion-webui\modules\extras.py", line 187, in run_extras
image, info = op(image, info)
File "C:\diffusion\stable-diffusion-webui\modules\extras.py", line 148, in run_upscalers_blend
res = upscale(image, *upscale_args)
File "C:\diffusion\stable-diffusion-webui\modules\extras.py", line 116, in upscale
res = upscaler.scaler.upscale(image, resize, upscaler.data_path)
File "C:\diffusion\stable-diffusion-webui\modules\upscaler.py", line 64, in upscale
img = self.do_upscale(img, selected_model)
File "C:\diffusion\stable-diffusion-webui\modules\ldsr_model.py", line 54, in do_upscale
return ldsr.super_resolution(img, ddim_steps, self.scale)
File "C:\diffusion\stable-diffusion-webui\modules\ldsr_model_arch.py", line 87, in super_resolution
model = self.load_model_from_config(half_attention)
File "C:\diffusion\stable-diffusion-webui\modules\ldsr_model_arch.py", line 25, in load_model_from_config
model = instantiate_from_config(config.model)
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 562, in init
self.instantiate_first_stage(first_stage_config)
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 614, in instantiate_first_stage
model = instantiate_from_config(config)
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 87, in get_obj_from_str
return getattr(importlib.import_module(module, package=None), cls)
AttributeError: module 'ldm.models.autoencoder' has no attribute 'VQModelInterface'