
[Bug]: LDSR is broken after adding SD2 support #5087

Closed
dill-shower opened this issue Nov 26, 2022 · 35 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments


dill-shower commented Nov 26, 2022

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

LDSR support is broken in the webui.

Steps to reproduce the problem

  1. Go to the Extras tab
  2. Send any image and use the LDSR upscaler
  3. See the error

What should have happened?

LDSR should work.

Commit where the problem happens

b5050ad

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Microsoft Edge

Command Line Arguments

--xformers

Additional information, context and logs

Loading model from C:\diffusion\stable-diffusion-webui\models\LDSR\model.ckpt
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 113.62 M params.
Keeping EMAs of 308.
Error completing request
Arguments: (0, 0, <PIL.Image.Image image mode=RGB size=768x768 at 0x1BC7E25BDF0>, None, '', '', True, 0, 0, 0, 2, 512, 512, True, 3, 0, 0, False) {}
Traceback (most recent call last):
File "C:\diffusion\stable-diffusion-webui\modules\ui.py", line 185, in f
res = list(func(*args, **kwargs))
File "C:\diffusion\stable-diffusion-webui\webui.py", line 56, in f
res = func(*args, **kwargs)
File "C:\diffusion\stable-diffusion-webui\modules\extras.py", line 187, in run_extras
image, info = op(image, info)
File "C:\diffusion\stable-diffusion-webui\modules\extras.py", line 148, in run_upscalers_blend
res = upscale(image, *upscale_args)
File "C:\diffusion\stable-diffusion-webui\modules\extras.py", line 116, in upscale
res = upscaler.scaler.upscale(image, resize, upscaler.data_path)
File "C:\diffusion\stable-diffusion-webui\modules\upscaler.py", line 64, in upscale
img = self.do_upscale(img, selected_model)
File "C:\diffusion\stable-diffusion-webui\modules\ldsr_model.py", line 54, in do_upscale
return ldsr.super_resolution(img, ddim_steps, self.scale)
File "C:\diffusion\stable-diffusion-webui\modules\ldsr_model_arch.py", line 87, in super_resolution
model = self.load_model_from_config(half_attention)
File "C:\diffusion\stable-diffusion-webui\modules\ldsr_model_arch.py", line 25, in load_model_from_config
model = instantiate_from_config(config.model)
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 562, in init
self.instantiate_first_stage(first_stage_config)
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 614, in instantiate_first_stage
model = instantiate_from_config(config)
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 87, in get_obj_from_str
return getattr(importlib.import_module(module, package=None), cls)
AttributeError: module 'ldm.models.autoencoder' has no attribute 'VQModelInterface'

dill-shower added the bug-report label Nov 26, 2022
@Renaldas111

The same issue.

Contributor

leppie commented Nov 26, 2022

Can confirm here as well. Same error.


ebziw commented Nov 27, 2022

Confirmed, same issue here.


Blavkm commented Nov 28, 2022

Same issue here


mpolsky commented Nov 28, 2022

Does not work:

  • not with the SD upscale script,
  • not with upscaling on the Extras tab.

@DanielWeiner

What fixed it for me locally was copying the contents of repositories/stable-diffusion/ldm/models/autoencoder.py into repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py. Obviously not an ideal solution.
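
For anyone who wants to script that workaround, the equivalent in Python (run from the webui root, same paths as above; note the file will be overwritten whenever the repository is re-fetched, and this is only a stopgap, not a proper fix):

import shutil

# Hypothetical helper for the manual workaround above: overwrite the new repo's
# autoencoder.py with the old one that still defines VQModelInterface.
shutil.copyfile(
    "repositories/stable-diffusion/ldm/models/autoencoder.py",
    "repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py",
)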

@ZeroCool22

What fixed it for me locally was copying the contents of repositories/stable-diffusion/ldm/models/autoencoder.py into repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py. Obviously not an ideal solution.

I did that and it's still not fixed; I hope we can get a fix soon.

Traceback (most recent call last):
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\img2img.py", line 137, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\scripts.py", line 317, in run
    processed = script.run(p, *script_args)
  File "C:\Users\ZeroCool22\Desktop\Auto\scripts\sd_upscale.py", line 39, in run
    img = upscaler.scaler.upscale(init_img, 2, upscaler.data_path)
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\upscaler.py", line 64, in upscale
    img = self.do_upscale(img, selected_model)
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\ldsr_model.py", line 54, in do_upscale
    return ldsr.super_resolution(img, ddim_steps, self.scale)
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\ldsr_model_arch.py", line 87, in super_resolution
    model = self.load_model_from_config(half_attention)
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\ldsr_model_arch.py", line 25, in load_model_from_config
    model = instantiate_from_config(config.model)
  File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 562, in __init__
    self.instantiate_first_stage(first_stage_config)
  File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 614, in instantiate_first_stage
    model = instantiate_from_config(config)
  File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\Users\ZeroCool22\Desktop\Auto\repositories\stable-diffusion-stability-ai\ldm\util.py", line 87, in get_obj_from_str
    return getattr(importlib.import_module(module, package=None), cls)
AttributeError: module 'ldm.models.autoencoder' has no attribute 'VQModelInterface'


KGUY1 commented Nov 29, 2022

same

@aiforpresident

same

@paolodalprato

Same issue

@wywywywy
Contributor

I've created a new PR to repair this functionality. Can someone please give it a test? #5216

@RoyHammerlin

heartbroken. just want her back bros

Contributor

wywywywy commented Dec 1, 2022

The trouble is that Stability AI removed all references to VQ from their repo, leaving only the KL autoencoder, and LDSR depends on VQ.

My PR will get it working again, at the cost of a significant increase in VRAM usage.
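
For context, the missing class is a thin wrapper around VQModel from the old CompVis repo; roughly (reproduced from memory, details may differ), it skipped quantization on encode and made it optional on decode:

# Sketch of the removed class as it appeared in the CompVis latent-diffusion
# repo (from memory; not guaranteed to match the original exactly).
class VQModelInterface(VQModel):
    def __init__(self, embed_dim, *args, **kwargs):
        super().__init__(embed_dim=embed_dim, *args, **kwargs)
        self.embed_dim = embed_dim

    def encode(self, x):
        # return the pre-quantization latents instead of the quantized codes
        h = self.encoder(x)
        h = self.quant_conv(h)
        return h

    def decode(self, h, force_not_quantize=False):
        # optionally run the quantization layer before decoding
        if not force_not_quantize:
            quant, emb_loss, info = self.quantize(h)
        else:
            quant = h
        quant = self.post_quant_conv(quant)
        dec = self.decoder(quant)
        return dec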

Sometimes I think we should give up on LDSR and put the effort into getting the SD 2.0 4x upscaler working instead, seeing as it's the spiritual successor to LDSR.

Having said that, it looks like there's something wrong with the current version of SD 2.0 4x upscaler and it has excessive VRAM requirements.


Ladypoly commented Dec 3, 2022

The PR was merged a few hours ago, but on my 2080 Ti (11GB) I can't even use 2x LDSR anymore due to "out of memory".

@paolodalprato

Same here (3080ti)
RuntimeError: CUDA out of memory. Tried to allocate 288.00 MiB (GPU 0; 12.00 GiB total capacity; 11.05 GiB already allocated; 0 bytes free; 11.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
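
Not a fix, but the allocator hint from that error message can at least be tried by setting the environment variable before the UI starts; a minimal sketch (the 512 value is an arbitrary starting point, tune it per GPU):

import os

# Must be set before torch initializes CUDA (e.g. in the shell or script that
# launches the webui); reduces fragmentation by capping allocator block splits.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"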

@dill-shower
Author

Same here (3080ti) RuntimeError: CUDA out of memory. Tried to allocate 288.00 MiB (GPU 0; 12.00 GiB total capacity; 11.05 GiB already allocated; 0 bytes free; 11.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Same

@Viexi

Viexi commented Dec 3, 2022

LDSR is also broken for me. The code runs, but as of a few commits ago I can't use it due to an OOM error. It used to work fine with --medvram enabled.

@sampanes

sampanes commented Dec 3, 2022

sad day :( same issue

Contributor

wywywywy commented Dec 3, 2022

The PR is only there to make it work again, so that the next round of work can be carried out.

The next part is to re-implement all the VQ-specific logic that Stability AI took out... and that'll take a while!

@websubst

websubst commented Dec 3, 2022

The PR is only there to make it work again, so that the next round of work can be carried out.

The next part is to re-implement all the VQ-specific logic that Stability AI took out... and that'll take a while!

Thanks for your effort. I could "make it work" only by scaling to 512x512 😅. Otherwise, it's:

(GPU 0; 12.00 GiB total capacity; 10.80 GiB already allocated; 0 bytes free; 10.85 GiB reserved in total by PyTorch)

@kalkal11

kalkal11 commented Dec 3, 2022

I can't even do a 512x512 upscale, running out of VRAM on a 3090 Ti.

The maximum is 256x256, even with xformers enabled.

Contributor

wywywywy commented Dec 3, 2022

There are 3 jobs remaining to get it fully functional (and better than before).

  1. Make it work with half precision (see the sketch after this list)
  2. Make it work with optimisations (e.g. Xformers)
  3. Reinstate DDPM from the V1 repo without affecting/breaking anything else. This will make VQ work correctly again, and hence avoid quantizing unless needed (which will in turn make memory usage manageable)
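
For item 1, the rough idea is just casting the loaded LDSR model to fp16 (a minimal sketch, not the actual implementation; some layers may need to stay in fp32, which is exactly the part that needs proper testing):

import torch

def to_half(model):
    # Hypothetical helper: cast weights/buffers to float16 to roughly halve
    # the VRAM taken by the model itself, then move it to the GPU.
    model = model.half().to("cuda")
    model.eval()
    return model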

We could do with some help from an actual ML engineer rather than a regular dev with only a surface-level understanding like myself, so if you can help, please chip in! I tried 1 and 2 but couldn't get them to work.

Contributor

wywywywy commented Dec 4, 2022

I've created PR #5415 to address point 3 above:

Reinstate DDPM from the V1 repo without affecting/breaking anything else. This will make VQ work correctly again, and hence avoid quantizing unless needed (which will in turn make memory usage manageable)

On my setup the VRAM usage has now gone back down to 5 GB from 17 GB. Can someone give it a test, please?

Contributor

wywywywy commented Dec 4, 2022

I also got it to apply the Xformers optimization (through modules.sd_hijack.model_hijack.hijack()), but it made no difference whatsoever to either the it/s or the VRAM usage. Not sure why that is.
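
For reference, this is roughly how the hijack was applied around the upscale call (paraphrased sketch, not the exact code from the PR; model, image, ddim_steps and scale are the LDSR model and arguments from the existing upscale path):

from modules import sd_hijack

# Apply the webui's attention optimizations (xformers etc.) to the LDSR model,
# run the super-resolution pass, then restore the original modules.
sd_hijack.model_hijack.hijack(model)
try:
    result = ldsr.super_resolution(image, ddim_steps, scale)
finally:
    sd_hijack.model_hijack.undo_hijack(model)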

Anyone have any ideas?

@HWiese1980

Just to get in line: same here! GeForce 3090 Ti with 24 GB of VRAM, and it still reports out of memory. What's going on?

@dnl13

dnl13 commented Dec 6, 2022

@wywywywy
I applied your PR #5415 manually and everything seems to work great.
It's a bit slow with LDSR chosen as both upscaler 1 and upscaler 2 and scale by 4, but that's expected with my 2070S 8GB.
Each pass took ~8:20 min with 100 timesteps (5.01s/it).
set COMMANDLINE_ARGS=--api --xformers applied.
Input image size 768x768px.

Contributor

wywywywy commented Dec 6, 2022

Thanks for testing. Is the total time roughly the same as it was in the past?

@dnl13

dnl13 commented Dec 6, 2022

The whole process took about 20 minutes in total. Unfortunately I can't say anything about the past, since I've only just started using it and it hadn't worked until now.

Contributor

wywywywy commented Dec 6, 2022

I think that's probably about right. Even on my 3090, upscaling a 512x512 image by 4x takes a while.

The next PR will have optimisations (like Xformers) enabled, and that might help you a bit.

@wywywywy
Contributor

The above PR #5415 has now been merged, so memory usage should be back to its previous working level.

I've also created a new PR #5586 to further improve it: it adds caching, optimization (e.g. Xformers), and the Channels Last memory format. Please give it a test if you have time.

I could not get --medvram and --lowvram to work because of how different the LDSR model is from the SD models.
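
For anyone curious, the Channels Last part is essentially the standard PyTorch memory-format conversion (illustrative sketch, not the PR's exact code; model and x stand for the LDSR model and its input tensor):

import torch

# NHWC ("channels last") layout can speed up convolutions on recent GPUs;
# the model and its inputs should use the same memory format.
model = model.to(memory_format=torch.channels_last)
x = x.to(memory_format=torch.channels_last)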

@wywywywy
Contributor

The PR has now been merged. LDSR should be a viable option again.

@HWiese1980

Yeah, confirmed. Upscaling to 4x with two levels of LDSR takes a good 6 minutes on my 3090 Ti.

@HWiese1980

Oh and, yeah, it's working again.

@websubst

It works! Thank you so much for fixing it!👍👍

@paolodalprato

I confirm it works, thanks!
Just a note: LDSR 4x now goes from 1024x1024 -> 4000x4000, whereas before it was 4096x4096.
