
[Performance 1/6] use_checkpoint = False #15803

Merged · 3 commits · Jun 8, 2024
2 changes: 1 addition & 1 deletion configs/alt-diffusion-inference.yaml
@@ -40,7 +40,7 @@ model:
 use_spatial_transformer: True
 transformer_depth: 1
 context_dim: 768
-use_checkpoint: True
+use_checkpoint: False
 legacy: False

 first_stage_config:
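For context (not part of the diff): use_checkpoint is passed through to the UNet blocks, which wrap their forward passes in ldm's checkpoint helper. A rough sketch of that helper's behaviour, as an assumption based on upstream latent-diffusion rather than code changed here:

# Rough sketch of ldm.modules.diffusionmodules.util.checkpoint (assumption: matches the
# upstream latent-diffusion helper that consumes the use_checkpoint flag from these configs).
# When flag is False, the wrapped forward runs directly and nothing is recomputed in the
# backward pass, which is all inference needs; flag=True only pays off while training.
def checkpoint(func, inputs, params, flag):
    if flag:
        # Training path: run func under a custom autograd Function that re-runs the
        # forward during backward to trade compute for memory (omitted in this sketch).
        raise NotImplementedError("gradient-checkpointing path omitted in this sketch")
    return func(*inputs)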
2 changes: 1 addition & 1 deletion configs/alt-diffusion-m18-inference.yaml
@@ -41,7 +41,7 @@ model:
 use_linear_in_transformer: True
 transformer_depth: 1
 context_dim: 1024
-use_checkpoint: True
+use_checkpoint: False
 legacy: False

 first_stage_config:
2 changes: 1 addition & 1 deletion configs/instruct-pix2pix.yaml
@@ -45,7 +45,7 @@ model:
 use_spatial_transformer: True
 transformer_depth: 1
 context_dim: 768
-use_checkpoint: True
+use_checkpoint: False
 legacy: False

 first_stage_config:
2 changes: 1 addition & 1 deletion configs/sd_xl_inpaint.yaml
@@ -21,7 +21,7 @@ model:
 params:
 adm_in_channels: 2816
 num_classes: sequential
-use_checkpoint: True
+use_checkpoint: False
 in_channels: 9
 out_channels: 4
 model_channels: 320
2 changes: 1 addition & 1 deletion configs/v1-inference.yaml
@@ -40,7 +40,7 @@ model:
 use_spatial_transformer: True
 transformer_depth: 1
 context_dim: 768
-use_checkpoint: True
+use_checkpoint: False
 legacy: False

 first_stage_config:
2 changes: 1 addition & 1 deletion configs/v1-inpainting-inference.yaml
@@ -40,7 +40,7 @@ model:
 use_spatial_transformer: True
 transformer_depth: 1
 context_dim: 768
-use_checkpoint: True
+use_checkpoint: False
 legacy: False

 first_stage_config:
9 changes: 6 additions & 3 deletions modules/sd_hijack_checkpoint.py
@@ -4,16 +4,19 @@
 import ldm.modules.diffusionmodules.openaimodel


+# Setting flag=False so that torch skips checking parameters.
+# parameters checking is expensive in frequent operations.
+
 def BasicTransformerBlock_forward(self, x, context=None):
-    return checkpoint(self._forward, x, context)
+    return checkpoint(self._forward, x, context, flag=False)
Contributor:
The checkpoint here is torch.utils.checkpoint.checkpoint, which does not take a flag argument. I think you confused it with ldm.modules.diffusionmodules.util.checkpoint. sd_hijack_checkpoint.py already removes the checkpointing in ldm, but we might need to do the same for sgm as well.
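For reference, a small sketch of the API difference this comment points at; the tensor shapes and the _forward body below are placeholders, not code from this PR, and use_reentrant assumes a recent torch version:

# torch.utils.checkpoint.checkpoint takes (function, *args) plus keywords such as
# use_reentrant; it has no flag argument, so flag=False is not a way to turn it off.
# ldm.modules.diffusionmodules.util.checkpoint, by contrast, is checkpoint(func, inputs,
# params, flag) and simply calls func(*inputs) when flag is falsy.
import torch
from torch.utils.checkpoint import checkpoint as torch_checkpoint

def _forward(x, context):
    # Placeholder standing in for BasicTransformerBlock._forward.
    return x + context

x = torch.randn(2, 4, requires_grad=True)
context = torch.randn(2, 4)

out = torch_checkpoint(_forward, x, context, use_reentrant=False)  # valid torch call
out.sum().backward()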

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

As far as I understand, what you are looking for is actually:

ldm.modules.attention.BasicTransformerBlock.forward = ldm.modules.attention.BasicTransformerBlock._forward
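If that is the intent, the same pattern would presumably cover the other two forwards this file hijacks; a sketch of the suggestion, not code from the PR:

import ldm.modules.attention
import ldm.modules.diffusionmodules.openaimodel

# Bypass ldm's checkpoint wrapper entirely by pointing forward at the un-checkpointed
# _forward methods; no flag argument is needed because checkpoint() is never called.
ldm.modules.attention.BasicTransformerBlock.forward = ldm.modules.attention.BasicTransformerBlock._forward
ldm.modules.diffusionmodules.openaimodel.AttentionBlock.forward = ldm.modules.diffusionmodules.openaimodel.AttentionBlock._forward
ldm.modules.diffusionmodules.openaimodel.ResBlock.forward = ldm.modules.diffusionmodules.openaimodel.ResBlock._forward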

Contributor:

A closer look indicates that the checkpoint here is only called when training occurs (textual_inversion & hypernetwork), so disabling checkpointing here may be undesirable.
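One hypothetical way to respect that, sketched here as an assumption rather than what the PR does: install the checkpointed forwards only for the duration of training and restore the plain ones afterwards.

import ldm.modules.attention

_original_forward = None

def enable_checkpointing_for_training():
    # Swap in the checkpointed forward (BasicTransformerBlock_forward defined above in
    # this file) only while textual inversion / hypernetwork training is running.
    global _original_forward
    if _original_forward is None:
        _original_forward = ldm.modules.attention.BasicTransformerBlock.forward
        ldm.modules.attention.BasicTransformerBlock.forward = BasicTransformerBlock_forward

def disable_checkpointing_after_training():
    # Restore the original forward so inference never pays the checkpointing cost.
    global _original_forward
    if _original_forward is not None:
        ldm.modules.attention.BasicTransformerBlock.forward = _original_forward
        _original_forward = None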



 def AttentionBlock_forward(self, x):
-    return checkpoint(self._forward, x)
+    return checkpoint(self._forward, x, flag=False)


 def ResBlock_forward(self, x, emb):
-    return checkpoint(self._forward, x, emb)
+    return checkpoint(self._forward, x, emb, flag=False)


stored = []
8 changes: 8 additions & 0 deletions modules/sd_models.py
@@ -551,6 +551,14 @@ def repair_config(sd_config):
         karlo_path = os.path.join(paths.models_path, 'karlo')
         sd_config.model.params.noise_aug_config.params.clip_stats_path = sd_config.model.params.noise_aug_config.params.clip_stats_path.replace("checkpoints/karlo_models", karlo_path)

+    # Do not use checkpoint for inference.
+    # This helps prevent extra performance overhead on checking parameters.
+    # The perf overhead is about 100ms/it on 4090 for SDXL.
+    if hasattr(sd_config.model.params, "network_config"):
+        sd_config.model.params.network_config.params.use_checkpoint = False
+    if hasattr(sd_config.model.params, "unet_config"):
+        sd_config.model.params.unet_config.params.use_checkpoint = False
+

 def rescale_zero_terminal_snr_abar(alphas_cumprod):
     alphas_bar_sqrt = alphas_cumprod.sqrt()
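A minimal sketch of how this override behaves, assuming sd_config is the OmegaConf object loaded from one of the yaml files above; the inline config here is a stand-in, not a real model config:

from omegaconf import OmegaConf

# Stand-in for a loaded inference config; real configs come from OmegaConf.load(<yaml path>).
sd_config = OmegaConf.create({
    "model": {"params": {"unet_config": {"params": {"use_checkpoint": True}}}}
})

# SDXL-style configs expose network_config, SD1/SD2-style configs expose unet_config;
# whichever is present gets use_checkpoint forced off before the model is instantiated.
if hasattr(sd_config.model.params, "network_config"):
    sd_config.model.params.network_config.params.use_checkpoint = False
if hasattr(sd_config.model.params, "unet_config"):
    sd_config.model.params.unet_config.params.use_checkpoint = False

print(sd_config.model.params.unet_config.params.use_checkpoint)  # False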
2 changes: 1 addition & 1 deletion modules/sd_models_config.py
@@ -35,7 +35,7 @@ def is_using_v_parameterization_for_sd2(state_dict):

     with sd_disable_initialization.DisableInitialization():
         unet = ldm.modules.diffusionmodules.openaimodel.UNetModel(
-            use_checkpoint=True,
+            use_checkpoint=False,
             use_fp16=False,
             image_size=32,
             in_channels=4,