Update FlowMatchEulerDiscreteScheduler with new design to support SD3 / SD3.5 / Flux moving forward #10511

Open · 1 of 2 tasks

ukaprch opened this issue Jan 9, 2025 · 2 comments
Labels: stale (Issues that haven't received updates)

ukaprch commented Jan 9, 2025

Model/Pipeline/Scheduler description

Fully support SD3 / SD3.5 / FLUX models with a new scheduler design template that takes a more standardized approach, including new parameters for these models. The same approach can be applied to existing schedulers to support flow match models.
I have a working FlowMatchEulerDiscreteScheduler in my local library to support this request.

The following issues and PRs (and there may be more) may be fully or partially addressed by this request:

#10001
#9982
#10146
#9955
#9951
#9924 <== This current issue

Open source status

  • The model implementation is available.
  • The model weights are available (Only relevant if addition is not a scheduler).

Provide useful links for the implementation

@yiyixuxu @linjiapro @vladmandic @hlky

hlky (Collaborator) commented Jan 9, 2025

Hi @ukaprch. Thanks for your interest in Flow Match scheduling support. A scheduling refactor is in progress: #10146. I've written an overview of the preliminary design below; see #10146 (comment) for more code samples and output examples. I'm working on adding more schedulers and refining the design. There is a lot to consider in the design; we also need to handle deprecation gracefully and minimize downstream effects. We know this is highly anticipated and we appreciate your patience while we work on it 🤗

Scheduling refactor

Motivation

  • We have limited support for Flow Match because each scheduler needs a separate Flow Match variant
    • Likewise, we have limited support for EDM
  • Noise schedules like Karras and beta are implemented on a per-scheduler basis, so support is limited
  • Flow Match schedulers have to receive the model-specific base schedule externally, passed in through sigmas (see the sketch below)
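
To illustrate the last point: with the current API, the pipeline has to build the base schedule itself and pass it in. A minimal sketch of today's workaround, using the existing FlowMatchEulerDiscreteScheduler:

import numpy as np
from diffusers import FlowMatchEulerDiscreteScheduler

scheduler = FlowMatchEulerDiscreteScheduler(shift=3.0, use_dynamic_shifting=True)

# The pipeline, not the scheduler, owns the model-specific base schedule today
num_inference_steps = 28
sigmas = np.linspace(1.0, 1 / num_inference_steps, num_inference_steps)
mu = 1.15  # resolution-dependent shift, normally computed by the pipeline
scheduler.set_timesteps(sigmas=sigmas, mu=mu)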

Design

The new design aims to eliminate variants, improve support coverage and allow easier customization/experimentation.

We accomplish this by:

  • Identifying schedule types, e.g. Beta and Flow Match, that group the existing code related to each

Beta

Creates a BetaSchedule of type "linear", "scaled_linear", etc., which creates tensors betas, alphas, and alphas_cumprod with shape (num_train_timesteps,). timesteps are created according to timestep_spacing ("linspace", "leading", or "trailing"), and sigmas are created according to interpolation_type ("linear" or "log_linear").

BetaSchedule(
    beta_end=0.012,
    beta_schedule="scaled_linear",
    beta_start=0.00085,
    timestep_spacing="leading",
)
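
For illustration only, a BetaSchedule along these lines might look like the sketch below; the names and exact behavior are assumptions based on the description above, not the implementation in #10146:

import numpy as np
import torch

class BetaSchedule:
    def __init__(
        self,
        beta_start: float = 0.00085,
        beta_end: float = 0.012,
        beta_schedule: str = "scaled_linear",
        timestep_spacing: str = "leading",
        num_train_timesteps: int = 1000,
    ):
        # betas / alphas / alphas_cumprod all have shape (num_train_timesteps,)
        if beta_schedule == "linear":
            self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps)
        elif beta_schedule == "scaled_linear":
            self.betas = torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps) ** 2
        else:
            raise ValueError(f"Unsupported beta_schedule: {beta_schedule}")
        self.alphas = 1.0 - self.betas
        self.alphas_cumprod = torch.cumprod(self.alphas, dim=0)
        self.timestep_spacing = timestep_spacing
        self.num_train_timesteps = num_train_timesteps

    def __call__(self, num_inference_steps: int, **kwargs):
        # timesteps follow timestep_spacing; "leading" and "linspace" shown here
        if self.timestep_spacing == "leading":
            step_ratio = self.num_train_timesteps // num_inference_steps
            timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].astype(np.int64)
        else:
            timesteps = np.linspace(0, self.num_train_timesteps - 1, num_inference_steps)[::-1].round().astype(np.int64)
        # "linear" interpolation_type: index the training sigmas at the chosen timesteps
        sigmas = ((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5
        sigmas = sigmas.numpy()[timesteps]
        return sigmas, timesteps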

Flow Match

Creates a FlowMatchSchedule, which creates a tensor sigmas with shape (num_inference_steps,) according to the base_schedule, then applies shifting with the shift value or dynamic shifting with the mu value.

flow_schedule = FlowMatchSchedule(
    shift=13.0,
    use_dynamic_shifting=False,
    base_schedule=FlowMatchSD3(),
)

A custom base schedule is just a callable returning sigmas, for example:

import numpy as np

class FlowMatchCustom:
    def __call__(self, num_inference_steps: int, **kwargs) -> np.ndarray:
        """
        Flux-style base schedule with extra shifting on the end split.
        """
        sigmas = np.linspace(1.0, 1 / num_inference_steps, num_inference_steps)
        half = num_inference_steps // 2
        # Scale the second half of the schedule
        sigmas[half:] = sigmas[half:] * 1.2
        return sigmas

flow_schedule = FlowMatchSchedule(
    use_dynamic_shifting=True,
    base_schedule=FlowMatchCustom(),
)
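
For reference, the shifting step itself could look roughly like the following; the static and dynamic shift formulas are the ones already used by FlowMatchEulerDiscreteScheduler, while the class shape is an assumption:

import math
import numpy as np

class FlowMatchSchedule:
    def __init__(self, base_schedule, shift: float = 1.0, use_dynamic_shifting: bool = False):
        self.base_schedule = base_schedule
        self.shift = shift
        self.use_dynamic_shifting = use_dynamic_shifting

    def __call__(self, num_inference_steps: int, mu: float = None, **kwargs):
        # Base schedule produces sigmas with shape (num_inference_steps,)
        sigmas = self.base_schedule(num_inference_steps, **kwargs)
        if self.use_dynamic_shifting:
            # Resolution-dependent time shift, parameterized by mu
            sigmas = math.exp(mu) / (math.exp(mu) + (1 / sigmas - 1))
        else:
            # Static shift with a fixed shift value
            sigmas = self.shift * sigmas / (1 + (self.shift - 1) * sigmas)
        return sigmas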

Custom

class CustomSchedule:
    scale_model_input = False

    def __init__(
        self,
        ...
        **kwargs,
    ):
        ...

    def __call__(
        self,
        num_inference_steps: int = None,
        device: Union[str, torch.device] = None,
        timesteps: Optional[List[int]] = None,
        sigmas: Optional[List[float]] = None,
        sigma_schedule: Optional[Union[KarrasSigmas, ExponentialSigmas, BetaSigmas]] = None,
        ...,
        **kwargs,
    ):
        ...

        return sigmas, timesteps
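
To make the contract concrete, here is a hypothetical schedule implementing that protocol (all names illustrative):

import numpy as np
import torch

class LinearSchedule:
    scale_model_input = False

    def __call__(
        self,
        num_inference_steps: int = None,
        device=None,
        sigmas=None,
        sigma_schedule=None,
        **kwargs,
    ):
        if sigmas is None:
            sigmas = np.linspace(1.0, 1 / num_inference_steps, num_inference_steps)
        if sigma_schedule is not None:
            # Optionally re-space with e.g. KarrasSigmas or ExponentialSigmas
            sigmas = sigma_schedule(torch.from_numpy(sigmas))
        timesteps = sigmas * 1000  # map sigmas back onto training timesteps
        return sigmas, timesteps
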
  • Separating sigma schedules, e.g. Exponential and Karras, so they can be used with any schedule type and customized easily

Karras

import numpy as np
import torch
from typing import Optional

class KarrasSigmas:
    """Karras et al. (2022) rho-spaced sigma schedule."""

    def __init__(
        self,
        sigma_min: Optional[float] = None,
        sigma_max: Optional[float] = None,
        rho: float = 7.0,
        **kwargs,
    ):
        self.sigma_min = sigma_min
        self.sigma_max = sigma_max
        self.rho = rho

    def __call__(self, in_sigmas: torch.Tensor, **kwargs):
        # Default sigma_min / sigma_max to the range of the incoming schedule
        sigma_min = self.sigma_min
        if sigma_min is None:
            sigma_min = in_sigmas[-1].item()
        sigma_max = self.sigma_max
        if sigma_max is None:
            sigma_max = in_sigmas[0].item()

        num_inference_steps = len(in_sigmas)

        # Interpolate linearly in inverse-rho space, then map back with ** rho
        rho = self.rho
        ramp = np.linspace(0, 1, num_inference_steps)
        min_inv_rho = sigma_min ** (1 / rho)
        max_inv_rho = sigma_max ** (1 / rho)
        sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
        return sigmas
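
Used standalone, it simply re-spaces an existing sigma schedule:

import torch

karras = KarrasSigmas(rho=7.0)
base_sigmas = torch.linspace(14.6, 0.03, 30)  # e.g. sigmas produced by a BetaSchedule
karras_sigmas = karras(base_sigmas)           # same range, Karras rho spacing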

Custom

import torch

class ShiftSigmas:
    def __init__(
        self,
        shift: float,
        **kwargs,
    ):
        self.shift = shift

    def __call__(self, in_sigmas: torch.Tensor, **kwargs):
        # Uniformly scale the incoming sigma schedule
        sigmas = in_sigmas * self.shift
        return sigmas
  • Simplifying the existing scheduling_* modules, e.g. scheduling_euler_discrete, by moving common code into a Mixin. EulerDiscreteScheduler etc. will then work with any schedule type and sigma schedule; the per-scheduler code that remains is mainly the sampling method. For example:

euler = EulerDiscreteScheduler(
    schedule_config=BetaSchedule(
        beta_end=0.012,
        beta_schedule="scaled_linear",
        beta_start=0.00085,
        timestep_spacing="leading",
    ),
    sigma_schedule_config=KarrasSigmas(),
)

flow_match_euler = EulerDiscreteScheduler(
    schedule_config=FlowMatchSchedule(
        shift=13.0,
        use_dynamic_shifting=False,
        base_schedule=FlowMatchSD3(),
    ),
    sigma_schedule_config=None,
)
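
Downstream, swapping the composed scheduler into a pipeline should then follow the familiar pattern; a sketch, assuming the preliminary constructor arguments above land as shown:

from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3.5-large")
pipe.scheduler = flow_match_euler  # the Flow Match Euler scheduler composed above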

github-actions bot commented Feb 8, 2025

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.
