Enable PyTorch's FakeTensorMode for EulerDiscreteScheduler scheduler #7151

Conversation

thiagocrepaldi (Contributor) commented on Feb 29, 2024:

PyTorch's FakeTensorMode does not support `.numpy()` or `numpy.array()` calls.

This PR replaces the `sigmas` NumPy array with an equivalent PyTorch tensor.

Repro:

```python
import torch
from diffusers import DiffusionPipeline

# ONNXTorchPatcher and model_name are assumed to be defined by the caller.
with torch._subclasses.FakeTensorMode() as fake_mode, ONNXTorchPatcher():
    fake_model = DiffusionPipeline.from_pretrained(model_name, low_cpu_mem_usage=False)
```

This would otherwise fail with `RuntimeError: .numpy() is not supported for tensor subclasses.`
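For context, the underlying limitation reproduces without diffusers at all; a minimal sketch (the tensor shape is arbitrary):

```python
import torch
from torch._subclasses import FakeTensorMode

with FakeTensorMode():
    t = torch.ones(4)  # a FakeTensor: metadata only, no real storage
    t.numpy()          # RuntimeError: .numpy() is not supported for tensor subclasses.
```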

Fixes #7152

thiagocrepaldi (Contributor, Author) commented:
@hlky @patil-suraj @anton-l

thiagocrepaldi changed the title from "Enable FakeTensorMode for EulerDiscreteScheduler scheduler" to "Enable PyTorch's FakeTensorMode for EulerDiscreteScheduler scheduler" on Feb 29, 2024.
hlky (Collaborator) commented on Feb 29, 2024:

`torch.tensor` here introduces a warning: `UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).`

`self.alphas_cumprod` is already a `torch.Tensor`, and already in float32, so `(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5).flip(0)` will be sufficient.

This fix could also be applied to other schedulers.
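For illustration (not the PR's actual diff), a minimal sketch contrasting the two variants, using a made-up `alphas_cumprod` stand-in:

```python
import torch

alphas_cumprod = torch.linspace(0.9999, 0.01, 1000, dtype=torch.float32)  # stand-in schedule

# torch.tensor(...) on an existing tensor copies it and emits the
# copy-construct UserWarning quoted above.
sigmas_warn = torch.tensor(((1 - alphas_cumprod) / alphas_cumprod) ** 0.5).flip(0)

# hlky's suggestion: stay in torch end to end, with no copy and no warning.
sigmas = (((1 - alphas_cumprod) / alphas_cumprod) ** 0.5).flip(0)

assert torch.equal(sigmas, sigmas_warn)
```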

thiagocrepaldi force-pushed the thiagofc/enable-fake-tensor-EulerDiscreteScheduler branch from 1228ebe to 4ef59ed on February 29, 2024.
yiyixuxu (Collaborator) commented on Mar 1, 2024:

@thiagocrepaldi
I think @hlky's recommendation here makes sense, no?

yiyixuxu added the ONNX label on Mar 1, 2024.
HuggingFaceDocBuilderDev (bot) commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

thiagocrepaldi (Contributor, Author) commented:
> @thiagocrepaldi I think @hlky's recommendation here makes sense, no?

yes, will do now. thanks

thiagocrepaldi (Contributor, Author) commented:
For other schedulers, we need a way to work around the absence of `torch.interp` to fix lines such as `sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas)`.

Any ideas?

Maybe use the following snippet instead? (from the link above)

```python
import torch

def interpolate(x: torch.Tensor, xp: torch.Tensor, fp: torch.Tensor) -> torch.Tensor:
    """One-dimensional linear interpolation for monotonically increasing sample
    points, batched along the first dimension.

    Returns the one-dimensional piecewise linear interpolant to a function with
    given discrete data points :math:`(xp, fp)`, evaluated at :math:`x`.

    Args:
        x: the :math:`x`-coordinates at which to evaluate the interpolated
            values, shape `(batch, n)`.
        xp: the :math:`x`-coordinates of the data points, must be increasing,
            shape `(batch, m)`.
        fp: the :math:`y`-coordinates of the data points, same shape as `xp`.

    Returns:
        the interpolated values, same size as `x`.
    """
    # Slope and intercept of each segment between consecutive data points.
    m = (fp[:, 1:] - fp[:, :-1]) / (xp[:, 1:] - xp[:, :-1])
    b = fp[:, :-1] - m.mul(xp[:, :-1])

    # For each query point, count how many xp values it is >= to locate its
    # segment, then clamp so out-of-range points reuse the edge segments.
    indices = torch.sum(torch.ge(x[:, :, None], xp[:, None, :]), -1) - 1
    indices = torch.clamp(indices, 0, m.shape[-1] - 1)

    # Row index of each query point within its batch row.
    line_idx = torch.arange(indices.shape[0], device=indices.device)[:, None]
    line_idx = line_idx.expand(indices.shape)
    return m[line_idx, indices].mul(x) + b[line_idx, indices]
```
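A usage sketch, assuming the `interpolate` above is in scope and mirroring the quoted `np.interp` call (the values are made up; note the leading batch dimension):

```python
import torch

sigmas = torch.linspace(0.1, 10.0, 20)         # stand-in sigma schedule
xp = torch.arange(20, dtype=torch.float32)     # np.arange(0, len(sigmas))
timesteps = torch.linspace(0.0, 19.0, 8)       # query points

out = interpolate(timesteps[None, :], xp[None, :], sigmas[None, :])[0]
# For in-range query points this matches
# np.interp(timesteps, np.arange(0, len(sigmas)), sigmas).
```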

thiagocrepaldi force-pushed the thiagofc/enable-fake-tensor-EulerDiscreteScheduler branch from 2af35c9 to 56b29c9 on March 1, 2024.
thiagocrepaldi force-pushed the thiagofc/enable-fake-tensor-EulerDiscreteScheduler branch from 56b29c9 to 15c9796 on March 4, 2024.
thiagocrepaldi (Contributor, Author) commented:
Hi folks, given the discussion at #7151, do you think we can merge this one?

thiagocrepaldi force-pushed the thiagofc/enable-fake-tensor-EulerDiscreteScheduler branch from b58b3e9 to 15c9796 on March 4, 2024.
Thiago Crepaldi added 2 commits on March 4, 2024.
thiagocrepaldi force-pushed the thiagofc/enable-fake-tensor-EulerDiscreteScheduler branch from 15c9796 to c5d6b68 on March 4, 2024.
yiyixuxu merged commit ca6cdc7 into huggingface:main on Mar 4, 2024 (15 checks passed).
yiyixuxu (Collaborator) commented on Mar 4, 2024:

thanks! merged
