
Inpainting model #17

Open
benedlore opened this issue Aug 15, 2022 · 23 comments

@benedlore

Is there a new inpainting model released for researchers, or is it still the originally released latent diffusion model?

@gregturk

So far as I know, inpainting is not a capability that is specific to any particular trained model (i.e. set of network weights). Rather, at the heart of inpainting is a piece of code that "freezes" one part of the image as it is being generated. There is actually code to do inpainting in the "scripts" directory ("inpaint.py"). I looked it over briefly, and it looks like you just have to supply a mask, which is a PNG file. The puzzling thing about this script is that it takes very few parameters. Maybe they have just hard-wired in some reasonable defaults.

My guess is that somebody could cobble together an inpainting example in a colab notebook without too much trouble. If somebody does this, please let the rest of us know!
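A minimal numpy sketch of the "freezing" idea described above (the function and variable names are illustrative, not taken from inpaint.py): at every denoising step, pixels outside the mask are reset to the original image (in a real sampler, to a re-noised copy of it at that timestep), so the model only ever generates content inside the mask.

```python
import numpy as np

def freeze_outside_mask(x_generated, x_original, mask):
    """Keep generated pixels only inside the mask; restore the original
    image everywhere else. mask is 1.0 where new content is generated,
    0.0 where the image is frozen."""
    return mask * x_generated + (1.0 - mask) * x_original

# Toy example on a 4x4 single-channel "image"
rng = np.random.default_rng(0)
original = np.ones((4, 4))            # the image region we want to preserve
generated = rng.normal(size=(4, 4))   # stand-in for a denoising step's output
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                  # only the center 2x2 region is inpainted

out = freeze_outside_mask(generated, original, mask)
# out[0, 0] is 1.0 (frozen); out[1, 1] equals generated[1, 1] (inpainted)
```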

@benedlore
Author

Ah, so instead of a separate model, it would just be the new SD model itself being used for the inpainting as well? I was under the impression it was a separate model.

@1blackbar

No dude, it's already in there; you just need the weights, but there's no GUI for it.

@karray

karray commented Aug 23, 2022

Where can I find the weights?

@scyheidekamp

scyheidekamp commented Aug 23, 2022

When trying to run the inpainting script, I'm missing a file called last.ckpt.
Is this already available somewhere? Placing the sd-v1-4.ckpt there doesn't seem to work.

@karray

karray commented Aug 23, 2022

I found the weights in a nearby repository:

wget -O models/ldm/inpainting_big/last.ckpt 'https://heibox.uni-heidelberg.de/f/4d9ac7ea40c64582b7c9/?dl=1'

@banteg

banteg commented Aug 24, 2022

Are there any specific requirements for the inpainting model's input? The result looked garbled for me, as if the dimensions were improperly translated.

@Sanster

Sanster commented Aug 26, 2022

> No dude, it's already in there; you just need the weights, but there's no GUI for it.

You can try Lama Cleaner, it integrates multiple inpainting models, including LDM.


enzymezoo-code pushed a commit to enzymezoo-code/stable-diffusion that referenced this issue Aug 27, 2022
@karray

karray commented Aug 28, 2022

This link doesn't seem to be working anymore. Anybody got an updated link?

There is also a download script:

wget -O models/ldm/inpainting_big/model.zip https://ommer-lab.com/files/latent-diffusion/inpainting_big.zip

but I didn't try it.

@javismiles

Does anybody have a version of this inpainting script that also takes a text prompt, so that the masked parts of the image (and only those areas) are pushed in the direction of the prompt while still taking the surroundings into account? Thank you.

@benedlore
Author

So hold on, someone said we could use the SD model, but then that didn't work, so people started downloading the old LDM model for inpainting?

@Jellybit

> So hold on, someone said we could use the SD model, but then that didn't work, so people started downloading the old LDM model for inpainting?

I found something that claims to be using Stable Diffusion. Here's the walkthrough video:
https://www.youtube.com/watch?v=N913hReVxMM

And here's the colab notebook:
https://colab.research.google.com/drive/1R2HJvufacjy7GNrGCwgSE3LbQBk5qcS3?usp=sharing

@karray

karray commented Aug 31, 2022

> So hold on, someone said we could use the SD model, but then that didn't work, so people started downloading the old LDM model for inpainting?

SD is based on LDM. I guess the inpainting script is a legacy example from that project.

@krummrey

> This link doesn't seem to be working anymore. Anybody got an updated link?
>
> There is also a download script:
>
> wget -O models/ldm/inpainting_big/model.zip https://ommer-lab.com/files/latent-diffusion/inpainting_big.zip
>
> but I didn't try it.

The link works, but the downloaded file seems to be the checkpoint itself rather than a zip archive.
I renamed it to last.ckpt.
The script runs with no errors, but I get a garbled result. That might be an Apple Silicon problem, though...

@jtac

jtac commented Aug 31, 2022

> So hold on, someone said we could use the SD model, but then that didn't work, so people started downloading the old LDM model for inpainting?
>
> I found something that claims to be using Stable Diffusion. Here's the walkthrough video: https://www.youtube.com/watch?v=N913hReVxMM
>
> And here's the colab notebook: https://colab.research.google.com/drive/1R2HJvufacjy7GNrGCwgSE3LbQBk5qcS3?usp=sharing

There are some problems with that notebook; it doesn't work out of the box, at least.

@benedlore
Author

So to be clear, the inpainting we are all doing is the same, identical inpainting that LDM had months ago, before SD existed, right? The file from the link is, I think, the same inpainting checkpoint from months ago with good old LDM. I have not checked the notebook yet, but that would be the first thing claiming to use SD, I think.

@siriux

siriux commented Aug 31, 2022

@benedlore I'm not completely sure, but I have the impression that the diffusers library (https://github.com/huggingface/diffusers) uses the main SD model for inpainting with its own engine.

In this colab (https://colab.research.google.com/drive/1k9dnZDsVzKMk1-ZlBwZPUPVzDYZySmCQ) you can see it in use, and I don't see any model other than "CompVis/stable-diffusion-v1-4" being used for inpainting.

Here is the source code from the diffusers library for reference (https://github.com/huggingface/diffusers/blob/c7a3b2ed31ce3c49c8f9b84569fa67129bd59fa2/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py).

Therefore, it seems possible to use the v1.4 model for inpainting too, just not with the official SD repo.
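For reference, a usage sketch of that diffusers pipeline. This is an untested illustration, not an official recipe: the parameter names match the diffusers version of that era (init_image, mask_image, strength) and may have changed since, and the file paths are placeholders.

```python
def sd_inpaint(prompt, image_path, mask_path, out_path="inpainted.png"):
    """Inpaint with Stable Diffusion v1.4 via the diffusers library.

    The mask is an image in which white pixels are regenerated from the
    prompt and black pixels are kept from the input image.
    """
    # Imports live inside the function so the sketch can be read (and the
    # file imported) without torch/diffusers installed.
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        use_auth_token=True,  # the v1.4 weights are gated on the Hugging Face Hub
    ).to("cuda")

    init_image = Image.open(image_path).convert("RGB").resize((512, 512))
    mask_image = Image.open(mask_path).convert("RGB").resize((512, 512))

    result = pipe(prompt=prompt, init_image=init_image,
                  mask_image=mask_image, strength=0.75).images[0]
    result.save(out_path)
    return out_path
```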

@Jellybit

Jellybit commented Sep 1, 2022

> There are some problems with that notebook; it doesn't work out of the box, at least.

@jtac, the author updated the notebook with bug fixes here:

https://colab.research.google.com/drive/1R2HJvufacjy7GNrGCwgSE3LbQBk5qcS3?usp=sharing

I tested it, and it works.

@Jellybit

Jellybit commented Sep 2, 2022

I found another implementation here:

https://colab.research.google.com/drive/1cd35l21ewU0fwWEnPjY_th5YORmMfZCd#scrollTo=U6Vf4xi_Prtv

It uses this UI which has inpainting as part of it:

https://github.com/hlky/stable-diffusion

@siriux

siriux commented Sep 2, 2022

@Jellybit AFAIK the hlky fork doesn't have proper inpainting, just masking. It just performs diffusion as usual, and then applies the mask, but this can create artifacts on the boundaries. Real inpainting should take into account the frozen pixels outside of the mask to avoid seams/artifacts.

This is what it says in the Crop/Mask help:

> Masking is not inpainting. You will probably get better results manually masking your images in photoshop instead.

It would be great if they implemented real inpainting, because the hlky fork is currently one of the best available at everything else.
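The distinction between masking and real inpainting can be sketched in a few lines of numpy (all names are hypothetical; denoise_step and renoise stand in for the real sampler internals):

```python
import numpy as np

def masked_composite(generated, original, mask):
    # "Masking": run the whole diffusion as usual, then paste the original
    # pixels back in a single step at the end. The model never saw the
    # frozen pixels during sampling, so seams can appear at the boundary.
    return mask * generated + (1 - mask) * original

def per_step_inpainting(denoise_step, renoise, original, mask, x, timesteps):
    # "Real" inpainting: re-impose a (re-noised) copy of the original outside
    # the mask at every step, so each denoising update conditions on the
    # frozen context and can blend content across the boundary.
    for t in timesteps:
        x = denoise_step(x, t)
        x = mask * x + (1 - mask) * renoise(original, t)
    return x
```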

@siriux

siriux commented Sep 2, 2022

Actually, I just found this pull request where they do something in between masking and inpainting, it might be interesting to see how it compares to real inpainting.
Sygil-Dev/sygil-webui#308

colemickens pushed a commit to colemickens/stable-diffusion that referenced this issue Sep 15, 2022
The same PR I made last time
@Sanster

Sanster commented Sep 23, 2022

Added Stable Diffusion 1.4 inpainting to Lama Cleaner. It's based on the awesome diffusers library.

(screenshots: original image and inpainted result)

@CreamyLong

> No dude, it's already in there; you just need the weights, but there's no GUI for it.

Here is a GUI for inpainting:
https://github.com/CreamyLong/stable-diffusion/blob/master/scripts/inpaint_gradio.py
