Inpainting model #17
Comments
So far as I know, inpainting is not a capability that is specific to any particular trained model (e.g. a set of network weights). Rather, at the heart of inpainting is a piece of code that "freezes" one part of the image as it is being generated. There is actually code to do inpainting in the "scripts" directory ("inpaint.py"). I looked it over briefly, and it looks like you just have to supply a mask, which is a PNG file. The puzzling thing about this script is that it takes very few parameters. Maybe they have just hard-wired in some reasonable defaults. My guess is that somebody could cobble together an inpainting example in a colab notebook without too much trouble. If somebody does this, please let the rest of us know!
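For anyone who wants to try it, here is a minimal sketch of how a mask PNG could be prepared with PIL. This is only an illustration: the polarity (whether white or black marks the region to regenerate) and the expected file naming should be checked against inpaint.py itself.

```python
# Hedged sketch: build a binary mask PNG for the inpainting script.
# Assumption: white (255) marks the region to regenerate and black (0) the
# region to keep -- verify this against what inpaint.py actually expects.
from PIL import Image, ImageDraw

src = Image.open("example.png").convert("RGB")
mask = Image.new("L", src.size, 0)               # start fully "keep"
draw = ImageDraw.Draw(mask)
draw.rectangle([100, 100, 300, 300], fill=255)   # region to inpaint
mask.save("example_mask.png")
```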
Ah, so instead of a separate model, it would just be the new SD model itself being used for the inpainting as well? I was under the impression it was a separate model.
No dude, it's already in there, you just need the weights, but there's no GUI for it.
Where can I find the weights?
When trying to run the inpainting script, I'm missing a file called last.ckpt.
I found the weights in a related repository: wget -O models/ldm/inpainting_big/last.ckpt https://heibox.uni-heidelberg.de/f/4d9ac7ea40c64582b7c9/?dl=1
Are there any specific requirements for the inpainting model's input? The result looked garbled for me, as if the dimensions were handled incorrectly.
You can try Lama Cleaner; it integrates multiple inpainting models, including LDM.
There is also a zip download: wget -O models/ldm/inpainting_big/model.zip https://ommer-lab.com/files/latent-diffusion/inpainting_big.zip but I didn't try it.
Does anybody have a version of this inpainting script that also takes a text prompt, so that only the masked parts of the image are pushed in the direction of the prompt while still taking the surroundings into account? Thank you.
So hold on, someone said we could use the SD model, but then that didn't work, so people started downloading the old LDM model for inpainting?
I found something that claims to be using Stable Diffusion. Here's the walkthrough video: And here's the colab notebook:
SD is based on LDM. I guess the inpainting is a legacy example from that project.
The link works, but the download seems to be the checkpoint itself, not an archive.
There are some problems with that notebook; it doesn't work out of the box, at least.
So to be clear, the inpainting we are all doing is the same, identical inpainting as with LDM months ago, before SD existed, right? The file from the link is the same inpainting checkpoint from months ago with good old LDM, I think. I have not checked the notebook yet, but that would be the first thing claiming to use SD, I think.
@benedlore I'm not completely sure, but I have the impression that the diffusers library (https://github.com/huggingface/diffusers) uses the main SD model for inpainting with its own engine. In this colab (https://colab.research.google.com/drive/1k9dnZDsVzKMk1-ZlBwZPUPVzDYZySmCQ) you can see it in use, and I don't see any model used for inpainting other than "CompVis/stable-diffusion-v1-4". Here is the source code from the diffusers library for reference (https://github.com/huggingface/diffusers/blob/c7a3b2ed31ce3c49c8f9b84569fa67129bd59fa2/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py). Therefore, it seems possible to use the v1.4 model for inpainting too, just not with the official SD repo.
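For reference, here is a rough sketch of how that diffusers inpainting pipeline is typically driven. Argument and class names have changed across diffusers releases, and newer versions expect a dedicated inpainting checkpoint rather than plain v1.4 (which may need the "legacy" inpaint pipeline), so treat this as an illustration rather than a drop-in snippet.

```python
# Sketch only: class/argument names differ between diffusers versions, and the
# plain v1.4 checkpoint may require the "legacy" inpaint pipeline in newer releases.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",      # model id mentioned in the comment above
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("input.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("L").resize((512, 512))   # white = inpaint

result = pipe(
    prompt="a red park bench",            # only the masked area is pushed toward this
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```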
@jtac, the author updated the notebook with bug fixes here: https://colab.research.google.com/drive/1R2HJvufacjy7GNrGCwgSE3LbQBk5qcS3?usp=sharing I tested it, and it works.
I found another implementation here: https://colab.research.google.com/drive/1cd35l21ewU0fwWEnPjY_th5YORmMfZCd#scrollTo=U6Vf4xi_Prtv It uses this UI, which has inpainting as part of it:
@Jellybit AFAIK the hlky fork doesn't have proper inpainting, just masking. It just performs diffusion as usual and then applies the mask, but this can create artifacts on the boundaries. Real inpainting should take into account the frozen pixels outside the mask to avoid seams/artifacts. This is what it says in the Crop/Mask help:
It would be great if they implemented real inpainting, because the hlky fork is one of the best currently available in everything else.
Actually, I just found this pull request where they do something in between masking and inpainting; it might be interesting to see how it compares to real inpainting.
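To illustrate the difference being described: "real" inpainting typically re-injects a noised copy of the original image at every denoising step, rather than compositing once at the end. Here is a rough sketch of that per-step blend in diffusers-style pseudocode; the names (scheduler, latents, noise_pred) are placeholders and not the hlky fork's actual API.

```python
# Sketch of per-step latent blending for "real" inpainting (RePaint/LDM-style).
# Assumption: a diffusers-style scheduler with .step() and .add_noise();
# the hlky fork's internals will look different.
import torch

def blended_denoise_step(latents, original_latents, mask, noise_pred, scheduler, t):
    # Usual denoising update on the current latents.
    latents = scheduler.step(noise_pred, t, latents).prev_sample
    # Noise the *original* latents down to the same timestep...
    noised_original = scheduler.add_noise(
        original_latents, torch.randn_like(original_latents), t
    )
    # ...and keep them wherever the mask says "frozen" (mask == 0), so the model
    # conditions on the surroundings instead of pasting over them at the end.
    return mask * latents + (1.0 - mask) * noised_original
```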
The same PR I made last time
Add Stable Diffusion 1.4 Inpainting in Lama Cleaner. It's based on the awesome diffusers library.
Here is a GUI for inpainting:
Is there a new inpainting model released for researchers, or is it still the original latent diffusion model that is currently the most recent release?