Commit 1ca19e1

Remove legacy examples from inference doc, fix links in main readme
Signed-off-by: Daniel Socek <daniel.socek@intel.com>
Parent: 01ee47b

4 files changed: +50 −366 lines

README.md

+4 −4
@@ -289,11 +289,11 @@ The following model architectures, tasks and device distributions have been validated
 
 | Architecture | Training | Inference | Tasks |
 |:--------------------|:--------:|:---------:|:------|
-| Stable Diffusion | :heavy_check_mark: | :heavy_check_mark: | <li>[text-to-image generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion#stable-diffusion)</li><li>[image-to-image generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion#stable-diffusion-based-image-to-image)</li> |
+| Stable Diffusion | :heavy_check_mark: | :heavy_check_mark: | <li>[text-to-image generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion)</li><li>[image-to-image generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion)</li> |
 | Stable Diffusion XL | :heavy_check_mark: | :heavy_check_mark: | <li>[text-to-image generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion#stable-diffusion-xl-sdxl)</li><li>[image-to-image generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion#stable-diffusion-xl-refiner)</li> |
-| Stable Diffusion Depth2img | | <li>Single card</li> | <li>[depth-to-image generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion#depth-to-image-generation)</li> |
-| Stable Diffusion 3 | | <li>Single card</li> | <li>[text-to-image generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion#stable-diffusion-3-sd3)</li> |
-| LDM3D | | <li>Single card</li> | <li>[text-to-image generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion#latent-diffusion-model-for-3d-ldm3d)</li> |
+| Stable Diffusion Depth2img | | <li>Single card</li> | <li>[depth-to-image generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion)</li> |
+| Stable Diffusion 3 | | <li>Single card</li> | <li>[text-to-image generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion#stable-diffusion-3-and-35-sd3)</li> |
+| LDM3D | | <li>Single card</li> | <li>[text-to-image generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion)</li> |
 | FLUX.1 | <li>LoRA</li> | <li>Single card</li> | <li>[text-to-image generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion#flux1)</li><li>[image-to-image generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion#flux1-image-to-image)</li> |
 | Text to Video | | <li>Single card</li> | <li>[text-to-video generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion#text-to-video-generation)</li> |
 | Image to Video | | <li>Single card</li> | <li>[image-to-video generation](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion#image-to-video-generation)</li> |

examples/stable-diffusion/README.md

+31 −0
@@ -141,6 +141,37 @@ FLUX in quantization mode by setting runtime variable `QUANT_CONFIG=quantization
 
 To run with FLUX.1-schnell model, a distilled version of FLUX.1 (which is not gated), use `--model_name_or_path black-forest-labs/FLUX.1-schnell`.
 
+## ControlNet
+
+ControlNet, introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543)
+by Lvmin Zhang and Maneesh Agrawala, enables conditioning the Stable Diffusion model with an additional input image.
+This allows for precise control over the composition of generated images using various features such as edges,
+pose, depth, and more.
+
+Here is how to generate images conditioned by the Canny edge model:
+
+```bash
+python text_to_image_generation.py \
+    --model_name_or_path stable-diffusion-v1-5/stable-diffusion-v1-5 \
+    --controlnet_model_name_or_path lllyasviel/sd-controlnet-canny \
+    --prompts "futuristic-looking woman" \
+    --control_image https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png \
+    --num_images_per_prompt 28 \
+    --batch_size 7 \
+    --image_save_dir /tmp/controlnet_images \
+    --use_habana \
+    --use_hpu_graphs \
+    --gaudi_config Habana/stable-diffusion \
+    --sdp_on_bf16 \
+    --bf16
+```
+
+You can run inference on multiple HPUs by replacing `python text_to_image_generation.py` with
+`python ../gaudi_spawn.py --world_size <number-of-HPUs> text_to_image_generation.py` and adding option `--distributed`.
+
+This ControlNet example will preprocess the input image to derive Canny edges. Alternatively, you can use `--control_preprocessing_type none`
+to supply a preprocessed control image directly, enabling many additional use cases.
+
 ## Inpainting
 
 Inpainting replaces or edits specific areas of an image. For more details,
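
To make the multi-HPU note in the added section concrete, the ControlNet command from the diff would be spawned roughly as follows. This is a sketch: the world size of 8 is an assumption, so substitute the number of HPUs you actually have.

```bash
# Multi-HPU variant of the ControlNet example above (sketch; assumes 8 HPUs).
# Per the instructions in the diff: prepend gaudi_spawn.py and add --distributed.
python ../gaudi_spawn.py --world_size 8 text_to_image_generation.py \
    --model_name_or_path stable-diffusion-v1-5/stable-diffusion-v1-5 \
    --controlnet_model_name_or_path lllyasviel/sd-controlnet-canny \
    --prompts "futuristic-looking woman" \
    --control_image https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png \
    --num_images_per_prompt 28 \
    --batch_size 7 \
    --image_save_dir /tmp/controlnet_images \
    --use_habana \
    --use_hpu_graphs \
    --gaudi_config Habana/stable-diffusion \
    --sdp_on_bf16 \
    --bf16 \
    --distributed
```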

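Likewise, here is a minimal sketch of the `--control_preprocessing_type none` path mentioned in the last added paragraph. The control-image path is a placeholder for an image you have already converted to a Canny edge map yourself:

```bash
# Sketch: skip the built-in Canny preprocessing and pass an already-preprocessed
# control image directly; the image path below is a placeholder.
python text_to_image_generation.py \
    --model_name_or_path stable-diffusion-v1-5/stable-diffusion-v1-5 \
    --controlnet_model_name_or_path lllyasviel/sd-controlnet-canny \
    --prompts "futuristic-looking woman" \
    --control_image /path/to/canny_edge_map.png \
    --control_preprocessing_type none \
    --num_images_per_prompt 28 \
    --batch_size 7 \
    --image_save_dir /tmp/controlnet_images \
    --use_habana \
    --use_hpu_graphs \
    --gaudi_config Habana/stable-diffusion \
    --sdp_on_bf16 \
    --bf16
```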