Update some typos
hinablue committed Feb 15, 2024
1 parent aa51809 commit 40d7b60
Showing 8 changed files with 23 additions and 23 deletions.
22 changes: 11 additions & 11 deletions README.md
@@ -509,7 +509,7 @@ masterpiece, best quality, 1boy, in business suit, standing at street, looking back
- `safetensors` is updated. Please see [Upgrade](#upgrade) and update the library.
- Fixed a bug that the training crashes when `network_multiplier` is specified with multi-GPU training. PR [#1084](https://github.com/kohya-ss/sd-scripts/pull/1084) Thanks to fireicewolf!
- Fixed a bug that the training crashes when training ControlNet-LLLite.

- Merge sd-scripts v0.8.2 code update
- [Experimental] The `--fp8_base` option is added to the training scripts for LoRA etc. The base model (U-Net, and Text Encoder when training modules for Text Encoder) can be trained with fp8. PR [#1057](https://github.com/kohya-ss/sd-scripts/pull/1057) Thanks to KohakuBlueleaf!
- Please specify `--fp8_base` in `train_network.py` or `sdxl_train_network.py`.
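
  A hypothetical invocation sketch: `--fp8_base` is the new option from PR #1057; every other value below is an illustrative placeholder for your own setup, not a recommended configuration.

  ```shell
  # Sketch only -- paths, precision, and network module are placeholders.
  accelerate launch sdxl_train_network.py \
    --pretrained_model_name_or_path "path/to/sdxl_base.safetensors" \
    --network_module networks.lora \
    --mixed_precision bf16 \
    --fp8_base \
    --output_dir "path/to/output"
  ```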
@@ -522,15 +522,15 @@ masterpiece, best quality, 1boy, in business suit, standing at street, looking back
- For example, if you train with state A as `1.0` and state B as `-1.0`, you may be able to generate by switching between state A and B depending on the LoRA application rate.
- Also, if you prepare five states and train them as `0.2`, `0.4`, `0.6`, `0.8`, and `1.0`, you may be able to generate by switching the states smoothly depending on the application rate.
- Please specify `network_multiplier` in `[[datasets]]` in `.toml` file.
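
  A hypothetical `.toml` sketch of the two-state setup described above (not the repository's official example; the key layout follows the sd-scripts dataset config format, and all paths are placeholders):

  ```toml
  # Sketch: state A trained at multiplier 1.0, state B at -1.0.
  [general]
  resolution = 512

  [[datasets]]
  network_multiplier = 1.0

    [[datasets.subsets]]
    image_dir = "path/to/state_A"

  [[datasets]]
  network_multiplier = -1.0

    [[datasets.subsets]]
    image_dir = "path/to/state_B"
  ```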

- Some options are added to `networks/extract_lora_from_models.py` to reduce the memory usage.
- `--load_precision` option can be used to specify the precision when loading the model. If the model is saved in fp16, you can reduce the memory usage by specifying `--load_precision fp16` without losing precision.
- `--load_original_model_to` option can be used to specify the device to load the original model. `--load_tuned_model_to` option can be used to specify the device to load the derived model. The default is `cpu` for both options, but you can specify `cuda` etc. You can reduce the memory usage by loading one of them to GPU. This option is available only for SDXL.
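
  A hypothetical command line combining these memory-saving options for SDXL (model paths and the LoRA rank are placeholders):

  ```shell
  # Sketch only: load fp16-saved models, keep the original on CPU and the
  # tuned model on GPU to reduce peak memory during extraction.
  python networks/extract_lora_from_models.py \
    --sdxl \
    --model_org "path/to/sdxl_base.safetensors" \
    --model_tuned "path/to/sdxl_tuned.safetensors" \
    --load_precision fp16 \
    --load_original_model_to cpu \
    --load_tuned_model_to cuda \
    --save_to "path/to/extracted_lora.safetensors" \
    --dim 32
  ```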

- The gradient synchronization in LoRA training with multi-GPU is improved. PR [#1064](https://github.com/kohya-ss/sd-scripts/pull/1064) Thanks to KohakuBlueleaf!

- The code for Intel IPEX support is improved. PR [#1060](https://github.com/kohya-ss/sd-scripts/pull/1060) Thanks to akx!

- Fixed a bug in multi-GPU Textual Inversion training.

- `.toml` example for network multiplier
@@ -556,7 +556,7 @@ masterpiece, best quality, 1boy, in business suit, standing at street, looking back

- Fixed a bug that the VRAM usage without Text Encoder training is larger than before in training scripts for LoRA etc (`train_network.py`, `sdxl_train_network.py`).
- Text Encoders were not moved to CPU.

- Fixed typos. Thanks to akx! [PR #1053](https://github.com/kohya-ss/sd-scripts/pull/1053)

* 2024/01/15 (v22.5.0)
@@ -574,10 +574,10 @@ masterpiece, best quality, 1boy, in business suit, standing at street, looking back
- IPEX library is updated. PR [#1030](https://github.com/kohya-ss/sd-scripts/pull/1030) Thanks to Disty0!
- Fixed a bug that Diffusers format model cannot be saved.
- Fix LoRA config display after load that would sometimes hide some of the fields

* 2024/01/02 (v22.4.1)
- Minor bug fixed and enhancements.

* 2023/12/28 (v22.4.0)
- Fixed to work `tools/convert_diffusers20_original_sd.py`. Thanks to Disty0! PR [#1016](https://github.com/kohya-ss/sd-scripts/pull/1016)
- The issues in multi-GPU training are fixed. Thanks to Isotr0py! PR [#989](https://github.com/kohya-ss/sd-scripts/pull/989) and [#1000](https://github.com/kohya-ss/sd-scripts/pull/1000)
@@ -592,13 +592,13 @@ masterpiece, best quality, 1boy, in business suit, standing at street, looking back
- The optimizer `PagedAdamW` is added. Thanks to xzuyn! PR [#955](https://github.com/kohya-ss/sd-scripts/pull/955)
- NaN replacement in SDXL VAE is sped up. Thanks to liubo0902! PR [#1009](https://github.com/kohya-ss/sd-scripts/pull/1009)
- Fixed the path error in `finetune/make_captions.py`. Thanks to CjangCjengh! PR [#986](https://github.com/kohya-ss/sd-scripts/pull/986)

* 2023/12/20 (v22.3.1)
- Add goto button to manual caption utility
- Add missing options for various LyCORIS training algorythms
- Add missing options for various LyCORIS training algorithms
- Refactor how fields are shown or hidden
- Made max value for network and convolution rank 512 except for LyCORIS/LoKr.

* 2023/12/06 (v22.3.0)
- Merge sd-scripts updates:
- `finetune\tag_images_by_wd14_tagger.py` now supports the separator other than `,` with `--caption_separator` option. Thanks to KohakuBlueleaf! PR [#913](https://github.com/kohya-ss/sd-scripts/pull/913)
@@ -612,4 +612,4 @@ masterpiece, best quality, 1boy, in business suit, standing at street, looking back
- `--ds_ratio` option denotes the ratio of the Deep Shrink. `0.5` means the half of the original latent size for the Deep Shrink.
- `--dst1`, `--dst2`, `--dsd1`, `--dsd2` and `--dsr` prompt options are also available.
- Add GLoRA support
-
-
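
As a small illustration of the `--ds_ratio` arithmetic above (an assumption about the scaling math only, not the actual sd-scripts implementation):

```python
# Illustrative sketch: a Deep Shrink ratio of 0.5 halves each spatial
# dimension of the latent during the shrunk phase of generation.
def shrunk_latent_size(height: int, width: int, ds_ratio: float) -> tuple:
    """Return the latent size after applying the Deep Shrink ratio."""
    return int(height * ds_ratio), int(width * ds_ratio)

# e.g. a 1024x1024 SDXL image has a 128x128 latent (1/8 scale);
# --ds_ratio 0.5 would shrink it to 64x64
print(shrunk_latent_size(128, 128, 0.5))  # (64, 64)
```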
2 changes: 1 addition & 1 deletion examples/caption.ps1
@@ -1,6 +1,6 @@
# This powershell script will create a text file for each file in the folder
#
# Usefull to create base caption that will be augmented on a per image basis
# Useful to create base caption that will be augmented on a per image basis

$folder = "D:\some\folder\location\"
$file_pattern="*.*"
6 changes: 3 additions & 3 deletions examples/caption_subfolders.ps1
@@ -1,19 +1,19 @@
# This powershell script will create a text file for each file in the folder
#
# Usefull to create base caption that will be augmented on a per image basis
# Useful to create base caption that will be augmented on a per image basis

$folder = "D:\test\t2\"
$file_pattern="*.*"
$text_fir_file="bigeyes style"

foreach ($file in Get-ChildItem $folder\$file_pattern -File)
foreach ($file in Get-ChildItem $folder\$file_pattern -File)
{
New-Item -ItemType file -Path $folder -Name "$($file.BaseName).txt" -Value $text_fir_file
}

foreach($directory in Get-ChildItem -path $folder -Directory)
{
foreach ($file in Get-ChildItem $folder\$directory\$file_pattern)
foreach ($file in Get-ChildItem $folder\$directory\$file_pattern)
{
New-Item -ItemType file -Path $folder\$directory -Name "$($file.BaseName).txt" -Value $text_fir_file
}
2 changes: 1 addition & 1 deletion examples/kohya_train_db_fixed_with-reg_SDv2 512 base.ps1
@@ -61,4 +61,4 @@ accelerate launch --num_cpu_threads_per_process $num_cpu_threads_per_process tra
--seed=494481440 `
--lr_scheduler=$lr_scheduler

# Add the inference yaml file along with the model for proper loading. Need to have the same name as model... Most likelly "last.yaml" in our case.
# Add the inference yaml file along with the model for proper loading. Need to have the same name as model... Most likely "last.yaml" in our case.
6 changes: 3 additions & 3 deletions library/wd14_caption_gui.py
@@ -123,7 +123,7 @@ def gradio_wd14_caption_gui_tab(headless=False):
value='.txt',
interactive=True,
)

caption_separator = gr.Textbox(
label='Caption Separator',
value=',',
@@ -199,11 +199,11 @@ def gradio_wd14_caption_gui_tab(headless=False):
],
value='SmilingWolf/wd-v1-4-convnextv2-tagger-v2',
)

force_download = gr.Checkbox(
label='Force model re-download',
value=False,
info='Usefull to force model re download when switching to onnx',
info='Useful to force model re download when switching to onnx',
)

general_threshold = gr.Slider(
4 changes: 2 additions & 2 deletions localizations/zh-TW.json
@@ -51,7 +51,7 @@
"Show frequency of tags for images.": "顯示圖片的標籤頻率。",
"Show tags frequency": "顯示標籤頻率",
"Model": "模型",
"Usefull to force model re download when switching to onnx": "切換到 onnx 時,強制重新下載模型",
"Useful to force model re download when switching to onnx": "切換到 onnx 時,強制重新下載模型",
"Force model re-download": "強制重新下載模型",
"General threshold": "一般閾值",
"Adjust `general_threshold` for pruning tags (less tags, less flexible)": "調整 `general_threshold` 以修剪標籤 (標籤越少,彈性越小)",
@@ -101,7 +101,7 @@
"folder where the model will be saved": "模型將會被儲存的資料夾路徑",
"Model type": "模型類型",
"Extract LCM": "提取 LCM",
"Verfiy LoRA": "驗證 LoRA",
"Verify LoRA": "驗證 LoRA",
"Path to an existing LoRA network weights to resume training from": "要從中繼續訓練的現有 LoRA 網路權重的路徑",
"Seed": "種子",
"(Optional) eg:1234": " (選填) 例如:1234",
2 changes: 1 addition & 1 deletion setup/debug_info.py
@@ -51,6 +51,6 @@

# Print VRAM warning if necessary
if gpu_vram_warning:
print('\033[33mWarning: GPU VRAM is less than 8GB and will likelly result in proper operations.\033[0m')
print('\033[33mWarning: GPU VRAM is less than 8GB and will likely result in improper operations.\033[0m')

print(' ')
2 changes: 1 addition & 1 deletion textual_inversion_gui.py
@@ -747,7 +747,7 @@ def ti_tab(
with gr.Row():
weights = gr.Textbox(
label='Resume TI training',
placeholder='(Optional) Path to existing TI embeding file to keep training',
placeholder='(Optional) Path to existing TI embedding file to keep training',
)
weights_file_input = gr.Button(
'📂',
