Unloading ML-Danbooru, is it possible without webui reload? #17
Comments
OK, will check. I never touch that button. Thanks for reporting.
Removing the line will remove the attribute error, and I'll push that as a quick fix, but then the model won't unload. This exact line was one of the open
Thanks for your review and quick fix. So with quick fix 5cbf4f5, the button works for unloading all loaded models except the ML-Danbooru models? Is my understanding correct?
Edit: actually it seems related to TensorFlow only, so it's the DeepDanbooruInterrogator models and the experimental large_batch_interrogate. Any of the others should unload properly. Note that with my implementation you re-read from db.json, even after a shutdown and reload of the stable-diffusion webui, including the UI, and the query will then read from the database. Or is that not enough on Windows? It allows you to retrieve/combine former interrogation output without loading the model. Let's add a notice on the "unload all" button, or on unloading a Danbooru model, exactly for this purpose.
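As a minimal sketch of the caching idea described above (the function name and the JSON schema here are hypothetical; the actual extension stores its results in db.json with its own layout), reading previously stored interrogation results back from a JSON database could look like this:

```python
import json
from pathlib import Path


def load_cached_tags(db_path, image_name):
    """Return the cached tag -> confidence dict for an image, or None on a miss.

    Assumed (hypothetical) schema: {"query": {"<image>": {"tag": 0.9, ...}}}.
    """
    path = Path(db_path)
    if not path.is_file():
        return None
    with path.open(encoding="utf-8") as fh:
        db = json.load(fh)
    return db.get("query", {}).get(image_name)
```

Because a cache hit never touches the model, queries like this work even after the model (or the whole webui) has been unloaded.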
On SO there is a solution mentioned:

```python
from numba import cuda
cuda.select_device(0)
cuda.close()
```

But then the numba documentation is unclear on how to reopen the same device again. And reading this SO thread, someone mentions a .close() is unrecoverable and suggests a .reset(). I was thinking about:

```python
if use_cpu:
    import gc
else:
    from numba import cuda

...

class DeepDanbooruInterrogator(Interrogator):
    ...
    def unload(self) -> bool:
        unloaded = super().unload()
        if unloaded:
            if use_cpu:
                import tensorflow as tf
                tf.keras.backend.clear_session()
                gc.collect()
            else:
                device = cuda.get_current_device()
                device.reset()
        return unloaded
```

But I run on CPU.
It is a little weird. I ran a quick test with the latest version, but VRAM usage does not seem to change; only a chunk of main memory gets released after unloading. I am using an Nvidia card.
Hi, you mentioned this: toriato#33. In my opinion, it's not able to release GPU memory by I don't think If you really want to release
Thanks, this is why I placed it behind an experimental option in the settings. The Nvidia dependency for numba did occur to me, but at least for Nvidia, numba could be an option? AMD (ROCm) and CPU are the others, or do even drivers like nouveau or nv function? I'm also not exactly sure which configurations do not release properly, and whether it is VRAM or RAM; I'm trying to get an impression. Thanks for the links, I'll do some more digging.
One thing to note is that some users do not install the Nvidia CUDA toolkit but use Torch's bundled CUDA, while the Nvidia CUDA toolkit is required by numba. That's why I think numba is not a good idea. In fact, any model related to TensorFlow will encounter the problem of not being able to release GPU memory. The reason other models, such as WD14, can be released normally is that they are ONNX models (check this), and onnxruntime can release them properly.
toriato seems to have tried to release TensorFlow memory as well, and I have tried it too, but this seems to be unsolvable. My solution:
I usually prefer the second option, because an ONNX model is faster to start and run. The downside is that you can't use the latest models, unless you convert new models promptly after the model author releases a new version.
Ah, thanks again. I found tf2onnx, which seems to do exactly this.
You are welcome.
One option I am considering is implementing a checkbox under Settings -> Tagger: convert TensorFlow models to ONNX. Checks might be necessary for updates; maybe keep the TensorFlow model next to the ONNX model, and rerun the conversion if the TensorFlow model's mtime or sha256 changes.
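The staleness check proposed above could be sketched like this (file names and the recorded-hash parameter are hypothetical; the real extension would decide where converted models live and where to store the recorded hash):

```python
import hashlib
import os


def sha256_of(path):
    """Compute the sha256 digest of a file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def needs_reconversion(tf_path, onnx_path, recorded_sha=None):
    """True if the ONNX copy is missing, older than the TensorFlow model,
    or the TensorFlow model's sha256 no longer matches the recorded one."""
    if not os.path.exists(onnx_path):
        return True
    if os.path.getmtime(tf_path) > os.path.getmtime(onnx_path):
        return True
    if recorded_sha is not None and sha256_of(tf_path) != recorded_sha:
        return True
    return False
```

The mtime comparison is cheap and catches in-place updates; the sha256 comparison is slower but survives clock skew and file copies that preserve timestamps.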
When clicking the "Unload all interrogate models" button, the following errors occurred:
After a quick check, this error was caused by a code change (line 136 of tagger\interrogator.py) within a recent pull-request merge: "Manually merged: Support ML-Danbooru https://github.com/picobyte/stable-diffusion-webui-wd14-tagger/pull/6, changes amended from CCRcmcpe's".
Please review the relevant code and fix it; currently tagger cannot clean up its loaded models. Thanks.
BTW, my webui env versions:
version: v1.4.1 • python: 3.10.8 • torch: 2.0.0+cu118 • xformers: 0.0.20 • gradio: 3.32.0