This repository has been archived by the owner on Jul 17, 2023. It is now read-only.
My GPU doesn't have enough VRAM. It would be nice to have a way to unload the tagger model from VRAM at will, in case I run into an "out of memory" error during inference.
There is a bug in Keras that makes it impossible to fully release a model from memory.
Downgrading to keras==2.1.6 may fix the issue, but it could also introduce compatibility problems with other packages.
Running inference in a separate process would also work, since VRAM is released when that process exits, but it adds complexity (see the sketch below).
For now, keeping the model in memory seems like the best option, since most users run the Waifu Diffusion tagger model via ONNX anyway.
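For reference, a minimal sketch of the subprocess workaround described above: the tagger runs in a short-lived child process, so the CUDA context and VRAM are freed when that process exits. The `tag_image()` wrapper, the `wd-tagger.h5` filename, the 448x448 input size, and the preprocessing are all placeholder assumptions, not this extension's actual API.

```python
# Sketch: isolate Keras inference in a child process so GPU memory is
# released when the process terminates, instead of trying to unload the
# model in-process (which the Keras bug prevents).
import multiprocessing as mp


def _tag_worker(image_path, model_path, queue):
    # Import the heavy libraries inside the worker so the parent process
    # never creates a CUDA context of its own.
    import numpy as np
    from PIL import Image
    from tensorflow import keras

    model = keras.models.load_model(model_path)
    # Placeholder preprocessing; real tagger models expect their own
    # input size and normalization.
    img = Image.open(image_path).convert("RGB").resize((448, 448))
    batch = np.asarray(img, dtype=np.float32)[None, ...]
    preds = model.predict(batch)
    queue.put(preds.tolist())
    # When this function returns and the child exits, its VRAM is freed.


def tag_image(image_path, model_path="wd-tagger.h5"):
    ctx = mp.get_context("spawn")  # spawn avoids inheriting GPU state
    queue = ctx.Queue()
    proc = ctx.Process(target=_tag_worker, args=(image_path, model_path, queue))
    proc.start()
    result = queue.get()
    proc.join()
    return result


if __name__ == "__main__":
    print(tag_image("sample.png"))
```

The trade-off is exactly the one noted above: every call pays the cost of spawning a process and reloading the model, so it only makes sense when VRAM pressure matters more than latency.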