This repository has been archived by the owner on Jul 17, 2023. It is now read-only.

Allow model unloading from VRAM #31

Closed
vibrantrida opened this issue Dec 24, 2022 · 1 comment
Labels: enhancement (New feature or request)

Comments

@vibrantrida

My GPU doesn't have enough VRAM. It would be nice to have a way to unload the tagger model from VRAM at will, in case I run into an "out of memory" error during inference.

@toriato toriato added the enhancement New feature or request label Dec 25, 2022
toriato added a commit that referenced this issue Dec 26, 2022
@toriato toriato self-assigned this Dec 26, 2022
@toriato
Owner

toriato commented Dec 26, 2022

There is a bug in Keras that makes it impossible to release a model from memory.
Downgrading to keras==2.1.6 may fix the issue, but it could introduce compatibility problems with other packages.
Running inference in a separate subprocess would also work around the issue, but it adds considerable complexity.
For now, it seems the best option is to keep the model in memory, since most users use the Waifu Diffusion model with onnx.
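For context, the usual attempt at releasing a Keras model looks roughly like the sketch below (the model path is only illustrative, not this extension's actual loading code), and it is exactly this path that the bug breaks:

```python
import gc

import tensorflow as tf

# Illustrative model path; the extension's real loading code differs.
model = tf.keras.models.load_model("wd-v1-4-vit-tagger")

# ... run inference ...

# The usual attempt at unloading:
del model                          # drop the Python reference
gc.collect()                       # collect any lingering references
tf.keras.backend.clear_session()   # reset Keras' global graph/session state

# Even after this, TensorFlow's GPU allocator typically keeps the reserved
# VRAM for the lifetime of the process, which is the bug described above.
```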

Related issue: keras-team/keras#2102

I will open a new issue and leave it open until I find a way to unload the Keras model.
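For reference, a rough sketch of the subprocess idea mentioned above; the worker/queue structure and names here are hypothetical, not the extension's API. Because the child process exits after each call, the driver reclaims all of its VRAM, at the cost of reloading the model every time:

```python
import multiprocessing as mp

def _worker(image_path, queue):
    # Import TensorFlow inside the child so all GPU state lives (and dies) here.
    import tensorflow as tf
    model = tf.keras.models.load_model("wd-v1-4-vit-tagger")  # illustrative path
    # ... preprocess image_path and call model.predict(...) here ...
    queue.put({"tags": {}})  # placeholder result

def tag_image(image_path):
    ctx = mp.get_context("spawn")  # "spawn" avoids inheriting CUDA/TF state
    queue = ctx.Queue()
    proc = ctx.Process(target=_worker, args=(image_path, queue))
    proc.start()
    result = queue.get()
    proc.join()  # when the child exits, its VRAM is returned to the system
    return result
```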
