How to install a model manually #26
When I rolled back to commit 66b7724 and used wd14-swinv2-v2, it successfully loaded the model.
Do you expect access to huggingface? (Edit: I mean to ask: is it possible on your device?) Does the rollback mean you were able to load an existing model from a past configuration? Part of the ML change #6 was a change of names, which may have this as a side effect, not sure.
For some reason, Chinese users need to use a proxy to access huggingface, which means that most of the time hf_hub_download() cannot work properly.
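For readers hitting the same wall, here is a minimal sketch of the usual workarounds, assuming stock huggingface_hub behaviour; the proxy address and mirror host are placeholders, not something this extension configures for you:

```python
import os

# Placeholder proxy address; huggingface_hub downloads go through `requests`,
# which honours the standard proxy environment variables.
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:7890"

# Alternatively, point the hub client at a mirror endpoint (placeholder host):
# os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from huggingface_hub import hf_hub_download

# The same kind of call the extension makes; with a reachable proxy or mirror it succeeds.
path = hf_hub_download("SmilingWolf/wd-v1-4-swinv2-tagger-v2", "model.onnx")
print(path)
```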
OK, I see. I guess I know why. Do not specify the `local_dir` here:

```python
hf_hub_download(
    self.repo_id,
    self.model_path,
    local_dir=mdir
)
```
Set it through environment variables. Set `local_dir` to `mdir`, as suggested for #26.
Ok, did that. This originates from f47d5da, please let me know if this fixed it for you.
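For context, a sketch of what "set it through environment variables" can mean here, assuming the stock huggingface_hub cache behaviour rather than anything specific to this extension; the paths are placeholders:

```python
import os

# Without local_dir, hf_hub_download stores files in the hub cache; its location
# can be steered with environment variables (set before huggingface_hub is used).
os.environ["HF_HOME"] = "/data/huggingface"              # placeholder path
# or only the hub cache itself:
# os.environ["HUGGINGFACE_HUB_CACHE"] = "/data/huggingface/hub"

from huggingface_hub import hf_hub_download

# No local_dir: the returned path points into the cache layout, e.g.
#   .../models--SmilingWolf--wd-v1-4-swinv2-tagger-v2/snapshots/<rev>/model.onnx
model_path = hf_hub_download("SmilingWolf/wd-v1-4-swinv2-tagger-v2", "model.onnx")
print(model_path)
```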
@picobyte |
I understand what you mean: the download missed the top-level directory and the files are overwriting each other.
It seems there used to be a 'models/' + repo_id directory, with the slashes replaced by double dashes.
Edit: it is from the huggingface cache.
This is the file we need to use; you can get it by setting …
We can place the 'models/' + repo_id (with slashes replaced by double dashes) directory in there manually, if it's not already there.
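To make that naming concrete, here is a small sketch of how the hub cache path is composed; the cache root and repo are examples, and the snapshot revision is whatever you downloaded from the web page:

```python
from pathlib import Path

repo_id = "SmilingWolf/wd-v1-4-swinv2-tagger-v2"  # example repo

# The hub cache folder is "models/" + repo_id with slashes turned into "--":
cache_root = Path.home() / ".cache" / "huggingface" / "hub"   # default cache location
repo_dir = cache_root / ("models/" + repo_id).replace("/", "--")

# Files then live under snapshots/<revision>/, e.g.
#   models--SmilingWolf--wd-v1-4-swinv2-tagger-v2/snapshots/<revision>/model.onnx
print(repo_dir)
```

Note that huggingface_hub also keeps refs/ and blobs/ bookkeeping next to snapshots/, so dropping a file in by hand is not always enough on its own.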
Do you mean, using …? If the user needs to use their own model, they can manually place it in the …? Or do you mean, using …?
I meant this, but I have to check; mdir is also used for model.json.
I think what might work is …
But usually we don't specify `local_dir`. The code is feasible; it's just that the cache location is different, and it doesn't affect normal operation.
By the way, it's better to use the model file through an explicit path, because the alternative has to call hf_hub_download(). As long as we provide the user with an explicit path, they can place the model file there manually.
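A sketch of the "explicit path first" idea being argued for here; the directory and file names are illustrative examples, not the extension's actual code:

```python
from pathlib import Path
from huggingface_hub import hf_hub_download

def resolve_model(repo_id: str, filename: str, mdir: Path) -> Path:
    """Prefer a file the user placed manually; only fall back to downloading."""
    local = mdir / filename
    if local.is_file():
        return local                          # explicit path, no network needed
    # Fall back to huggingface_hub only when the file is absent.
    return Path(hf_hub_download(repo_id, filename))

# Example usage (paths are illustrative):
model = resolve_model("SmilingWolf/wd-v1-4-swinv2-tagger-v2", "model.onnx",
                      Path("models/interrogators/wd-v1-4-swinv2-tagger-v2"))
```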
Ok, I will fix that too, then.
I suppose it's a good idea.
Okay, it's in this pull request. Note that I removed all local_dir. It should work with both environment variables, or default to `Path(shared.models_path, 'interrogators')`, which can be configured in Settings -> Tagger. For me it seems to work, though I get no updates in the file models/interrogators/model.json anymore. The question is: does this also work behind the proxy, or with only local dirs?
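To illustrate where that default lands, a hedged sketch assuming the stable-diffusion-webui `shared` module; the settings key used here is a placeholder, not necessarily the extension's real option name:

```python
from pathlib import Path
from modules import shared  # provided by stable-diffusion-webui

# Default location described above.
default_dir = Path(shared.models_path, 'interrogators')

# A Settings -> Tagger override would look roughly like this;
# 'tagger_models_dir' is a placeholder key, not necessarily the real option name.
models_dir = Path(shared.opts.data.get('tagger_models_dir', default_dir))
print(models_dir)
```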
Hmm, from environment_variables, they are not exactly the same.
Edit: The situation is this: Chinese users can manually download the model from the huggingface web page, but they cannot use hf_hub_download(). So the key to the problem is to determine the model file location in advance; then users can manually place the file there, so that there is no need to call hf_hub_download() at all. There is a point worth noting, even if …
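One way to "determine the location in advance" without any network access, assuming stock huggingface_hub: ask for the file with local_files_only=True, which only consults what is already on disk.

```python
from huggingface_hub import hf_hub_download
from huggingface_hub.utils import LocalEntryNotFoundError

def find_cached(repo_id: str, filename: str):
    """Return the on-disk path if the file is already cached/placed, else None."""
    try:
        # local_files_only=True never touches the network.
        return hf_hub_download(repo_id, filename, local_files_only=True)
    except LocalEntryNotFoundError:
        return None

print(find_cached("SmilingWolf/wd-v1-4-swinv2-tagger-v2", "model.onnx"))
```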
I did introduce an …, but users won't get updates anymore.
OK, I got it. |
That's fine, thanks for helping me out; I would have had a hard time understanding the problem. However, I'm a bit afraid that changes here might raise problems for others, so I am also hesitant with this change. Edit: there's the …
I pushed a change on top: d252c2a, see the commit message. It unifies the HuggingFace download and gives more download tweak options. If you want to set local_dir instead of the cache_dir, you can edit the …
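For anyone deciding between the two, a short sketch of the difference, using only stock hf_hub_download arguments; the paths are examples:

```python
from huggingface_hub import hf_hub_download

repo, fname = "SmilingWolf/wd-v1-4-swinv2-tagger-v2", "model.onnx"

# cache_dir: files end up in the hub cache layout (models--Org--Repo/snapshots/...).
p1 = hf_hub_download(repo, fname, cache_dir="/data/hf-cache")

# local_dir: the file is materialised at a plain path of your choosing.
p2 = hf_hub_download(repo, fname, local_dir="models/interrogators/wd-v1-4-swinv2-tagger-v2")
```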
So where should I put the model? Which dir?
Please allow me to wrap it up now, for it's kind of messy by far. By doing so, these SmilingWolf/ models are good to use without network access to huggingface. Could someone help us with that, or is it something that can't be done right now?
My server is not able to download the models from huggingface, so I need to install the models manually.
I would like to know where the model files should be placed, thank you very much.