from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True).half().cuda()
# Modify as needed; only 4-bit and 8-bit quantization are currently supported
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).quantize(8).half().cuda()
# For an INT8-quantized model, change "THUDM/chatglm-6b-int4" to "THUDM/chatglm-6b-int8"
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).half().cuda()

image_path = "./examples/1.jpeg"
response, history = model.chat(tokenizer, image_path, "描述这张图片。", history=[])  # "Describe this image."
print(response)
response, history = model.chat(tokenizer, image_path, "这张图片可能是在什么场所拍摄的?", history=history)  # "Where might this photo have been taken?"
print(response)
Running this code throws an error. What could be the cause?
Hi, have you managed to solve this problem?
I don't understand this. Why are you loading the chatglm model to run visualglm?
I'm getting the same error.
Exactly. It's because they are all running this locally on Mac machines and followed the solution from THUDM/ChatGLM-6B#6.
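For anyone landing here, a minimal sketch of what the corrected loading code might look like: the tokenizer and the model both come from the same visualglm-6b checkpoint rather than chatglm-6b. The CPU fallback line is an assumption on my part, mirroring the style of the workaround referenced in THUDM/ChatGLM-6B#6, not something confirmed in this thread.

from transformers import AutoTokenizer, AutoModel

# Tokenizer and model must come from the same checkpoint: visualglm-6b, not chatglm-6b
tokenizer = AutoTokenizer.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True)

# With a CUDA GPU: run in half precision on the GPU
model = AutoModel.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True).half().cuda()

# On a Mac or any machine without CUDA (untested assumption, following the
# workaround referenced in THUDM/ChatGLM-6B#6): keep the model in float32 on CPU
# model = AutoModel.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True).float()

image_path = "./examples/1.jpeg"
response, history = model.chat(tokenizer, image_path, "描述这张图片。", history=[])  # "Describe this image."
print(response)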