
TypeError: chat() got multiple values for argument 'history' #160

Open
wangjingyu001 opened this issue Jul 3, 2023 · 4 comments
@wangjingyu001

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True).half().cuda()

# Modify as needed; currently only 4/8-bit quantization is supported
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).quantize(8).half().cuda()

# For the INT8-quantized model, change "THUDM/chatglm-6b-int4" to "THUDM/chatglm-6b-int8"
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).half().cuda()

image_path = "./examples/1.jpeg"
response, history = model.chat(tokenizer, image_path, "描述这张图片。", history=[])  # "Describe this image."
print(response)
response, history = model.chat(tokenizer, image_path, "这张图片可能是在什么场所拍摄的?", history=history)  # "Where might this photo have been taken?"
print(response)
```

Running this code raises the error in the title. What is the cause?

@ENjoy924

ENjoy924 commented Jul 6, 2023

Hi, have you solved this problem?

@1049451037
Member

I don't understand. Why are you loading the chatglm model to run visualglm?

@buptsdz

buptsdz commented Oct 29, 2023

I'm getting the same error.

@JiangNingRicky

> I don't understand. Why are you loading the chatglm model to run visualglm?

Exactly. It's because they were all running it locally on Mac machines and followed the workaround from THUDM/ChatGLM-6B#6.
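This explains the traceback. `chatglm-6b`'s `chat()` takes no image argument, so when it is called with VisualGLM-style arguments the image path lands in the prompt slot, the prompt lands in the positional `history` slot, and the extra `history=[]` keyword collides with it. A minimal sketch with hypothetical stand-in signatures (not the real model code) reproduces the exact error:

```python
# Hypothetical stand-ins for the two chat() signatures, to illustrate the clash.

def chatglm_chat(tokenizer, query, history=None):
    """chatglm-6b style: no image argument."""
    return "response", (history or [])

def visualglm_chat(tokenizer, image_path, query, history=None):
    """visualglm-6b style: image path comes before the query."""
    return "response", (history or [])

# Calling the chatglm-style signature with VisualGLM-style arguments:
# "Describe this image." binds positionally to `history`, then history=[]
# tries to bind it again by keyword.
try:
    chatglm_chat("tok", "./examples/1.jpeg", "Describe this image.", history=[])
except TypeError as e:
    print(e)  # chatglm_chat() got multiple values for argument 'history'
```

So the fix is to keep `model = AutoModel.from_pretrained("THUDM/visualglm-6b", ...)` and delete the later lines that overwrite `model` with a chatglm checkpoint.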
