I'm using a model fine-tuned from Qwen2 (Qwen1.5).
When I load the model with BigDL and call the generate method, Python raises an error.
I'm running on an Intel(R) Data Center GPU Flex 170.
The model is loaded via AutoModelForCausalLM.from_pretrained(model_name_or_path, load_in_4bit=True, optimize_model=True, trust_remote_code=True, use_cache=True); a sketch of the full load/generate flow is shown below.
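A minimal sketch of how the model is loaded and run, assuming the bigdl-llm package with the Intel GPU (XPU) backend; `model_name_or_path`, the prompt, and `max_new_tokens` are placeholders rather than the exact values used:

```python
# Minimal reproduction sketch (assumptions: bigdl-llm installed with XPU support,
# model_name_or_path points to the fine-tuned Qwen1.5/Qwen2 checkpoint).
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401 (registers the 'xpu' device)
from transformers import AutoTokenizer
from bigdl.llm.transformers import AutoModelForCausalLM

model_name_or_path = "/path/to/finetuned-qwen1.5"  # placeholder path

# Load the checkpoint with 4-bit weights and BigDL optimizations, as in the issue.
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    load_in_4bit=True,
    optimize_model=True,
    trust_remote_code=True,
    use_cache=True,
)
model = model.to("xpu")  # move the low-bit model to the Flex 170

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)

prompt = "Hello, who are you?"  # placeholder prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("xpu")

with torch.inference_mode():
    # The error described in this issue is raised during generate().
    output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```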
The following is the error message: