Loading a locally downloaded qwen2.5 model #425
Comments
You can launch a vLLM server with your local qwen2.5 model, then access the model through the OpenAI-compatible interface.
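A minimal sketch of what that suggestion can look like. The local path, port, and served model name below are placeholders, not values from this thread:

```python
# Sketch only, assuming vLLM is installed and the Qwen2.5 weights are on disk.
#
# First, launch the OpenAI-compatible server (shell):
#   vllm serve /path/to/Qwen2.5-7B-Instruct --served-model-name qwen2.5-7b-instruct
#
# Then any OpenAI-style client can talk to it:
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:8000/v1',  # vLLM's default OpenAI-compatible endpoint
    api_key='EMPTY',                      # vLLM does not validate the key by default
)
resp = client.chat.completions.create(
    model='qwen2.5-7b-instruct',  # must match --served-model-name
    messages=[{'role': 'user', 'content': 'Hello!'}],
)
print(resp.choices[0].message.content)
```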
Does this approach still require calling the OpenAI API? Do I still need an OpenAI API key?
A locally deployed model does not need the OpenAI API or a key. Refer to this parameter configuration: you only need to pass in the URL of the local deployment.
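The exact parameter configuration being referred to is not reproduced in this thread, but a sketch with Qwen-Agent's get_chat_model, assuming the vLLM server from the previous reply is listening on localhost:8000, could look like this:

```python
# Sketch only; assumes a local vLLM server at localhost:8000 serving the
# model under the name 'qwen2.5-7b-instruct'.
from qwen_agent.llm import get_chat_model

llm = get_chat_model({
    'model': 'qwen2.5-7b-instruct',              # the served model name
    'model_server': 'http://localhost:8000/v1',  # local URL instead of an OpenAI endpoint
    'api_key': 'EMPTY',                          # placeholder; no real OpenAI key required
})

# Afterwards, usage is the same as in function_calling.py, e.g.:
for responses in llm.chat(messages=[{'role': 'user', 'content': 'Hello!'}]):
    pass
print(responses)
```

The key point is that 'model_server' takes the local URL in place of an OpenAI endpoint, which is why no real API key is needed.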
Started it successfully, thanks to the author for the reply! Recording the process here for reference by anyone who needs it:
Hello author, in function_calling.py the model is loaded via get_chat_model. How can I load a locally downloaded qwen model? I could not find a relevant function. Could you give me an example? Thanks.