[Bug]: Can't load vision model microsoft/Phi-3.5-vision-instruct
#7781
Comments
This issue should have been fixed by #7710.

```bash
export VLLM_VERSION=0.5.4 # vLLM's main branch version is currently set to latest released tag
pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/nightly/vllm-${VLLM_VERSION}-cp38-abi3-manylinux1_x86_64.whl

# You can also access a specific commit
# export VLLM_COMMIT=...
# pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/${VLLM_COMMIT}/vllm-${VLLM_VERSION}-cp38-abi3-manylinux1_x86_64.whl
```
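Since the reported failure happens at model load time, a minimal load check after installing the nightly wheel is enough to verify the fix. This is a sketch, not an official test; `max_model_len=4096` is an illustrative setting, adjust it for your GPU memory:

```python
# Sketch: verify that microsoft/Phi-3.5-vision-instruct now loads with
# the nightly wheel. Settings are illustrative, not prescriptive.
from vllm import LLM

llm = LLM(
    model="microsoft/Phi-3.5-vision-instruct",
    trust_remote_code=True,   # Phi-3.5-vision ships custom model code
    max_model_len=4096,       # assumption: tune to your GPU memory
)
print("model loaded OK")
```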
I also hit this error when trying to start Phi 3.5 from the CLI with `vllm serve microsoft/Phi-3.5-vision-instruct --tensor-parallel-size=2 --disable-log-stats --disable-log-requests --trust-remote-code --max-model-len 4096`. Any fixes for that?
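Once `vllm serve` does start cleanly, the server exposes an OpenAI-compatible API, so the vision path can be smoke-tested with the `openai` client. A sketch, assuming the default port 8000 and with a placeholder image URL:

```python
# Sketch: smoke-test the vision endpoint of a running vLLM server.
# Assumes the default address http://localhost:8000; replace the
# placeholder image URL with a real one.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="microsoft/Phi-3.5-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/image.jpg"}},
        ],
    }],
    max_tokens=64,
)
print(response.choices[0].message.content)
```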
It works fine in a custom VM, but how can I make it work in a serverless inference endpoint?
Closing as this is fixed on the main branch.
Your current environment
The output of `python collect_env.py`
🐛 Describe the bug
Reproduction
Bug
Full Traceback