[VLM] Qwen2.5-VL #12604
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Sure. There you go!
You should pass a numpy array directly to
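(The scrape cut off the target of that advice. Assuming it refers to vLLM's offline `multi_modal_data` input for video, here is a minimal sketch; the model name, prompt template, and dummy frames are illustrative, not taken from this thread.)

```python
import numpy as np
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-VL-7B-Instruct")

# A dummy clip: 16 RGB frames, 224x224, shaped (frames, height, width, channels).
video = np.zeros((16, 224, 224, 3), dtype=np.uint8)

outputs = llm.generate(
    {
        # Qwen2-VL-style chat template with a video placeholder.
        "prompt": "<|im_start|>user\n<|vision_start|><|video_pad|><|vision_end|>"
                  "Describe this video.<|im_end|>\n<|im_start|>assistant\n",
        "multi_modal_data": {"video": video},
    },
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```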
Just fixed it. Brilliant, thanks for your prompt reply!

@rstone3017, have you solved it? I also ran into this problem.
Can Qwen2.5-VL-7B run on a V100?

With transformers 4.49.0.dev0 I hit:
How can I POST a request with a local image or local video?
You can set `--allowed-local-media-path` when starting the server; see the demo below.
This should be fixed by #12828; can you try using the latest code?
Were you able to run this model with BNB quantization? I tried but failed (#12900). Could you provide any idea or instructions on how to fix this? Appreciated.
@MotorBottle I don't think this model is supported with BNB yet. See #12604 (comment)
@yfllllll have you solved it? I also ran into this problem.
@MotorBottle Can you try #12944? BNB support for Qwen2.5-VL should be added in that PR.
Can you give a demo of passing in a local image?
Confirmed working with #12944. Qwen2.5-VL-7B-Instruct tested.
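For reference, in-flight BNB loading in vLLM is enabled via the quantization and load-format options. A minimal sketch, assuming a build that already includes #12944:

```python
from vllm import LLM

# bitsandbytes in-flight quantization (requires a vLLM build with #12944 merged).
llm = LLM(
    model="Qwen/Qwen2.5-VL-7B-Instruct",
    quantization="bitsandbytes",
    load_format="bitsandbytes",
)
```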
Start the server with local media access enabled:

```bash
vllm serve <model> --allowed-local-media-path /path/to/data
```

Then send the request with the OpenAI client:

```python
from openai import OpenAI

openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

model = client.models.list().data[0].id

chat_response = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "file://path/to/data/path/to/image.jpg",
                    },
                },
                {"type": "text", "text": "What is in this image?"},
            ],
        }
    ],
)
```
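One note on the `file://` URL: as far as I can tell, the path has to resolve to a file under the directory passed to `--allowed-local-media-path`, otherwise the server refuses to read it.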
I am using the latest vLLM v0.7.2 Docker image, but it fails to serve the Qwen2.5-VL-7B model. The error message:

It seems the Docker image was not built with the required transformers version.
Yes, you need to manually install transformers from source (see the note at the end of the PR description).
Fixed, thanks a lot!
I am hitting this issue when trying to run it with this script:
Traceback (most recent call last):

Could you please take a look? I seem to have encountered a similar error; it is reported whenever tp>1. @ywang96, how can I solve it?
I think this should be fixed by #12828 already; can you pull the latest code and try again?
Yes, I've pulled the latest code and tried it. I don't know what causes the bug to be reliably triggered when tp>1, but there is no problem deploying the 7B model on a single card.
Can you open a new issue and show your output of
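For anyone trying to reproduce the report above: the failing and working setups differ only in the tensor-parallel degree (a sketch; the model path is an assumption, the thread does not name it):

```python
from vllm import LLM

# Reported to crash at the time of this thread:
# llm = LLM(model="Qwen/Qwen2.5-VL-7B-Instruct", tensor_parallel_size=2)

# Reported to work (single card):
llm = LLM(model="Qwen/Qwen2.5-VL-7B-Instruct", tensor_parallel_size=1)
```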
Tried to do inference on Qwen2.5-VL via vLLM 0.7.2 and the current dev transformers, but I get this import error: ImportError: cannot import name 'Qwen2_5_VLImageProcessor' from 'transformers.models.qwen2_5_vl' (/usr/local/lib/python3.12/dist-packages/transformers/models/qwen2_5_vl/__init__.py). Did you mean: 'Qwen2_5_VLProcessor'? Am I doing something wrong, or has transformers dev changed again?
Transformers dev has changed. Please update vLLM and also your local version of the HF Hub repo.
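A quick way to check whether the installed transformers snapshot is new enough (a sketch; the class name is the one the error message itself suggests):

```python
import transformers

# Qwen2.5-VL support lives on the 4.49 dev line; older dev snapshots still
# exported Qwen2_5_VLImageProcessor, which has since been removed.
print(transformers.__version__)

# On a current install this import should succeed:
from transformers import Qwen2_5_VLProcessor
print(Qwen2_5_VLProcessor)
```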
FIXES: #12486, #12532
TODO:
To run this model before the transformers 4.49 release, install transformers from source:

```bash
pip install git+https://github.com/huggingface/transformers
```
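Once that is installed, a minimal offline-inference smoke test could look like this (a sketch: the prompt string follows the Qwen2-VL chat format, and the image path is a placeholder):

```python
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-VL-7B-Instruct", max_model_len=8192)

# Placeholder image; substitute any local RGB image.
image = Image.open("demo.jpg").convert("RGB")

prompt = (
    "<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>"
    "What is in this image?<|im_end|>\n<|im_start|>assistant\n"
)

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```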
Co-authored-by: @yixqiao (UC Berkeley), @wulipc (Qwen Team)