[Bug]: ZMQError: Address already in use (addr='tcp://127.0.0.1:5570') #7196
So you have two versions of vllm running on the same machine?
I will look into this.
In the meantime, you can fall back to the old front end by running with
Got the same problem, came with the ZMQ integration.
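For anyone reproducing this outside vllm: the ZMQError above wraps the same OS-level EADDRINUSE that any second bind to an already-bound TCP address produces. A minimal stdlib illustration (plain sockets stand in for the ZMQ socket, and an OS-assigned port stands in for the fixed 5570 so the snippet runs anywhere):

```python
import errno
import socket

# First "server" binds its RPC address successfully.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
addr = first.getsockname()

# A second bind to the exact same address fails with EADDRINUSE,
# which pyzmq surfaces as "ZMQError: Address already in use".
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(addr)
except OSError as e:
    assert e.errno == errno.EADDRINUSE
    print("second bind failed:", e)
finally:
    second.close()
    first.close()
```

Because both vllm servers ask for the same fixed address (tcp://127.0.0.1:5570), the second one hits exactly this path.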
@ccdv-ai can you share any additional information about your setup? Also, does
Tested with v0.5.4, 4 x L40.
I have to change the port each time I kill the vllm server because the port isn't released. Also got a … No problem with v0.5.3.post1.
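On the port not being released: connections accepted on the port can linger in TIME_WAIT after the process dies, and rebinding the same fixed port then fails until the kernel times them out. With plain sockets standing in for the ZMQ socket, the usual mitigation is SO_REUSEADDR; this is a general sketch, not what vllm does internally:

```python
import socket

def bind_reusable(host: str, port: int) -> socket.socket:
    """Bind a listener that tolerates a previous instance's
    connections lingering in TIME_WAIT on the same port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Without SO_REUSEADDR, rebinding right after killing the old
    # server can raise "OSError: [Errno 98] Address already in use"
    # until TIME_WAIT expires.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, port))
    s.listen()
    return s
```

Note that SO_REUSEADDR does not help while the old process is still alive and holding the port; in that case the old server really must exit first.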
@ccdv-ai are you setting any environment variables? I have been unable to reproduce your issue so far.
Specifically, did you set
This will resolve any issues associated with setting
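Until a fix lands, one way to run several servers on one machine is to give each instance its own RPC port through the VLLM_RPC_PORT environment variable that the issue body points at. A hedged sketch (rpc_envs is a hypothetical helper, and the base port 5570 comes from the error message; pass each dict as env= to subprocess.Popen when launching the corresponding server):

```python
import os

BASE_RPC_PORT = 5570  # default port seen in the error message

def rpc_envs(n: int) -> list:
    """Build one environment per server instance, each with a
    distinct VLLM_RPC_PORT (hypothetical helper for illustration)."""
    return [
        dict(os.environ, VLLM_RPC_PORT=str(BASE_RPC_PORT + i))
        for i in range(n)
    ]

envs = rpc_envs(2)
print([e["VLLM_RPC_PORT"] for e in envs])  # two distinct ports
```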
@WMeng1 I have reproduced your report. Working on a fix.
Fixed by #7205.
Needs #7222 to land.
Closing because #7222 landed.
Hi guys, is this fix already on Docker Hub?
Your current environment
🐛 Describe the bug
When I run bash xxx.sh like this, the first server starts successfully, but the second one breaks with this error message:

I'm not sure where the problem is; maybe at line 110 of vllm/entrypoints/openai/rpc/server.py:
port = get_open_port(envs.VLLM_RPC_PORT)?
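The quoted line suggests the RPC server asks for one specific configured port rather than letting the OS choose. Purely as illustration, a more forgiving get_open_port could try the preferred port first and fall back to an OS-assigned ephemeral port; this is a hedged sketch, not vllm's actual implementation (the fallback behavior is an assumption):

```python
import socket
from typing import Optional

def get_open_port(preferred: Optional[int] = None) -> int:
    """Return `preferred` if it is currently free to bind,
    otherwise an OS-assigned ephemeral port (sketch only)."""
    if preferred is not None:
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.bind(("127.0.0.1", preferred))
                return preferred
        except OSError:
            pass  # preferred port busy, e.g. another server holds it
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))  # port 0: kernel picks a free port
        return s.getsockname()[1]
```

Note the classic race: the probed port can be taken by another process between this check and the actual bind, so the caller still has to handle EADDRINUSE.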