
timeout #622

Open
KLL535 opened this issue Dec 29, 2024 · 1 comment
Comments

KLL535 commented Dec 29, 2024

I need help!

(venv) C:\python2\qwen2>python web_demo_mm.py
Qwen2VLRotaryEmbedding can now be fully parameterized by passing the model config through the config argument. All other arguments will be removed in v4.46
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:09<00:00, 1.91s/it]
C:\python2\qwen2\venv\lib\site-packages\gradio\components\chatbot.py:242: UserWarning: You have not specified a value for the type parameter. Defaulting to the 'tuples' format for chatbot messages, but this is deprecated and will be removed in a future version of Gradio. Please set type='messages' instead, which uses openai-style dictionaries with 'role' and 'content' keys.
warnings.warn(

To create a public link, set share=True in launch().
C:\python2\qwen2\venv\lib\site-packages\gradio\blocks.py:1780: UserWarning: A function (add_text) returned too many output values (needed: 2, returned: 3). Ignoring extra values.
Output components:
[chatbot, state]
Output values returned:
[[('hello', None)], [('hello', None)], ""]
warnings.warn(
User: hello
Qwen-VL-Chat: Hello! How can I help you today?
C:\python2\qwen2\venv\lib\site-packages\gradio\blocks.py:1780: UserWarning: A function (add_text) returned too many output values (needed: 2, returned: 3). Ignoring extra values.
Output components:
[chatbot, state]
Output values returned:
[[['hello', 'Hello! How can I help you today?'], [('C:\Users\AnY\AppData\Local\Temp\gradio\dec05449633943fe5fecdba0c1adbda0caa51f5c544b2dcc9a0b2ce55f997b89\45738874.jpeg',), None], ('write me an in-depth prompt', None)], [('hello', 'Hello! How can I help you today?'), (('C:\Users\AnY\AppData\Local\Temp\gradio\dec05449633943fe5fecdba0c1adbda0caa51f5c544b2dcc9a0b2ce55f997b89\45738874.jpeg',), None), ('write me an in-depth prompt', None)], ""]
warnings.warn(
User: write me an in-depth prompt
Traceback (most recent call last):
File "C:\python2\qwen2\venv\lib\site-packages\gradio\queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
File "C:\python2\qwen2\venv\lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "C:\python2\qwen2\venv\lib\site-packages\gradio\blocks.py", line 2047, in process_api
result = await self.call_function(
File "C:\python2\qwen2\venv\lib\site-packages\gradio\blocks.py", line 1606, in call_function
prediction = await utils.async_iteration(iterator)
File "C:\python2\qwen2\venv\lib\site-packages\gradio\utils.py", line 714, in async_iteration
return await anext(iterator)
File "C:\python2\qwen2\venv\lib\site-packages\gradio\utils.py", line 708, in __anext__
return await anyio.to_thread.run_sync(
File "C:\python2\qwen2\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "C:\python2\qwen2\venv\lib\site-packages\anyio\_backends\asyncio.py", line 2505, in run_sync_in_worker_thread
return await future
File "C:\python2\qwen2\venv\lib\site-packages\anyio\_backends\asyncio.py", line 1005, in run
result = context.run(func, *args)
File "C:\python2\qwen2\venv\lib\site-packages\gradio\utils.py", line 691, in run_sync_iterator_async
return next(iterator)
File "C:\python2\qwen2\venv\lib\site-packages\gradio\utils.py", line 852, in gen_wrapper
response = next(iterator)
File "C:\python2\qwen2\web_demo_mm.py", line 189, in predict
for response in call_local_model(model, processor, messages):
File "C:\python2\qwen2\web_demo_mm.py", line 157, in call_local_model
for new_text in streamer:
File "C:\python2\qwen2\venv\lib\site-packages\transformers\generation\streamers.py", line 224, in __next__
value = self.text_queue.get(timeout=self.timeout)
File "queue.py", line 179, in get
_queue.Empty

@KLL535 KLL535 changed the title No module named '_socket' _queue.Empty Dec 29, 2024

KLL535 (Author) commented Dec 29, 2024

In web_demo_mm.py, change

streamer = TextIteratorStreamer(tokenizer, timeout=20.0, skip_prompt=True, skip_special_tokens=True)

to

streamer = TextIteratorStreamer(tokenizer, timeout=1000.0, skip_prompt=True, skip_special_tokens=True)

What kind of hardware are you running where a 16 GB model produces output in under 20 seconds?
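For context on why raising the timeout works: `TextIteratorStreamer` is backed by a `queue.Queue`, and iterating it calls `get(timeout=self.timeout)`. If the model takes longer than the timeout to produce the next token (as it can on slower hardware during the first forward pass), `queue.Empty` propagates, which is exactly the traceback above. A minimal self-contained sketch of that mechanism (the producer/consumer names here are illustrative, not from transformers):

```python
import queue
import threading
import time

def slow_generator(q: queue.Queue, delay: float, n_tokens: int) -> None:
    """Stands in for model.generate() pushing tokens into the streamer's queue."""
    for i in range(n_tokens):
        time.sleep(delay)  # simulates slow per-token generation
        q.put(f"token{i}")
    q.put(None)  # sentinel: generation finished

def stream(q: queue.Queue, timeout: float):
    """Stands in for TextIteratorStreamer.__next__: queue.get with a timeout."""
    while True:
        item = q.get(timeout=timeout)  # raises queue.Empty if timeout elapses
        if item is None:
            return
        yield item

q = queue.Queue()
threading.Thread(target=slow_generator, args=(q, 0.2, 3), daemon=True).start()

# A timeout shorter than the per-token delay fails, just as timeout=20.0
# does on hardware where the first token takes longer than 20 seconds:
try:
    tokens = list(stream(q, timeout=0.05))
    outcome = "ok"
except queue.Empty:
    outcome = "timed out"
print(outcome)  # -> timed out
```

Raising the timeout (or passing `timeout=None` to `TextIteratorStreamer`, which makes `queue.get` block indefinitely) gives slow generation time to deliver the next token instead of killing the stream.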

@KLL535 KLL535 changed the title _queue.Empty timeout Dec 29, 2024