Bug Description

In the following two astream_chat() implementations, asyncio tasks are created that are neither awaited nor referenced after creation:

llama_index/llama-index-core/llama_index/core/chat_engine/simple.py, line 205 (commit adc1beb)
llama_index/llama-index-core/llama_index/core/chat_engine/condense_question.py, line 362 (commit adc1beb)

To my current understanding, this leads to undefined behaviour, as the Python garbage collector is free to collect these unreferenced task objects at any point in time:
https://textual.textualize.io/blog/2023/02/11/the-heisenbug-lurking-in-your-async-code/

Version

v0.12.16

Relevant Logs/Tracebacks

Some background info: I started looking at this when I noticed messages such as the following while overloading the LLM behind the application, leading to connection errors:

ERROR: asyncio [12-02-2025 14:07:13] Task exception was never retrieved
future: <Task finished name='Task-794' coro=<Dispatcher.span.<locals>.async_wrapper() done, defined at /usr/local/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py:349> exception=APIConnectionError('Connection error.')>

The "Task exception was never retrieved" message is itself a sign that a task was never awaited (I'm not sure whether this happened during the exact task linked above, though).
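For illustration, here is a minimal, self-contained sketch (not llama_index code; the coroutine and task names are made up) of the fire-and-forget pattern described above and the usual mitigation of keeping a strong reference to the task until it is done:

```python
import asyncio

async def stream_tokens() -> None:
    # Stand-in for the background work astream_chat() starts;
    # imagine it failing under load with a connection error.
    await asyncio.sleep(0.1)
    raise RuntimeError("Connection error.")

async def fire_and_forget() -> None:
    # Problematic pattern: the task is neither awaited nor stored anywhere.
    # The event loop only keeps a weak reference to it, so the garbage
    # collector may reclaim the task mid-flight, and its exception is only
    # reported later as "Task exception was never retrieved".
    asyncio.create_task(stream_tokens())
    await asyncio.sleep(0.5)

_background_tasks: set[asyncio.Task] = set()

async def keep_a_reference() -> None:
    # Mitigation: hold a strong reference until the task finishes and drop
    # it via a done-callback; awaiting the task retrieves its exception
    # explicitly instead of losing it.
    task = asyncio.create_task(stream_tokens())
    _background_tasks.add(task)
    task.add_done_callback(_background_tasks.discard)
    try:
        await task
    except RuntimeError:
        pass  # handle the failure instead of losing it

asyncio.run(fire_and_forget())
asyncio.run(keep_a_reference())
```

This mirrors the advice in the asyncio documentation and the linked blog post: either await the task, or keep a reference to it and retrieve its result eventually.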
imo I would really move to using AgentWorkflow for single agents [1] and multi-agent setups [2] -- the implementation in these older agent classes is very janky because it needs to expose a generator return type
That being said, the fix here is probably to store a reference to the task on the StreamingAgentChatResponse object and handle/clean it up appropriately there
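A rough sketch of what that could look like, using a simplified stand-in for StreamingAgentChatResponse (the attribute and method names below are hypothetical assumptions, not the actual llama_index API):

```python
import asyncio
from typing import Optional

class StreamingAgentChatResponse:
    """Simplified stand-in for the real class; only the task handling is shown."""

    def __init__(self) -> None:
        # Hypothetical attribute: keep a strong reference to the writer task
        # so it cannot be garbage-collected while the stream is consumed.
        self._writer_task: Optional[asyncio.Task] = None

    def set_writer_task(self, task: asyncio.Task) -> None:
        self._writer_task = task

    async def aclose(self) -> None:
        # Hypothetical cleanup hook: await the stored task so any exception
        # surfaces here instead of being logged as
        # "Task exception was never retrieved" when the task is collected.
        task, self._writer_task = self._writer_task, None
        if task is not None:
            await task
```

In astream_chat() the call site would then pass the result of asyncio.create_task() into something like set_writer_task() rather than dropping the return value.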
@logan-markewich As switching over to the workflow-based approach would be non-trivial for us, I've submitted a PR to fix this issue. Let me know what you think.