
Chainlit implementation with tools gives - ValueError: Consecutive empty chunks found. Change max_empty_consecutive_chunk_tolerance to increase empty chunk tolerance #5578

Closed
ipshitag opened this issue Feb 17, 2025 · 4 comments

@ipshitag

What happened?

I followed the sample code line by line (Agent Chat and Team Chat). In both cases, the task without function calling worked fine, but the one with function calling returned:

ValueError: Consecutive empty chunks found. Change max_empty_consecutive_chunk_tolerance to increase empty chunk tolerance

This is the full stack trace:

2025-02-18 00:17:48 - Loaded .env file
2025-02-18 00:17:51 - Your app is available at http://localhost:8000
2025-02-18 00:17:54 - Translated markdown file for en-US not found. Defaulting to chainlit.md.
2025-02-18 00:17:59 - HTTP Request: POST https://aoai--dreamdemo-common.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-08-01-preview "HTTP/1.1 200 OK"
2025-02-18 00:17:59 - Consecutive empty chunks found. Change max_empty_consecutive_chunk_tolerance to increase empty chunk tolerance
Traceback (most recent call last):
  File "C:\Users\v-ighosh\Desktop\work pro\agentic\agentic\.venv\lib\site-packages\chainlit\utils.py", line 47, in wrapper
    return await user_function(**params_values)
  File "C:\Users\v-ighosh\Desktop\work pro\agentic\agentic\.venv\lib\site-packages\chainlit\callbacks.py", line 121, in with_parent_id
    await func(message)
  File "C:\Users\v-ighosh\Desktop\work pro\agentic\agentic\app_agent.py", line 58, in chat
    async for msg in agent.on_messages_stream(
  File "C:\Users\v-ighosh\Desktop\work pro\agentic\agentic\.venv\lib\site-packages\autogen_agentchat\agents\_assistant_agent.py", line 405, in on_messages_stream     
    async for chunk in self._model_client.create_stream(
  File "C:\Users\v-ighosh\Desktop\work pro\agentic\agentic\.venv\lib\site-packages\autogen_ext\models\openai\_openai_client.py", line 734, in create_stream
    raise ValueError(
ValueError: Consecutive empty chunks found. Change max_empty_consecutive_chunk_tolerance to increase empty chunk tolerance

What did you expect to happen?

For the function call to execute properly.

How can we reproduce it (as minimally and precisely as possible)?

I did not make any changes to the sample code, so running it as-is should reproduce the issue.

AutoGen version

0.7.4

Which package was this bug in

AgentChat

Model used

gpt-4o

Python version

No response

Operating system

No response

Any additional info you think would be helpful for fixing this bug

No response

@ekzhu (Collaborator) commented Feb 18, 2025

@ipshitag please upgrade your packages to the latest version: pip install -U autogen-agentchat "autogen-ext[openai]" (the quotes keep the bracketed extra from being globbed by some shells).

Also, not sure if this is intended, but you reported:

AutoGen version
0.7.4

The latest is 0.4.7. Make sure you have the right package and version installed.
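
If you want to double-check which versions are actually installed, here is a quick standard-library check (package names as in the pip command above):

import importlib.metadata

# Print the installed versions of the AutoGen packages.
for pkg in ("autogen-agentchat", "autogen-ext"):
    try:
        print(pkg, importlib.metadata.version(pkg))
    except importlib.metadata.PackageNotFoundError:
        print(pkg, "not installed")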

@ekzhu added the awaiting-op-response label (issue or PR has been triaged or responded to and is now awaiting a reply from the original poster) and removed the needs-triage label on Feb 18, 2025
@ipshitag (Author)

Thank you, that worked. I am still confused how I ended up with 0.7.4 lol

I was trying to use SelectorGroupChat with a human in the loop; code as follows:

from typing import cast

import chainlit as cl
from autogen_agentchat.base import TaskResult
from autogen_agentchat.messages import ModelClientStreamingChunkEvent, TextMessage
from autogen_agentchat.teams import SelectorGroupChat
from autogen_core import CancellationToken


@cl.on_chat_start  # type: ignore
async def start() -> None:
    # The agents, model_client, termination, and selector_prompt are defined earlier (omitted here).
    team = SelectorGroupChat(
        [planning_agent, data_assistant, billing_assistant, ticket_assistant, bill_correction_agent],
        model_client=model_client,
        termination_condition=termination,
        selector_prompt=selector_prompt,
        allow_repeated_speaker=True,  # Allow an agent to speak multiple turns in a row.
    )

    # Store the team in the user session.
    cl.user_session.set("prompt_history", "")  # type: ignore
    cl.user_session.set("team", team)  # type: ignore


@cl.on_message  # type: ignore
async def chat(message: cl.Message) -> None:
    # Get the team from the user session.
    team = cast(SelectorGroupChat, cl.user_session.get("team"))  # type: ignore
    # Streaming response message.
    streaming_response: cl.Message | None = None
    # Stream the messages from the team.
    async for msg in team.run_stream(
        task=[TextMessage(content=message.content, source="user")],
        cancellation_token=CancellationToken(),
    ):
        if isinstance(msg, ModelClientStreamingChunkEvent):
            # Stream the model client response to the user.
            if streaming_response is None:
                # Start a new streaming response.
                streaming_response = cl.Message(content="", author=msg.source)
            await streaming_response.stream_token(msg.content)
        elif streaming_response is not None:
            # Done streaming the model client response.
            # We can skip the current message as it is just the complete message
            # of the streaming response.
            await streaming_response.send()
            # Reset the streaming response so we won't enter this block again
            # until the next streaming response is complete.
            streaming_response = None
        elif isinstance(msg, TaskResult):
            # Send the task termination message.
            final_message = "Task terminated. "
            if msg.stop_reason:
                final_message += msg.stop_reason
            await cl.Message(content=final_message).send()
        else:
            # Skip all other message types.
            pass
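
For the ModelClientStreamingChunkEvent branch above to fire at all, the participating agents need model-client streaming turned on. A minimal sketch, assuming autogen-agentchat 0.4.x where AssistantAgent accepts a model_client_stream flag (the agent name here is illustrative):

from autogen_agentchat.agents import AssistantAgent

planning_agent = AssistantAgent(
    name="planning_agent",
    model_client=model_client,   # the same client passed to the team above
    model_client_stream=True,    # emit ModelClientStreamingChunkEvent while generating
)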

If I add a user_proxy, it starts taking input from the terminal, and if I don't, it goes into a loop.

Is there any example available for a human-in-the-loop type use case?

I don't know if this is the correct thread to ask questions; let me know where I should post.

@github-actions bot removed the awaiting-op-response label on Feb 18, 2025
@ekzhu (Collaborator) commented Feb 19, 2025

@ipshitag the user proxy's input function can be customized so that it does not take input from the terminal. Since you are using Chainlit, you need a Chainlit-specific input function. There is a FastAPI example in the samples that has a user proxy. For Chainlit, you may check out: https://docs.chainlit.io/api-reference/ask/ask-for-input. I am not familiar with Chainlit, so you will need to dig into it a bit. @victordibia may help here.
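
As a rough illustration of that idea, here is a minimal sketch, assuming autogen-agentchat 0.4.x (where UserProxyAgent accepts an input_func) and Chainlit's AskUserMessage API from the link above; the names chainlit_input and user_proxy are illustrative, not from the sample:

from typing import Optional

import chainlit as cl
from autogen_agentchat.agents import UserProxyAgent
from autogen_core import CancellationToken

# Hypothetical Chainlit-backed input function: ask in the UI instead of the terminal.
async def chainlit_input(prompt: str, cancellation_token: Optional[CancellationToken] = None) -> str:
    response = await cl.AskUserMessage(content=prompt, timeout=300).send()
    # AskUserMessage resolves to a dict with the user's reply under "output" (None on timeout).
    return response["output"] if response else ""

# Wire the function into the user proxy so team runs pause for UI input.
user_proxy = UserProxyAgent(name="user_proxy", input_func=chainlit_input)

The user_proxy can then be added to the SelectorGroupChat participants like any other agent.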

If you figure it out, we would appreciate it if you could submit a PR to update the sample.

@ekzhu (Collaborator) commented Feb 19, 2025

Closing this now. Please use #5610 for the Chainlit + UserProxyAgent discussion.

@ekzhu closed this as completed on Feb 19, 2025