
[fix][lmi] fix issue where streaming response that finishes with empt… #2726

Merged
merged 1 commit into deepjavalibrary:master on Feb 6, 2025

Conversation

Contributor

@siddvenk siddvenk commented Feb 6, 2025

Description

There was a bug in the RollingBatch.java implementation that was recently triggered by one of the integration tests. The issue concerned how we decide when to send a new chunk of data back to the client. In the non-streaming case, the engine reports back empty tokens until the request is finished, and it reports the HTTP code on the final chunk, which allows us to provide the correct experience to end users.

With streaming, however, the final token provided by the engine can itself be empty or null. In that case the old check, which only treated an empty token as sendable when an HTTP code was present, never fires, so the request hangs inside the model server until it times out.

This PR updates the logic for determining when to send a new chunk of information back to the client.
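
The relevant change is in the condition that gates whether a chunk is dispatched. Below is a minimal sketch of that decision, assuming simplified names (ChunkDispatchSketch, shouldSkipOld, shouldSkipNew, and their parameters are illustrative, not the actual RollingBatch members): the old predicate withholds an empty token whenever no HTTP code is present, while the new predicate withholds an empty token only while the chunk is not marked as the last one.

// Minimal sketch of the chunk-dispatch decision; names are illustrative, not the real fields.
final class ChunkDispatchSketch {

    /** Old behavior: an empty token with no HTTP code is never flushed to the client. */
    static boolean shouldSkipOld(String nextToken, Integer httpCode) {
        return (nextToken == null || nextToken.isEmpty()) && httpCode == null;
    }

    /** New behavior: an empty token is withheld only while more chunks are still expected. */
    static boolean shouldSkipNew(String nextToken, boolean last) {
        return (nextToken == null || nextToken.isEmpty()) && !last;
    }

    public static void main(String[] args) {
        // Streaming case that triggered the bug: the final chunk carries an empty
        // token and no HTTP code, but it is flagged as the last chunk.
        String finalToken = "";
        Integer httpCode = null;
        boolean last = true;

        System.out.println("old check skips final chunk: " + shouldSkipOld(finalToken, httpCode)); // true  -> request hangs
        System.out.println("new check skips final chunk: " + shouldSkipNew(finalToken, last));     // false -> chunk is flushed
    }
}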

I ran the failing test tests.py::TestVllm1::test_llama3_1_8b_instruct_tool locally and this fixes the issue. I also validated a couple of other test cases locally to ensure no regression.

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Checklist:

  • Please add the link of Integration Tests Executor run with related tests.
  • Have you manually built the Docker image and verified the change?
  • Have you run related tests? Check how to set up the test environment here; one example would be pytest tests.py -k "TestCorrectnessLmiDist" -m "lmi_dist"
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

Feature/Issue validation/testing

Please describe the unit or integration tests that you ran to verify your changes, along with a summary of the relevant results. Provide instructions so the tests can be reproduced.
Please also list any relevant details of your test configuration.

  • Test A
    Logs for Test A

  • Test B
    Logs for Test B

@siddvenk siddvenk requested review from zachgk and a team as code owners February 6, 2025 21:50
@@ -342,7 +342,7 @@ void addResponse(byte[] json, Map<String, String> properties) {
                 break;
             }
         }
-        if ((nextToken == null || nextToken.isEmpty()) && code == null) {
+        if ((nextToken == null || nextToken.isEmpty()) && !last) {
Contributor

For tool calling, the last token might be empty as well.

Contributor Author

the test you put in caught this issue :)
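
To make the reviewer's point concrete, here is a hedged illustration of a streamed tool-call response. The chunk contents below are invented for the example and are not actual engine output; the point is only that the terminal chunk of a tool call can carry an empty token and no HTTP code, so a check keyed on the code alone never flushes it, while a check keyed on the last flag does.

import java.util.List;

// Illustrative only: invented chunk contents for a streamed tool call that ends
// with an empty token flagged as the last chunk.
public class ToolCallStreamExample {

    record Chunk(String token, Integer httpCode, boolean last) {}

    public static void main(String[] args) {
        List<Chunk> stream = List.of(
                new Chunk("{\"name\": \"get_weather\",", null, false),
                new Chunk(" \"arguments\": {\"city\": \"Paris\"}}", null, false),
                // Terminal chunk: empty token, no HTTP code, flagged as last.
                new Chunk("", null, true));

        for (Chunk c : stream) {
            boolean emptyToken = c.token() == null || c.token().isEmpty();
            boolean oldSkips = emptyToken && c.httpCode() == null; // pre-fix condition
            boolean newSkips = emptyToken && !c.last();            // post-fix condition
            System.out.printf("token=%-40s oldSkips=%-5b newSkips=%b%n",
                    "\"" + c.token() + "\"", oldSkips, newSkips);
        }
    }
}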

@siddvenk siddvenk merged commit ed821e5 into deepjavalibrary:master Feb 6, 2025
8 checks passed
@siddvenk siddvenk deleted the timeout branch February 6, 2025 22:15