server: fix the disappearance of the end of the text when streaming with stop strings #9867
Conversation
examples/server/server.cpp
Outdated
@@ -1083,7 +1083,7 @@ struct server_context {
                }

                // check if there is any token to predict
-               if (stop_pos == std::string::npos || (!slot.has_next_token && !is_stop_full && stop_pos > 0)) {
+               if (stop_pos == std::string::npos || is_stop_full || (!slot.has_next_token && !is_stop_full && stop_pos > 0)) {
I think this `if` will always evaluate to `true`:

- if `stop_pos == std::string::npos` -> `true`
- if `stop_pos != std::string::npos`, then `is_stop_full == true` due to line 1075 -> `true`
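As a standalone check of that claim (illustrative code, not from the PR): under the invariant that a stop string found at this point is always a full match, the condition proposed in the diff above is true in every reachable state.

```cpp
#include <cassert>
#include <string>

int main() {
    constexpr size_t npos = std::string::npos;

    // Enumerate the states that can reach the check, under the invariant noted
    // above: is_stop_full == (stop_pos != npos).
    for (size_t stop_pos : { npos, size_t(0), size_t(3) }) {
        for (bool has_next_token : { false, true }) {
            const bool is_stop_full = (stop_pos != npos);

            // the condition proposed in the diff above
            const bool send = stop_pos == npos || is_stop_full
                || (!has_next_token && !is_stop_full && stop_pos > 0);

            assert(send); // true in every reachable state, so the branch is always taken
        }
    }
    return 0;
}
```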
I have attempted to simplify this logic in the new commit. The text is now withheld only when a partial stop-string match is found, and a partial match is searched for only when the current token is not the last one.
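For reference, here is a minimal standalone sketch of that simplified flow; the struct, helper, and names below are hypothetical stand-ins, not the code from the commit:

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical stand-in for the slot state in server.cpp.
struct SlotView {
    std::vector<std::string> stop_words;
    std::string generated_text;         // stop string already erased if it was fully matched
    size_t      n_sent_text    = 0;     // characters already streamed to the client
    bool        has_next_token = true;  // false on the last token of the generation
};

// Position where a *partial* stop-string match begins at the end of `text`
// (i.e. the text ends with a proper prefix of a stop word), or npos if none.
static size_t find_partial_stop(const std::string & text, const std::vector<std::string> & stop_words) {
    for (const std::string & word : stop_words) {
        const size_t max_len = std::min(word.size() > 0 ? word.size() - 1 : 0, text.size());
        for (size_t len = max_len; len > 0; --len) {
            if (text.compare(text.size() - len, len, word, 0, len) == 0) {
                return text.size() - len;
            }
        }
    }
    return std::string::npos;
}

// Simplified "send text" decision: hold text back only when a partial stop
// match is found, and only search for one while more tokens are expected.
static std::string text_to_send(SlotView & slot) {
    const size_t pos = std::min(slot.n_sent_text, slot.generated_text.size());
    const std::string str_test = slot.generated_text.substr(pos);

    bool send_text = true;
    if (slot.has_next_token) {
        send_text = find_partial_stop(str_test, slot.stop_words) == std::string::npos;
    }
    if (!send_text) {
        return "";                      // wait for the next token before deciding
    }
    slot.n_sent_text += str_test.size();
    return str_test;
}

int main() {
    SlotView slot;
    slot.stop_words     = {"\n\n"};
    slot.generated_text = "Sure?\n";    // ends with a possible start of the stop word

    std::cout << "mid-stream: [" << text_to_send(slot) << "]\n"; // [] - held back
    slot.has_next_token = false;        // last token: nothing can complete the stop word now
    std::cout << "last token: [" << text_to_send(slot) << "]\n"; // [Sure?\n] - flushed
    return 0;
}
```

The point is that text is only held back while a later token could still complete a stop string; on the last token, everything left is flushed.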
* server: fix the disappearance of the end of the text when streaming with stop strings
* simplify "send text" checks
The problem: the end of the text may not be pushed to the API client when streaming with stop strings enabled.

For example, let's test the following setup. The stop string will be `\n` (stopping at the end of a dialog line), and the next predicted character should be the question mark (this actually depends on the model and the probabilities, but let's assume the `seed` is chosen so that the next character is `?`). The bug depends heavily on the model's tokenizer; in my case it can be reproduced with the `Llama-3.2-3B-Instruct-Q8_0.gguf` model.

First, let's try it without streaming.
It gives the correct result: the completion ends with the `?` before the stop.

Now let's try it with streaming. The question mark disappears from the streamed output, and it is also not featured in the `stopping_word`, so it is completely lost and the API client won't be able to restore it.

It happens because the next token returned by the model contains both the question mark and the stop string: `?\n\n`. The current code skips sending the token completely if it contains the stop string.

The change in this PR sends the remainder of the token to the API client in this case. When `is_stop_full == true` it's safe to send the response, because the stop string and everything after it will already have been truncated from the `generated_text` at this point.

With this PR applied, the streamed response contains the trailing `?` as expected.
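To make the failure mode concrete, here is a small standalone sketch (illustrative values, not server code) of the situation described above: the final token `?\n\n` contains both the remainder `?` and the stop string `\n`; after the stop string is erased from the generated text, the unsent remainder is exactly the character that used to be dropped and that this PR now streams to the client.

```cpp
#include <cassert>
#include <iostream>
#include <string>

int main() {
    // Text already streamed to the client, and the last token from the model.
    // The prefix is a hypothetical placeholder; token and stop string mirror the example above.
    std::string generated_text   = "Sure, what do you want to know"; // hypothetical prefix
    const size_t n_sent_text     = generated_text.size();            // all of it was already sent
    const std::string token_str  = "?\n\n";                          // last token: remainder + stop string
    const std::string stop_word  = "\n";

    generated_text += token_str;

    // Look for the stop string in the part that has not been sent yet.
    const std::string str_test = generated_text.substr(n_sent_text);
    const size_t stop_pos = str_test.find(stop_word);
    assert(stop_pos != std::string::npos); // full stop match: is_stop_full would be true

    // The stop string and everything after it are erased from generated_text...
    generated_text.erase(n_sent_text + stop_pos);

    // ...so the unsent remainder is just "?". The old check skipped sending it;
    // with this PR it is streamed to the client before the generation stops.
    const std::string remainder = generated_text.substr(n_sent_text);
    std::cout << "remainder: [" << remainder << "]\n"; // prints: remainder: [?]
    return 0;
}
```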