
[Feature]: Conditional Prompt Inclusion in generate Function for Streaming Efficiency #8359

Closed
g-hano opened this issue Sep 11, 2024 · 2 comments
Labels: feature request (New feature or request), stale (Over 90 days of inactivity)

Comments


g-hano commented Sep 11, 2024

🚀 The feature, motivation and pitch

Title: Conditional Prompt Inclusion in generate Function for Streaming Efficiency

Feature Proposal:

This feature introduces a new parameter, is_return_prompt, to the generate function in vllm/entrypoints/api_server.py. The parameter allows users to conditionally include the prompt in the generated response, addressing inefficiencies observed in streaming scenarios.

Motivation and Pitch:

In the current implementation, the generate function always includes the prompt in its response, regardless of whether streaming is enabled. In streaming mode this is especially inefficient: the full prompt is re-sent with every token update, which is redundant and slows processing, since users typically do not need to see the prompt again after providing it to the LLM.
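
To make the redundancy concrete, here is a minimal client-side sketch. It assumes the demo server streams null-delimited JSON chunks with a "text" field, as the demo api_server does; the endpoint URL, field names, and framing are illustrative and may differ between versions.

```python
import json

import requests

prompt = "Explain the difference between a list and a tuple in Python."

# Illustrative request against the demo /generate endpoint (assumed to be
# running locally on port 8000).
response = requests.post(
    "http://localhost:8000/generate",
    json={"prompt": prompt, "stream": True, "max_tokens": 64},
    stream=True,
)

for chunk in response.iter_lines(delimiter=b"\0"):
    if not chunk:
        continue
    data = json.loads(chunk.decode("utf-8"))
    # Each streamed update currently begins with the full prompt text,
    # so the same prefix is re-transmitted on every token update.
    print(data["text"][0])
```

For long prompts, most of every chunk is this unchanged prefix rather than newly generated tokens.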

Proposal:

The proposed feature adds an is_return_prompt parameter to the generate function. When is_return_prompt is False (the default), the prompt is not included in the response; when it is set to True, the prompt is included as part of the output. This makes the streaming path more efficient and removes the redundancy; a sketch of the change follows the details below.

Details:

  • New Parameter: is_return_prompt (default: False)
  • Effect: When is_return_prompt is True, the prompt is included in the response. Otherwise, the prompt is omitted.
  • Use Case: Enhances performance in streaming scenarios by avoiding repeated prompt inclusion, which is particularly useful when processing large amounts of generated text.
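
Below is a minimal sketch of what the change could look like in the /generate handler, assuming the demo api_server's structure (a FastAPI app, an engine built from AsyncLLMEngine, and null-delimited JSON streaming). Only is_return_prompt is the proposed addition; the surrounding code and import paths are an approximation of the existing handler, not a definitive implementation.

```python
import json
from typing import AsyncGenerator

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse, Response, StreamingResponse

from vllm.sampling_params import SamplingParams
from vllm.utils import random_uuid

app = FastAPI()
# `engine` (an AsyncLLMEngine) is assumed to be initialized elsewhere,
# as in the demo api_server.


@app.post("/generate")
async def generate(request: Request) -> Response:
    request_dict = await request.json()
    prompt = request_dict.pop("prompt")
    stream = request_dict.pop("stream", False)
    # Proposed parameter: echo the prompt back only when explicitly requested.
    is_return_prompt = request_dict.pop("is_return_prompt", False)
    sampling_params = SamplingParams(**request_dict)
    request_id = random_uuid()
    results_generator = engine.generate(prompt, sampling_params, request_id)

    async def stream_results() -> AsyncGenerator[bytes, None]:
        async for request_output in results_generator:
            prefix = request_output.prompt if is_return_prompt else ""
            text_outputs = [prefix + out.text for out in request_output.outputs]
            yield (json.dumps({"text": text_outputs}) + "\0").encode("utf-8")

    if stream:
        return StreamingResponse(stream_results())

    # Non-streaming path: drain the generator and apply the same rule once.
    final_output = None
    async for request_output in results_generator:
        final_output = request_output
    prefix = final_output.prompt if is_return_prompt else ""
    return JSONResponse(
        {"text": [prefix + out.text for out in final_output.outputs]}
    )
```

A client that still wants the prompt echoed back would simply add "is_return_prompt": true to the request body.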

Alternatives

No response

Additional context

This feature is most relevant for streaming responses, where including the prompt with each token update can hinder performance. The new parameter gives callers greater control over the response format and keeps payloads lean in scenarios involving continuous or large-scale text generation.

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

github-actions bot added the stale (Over 90 days of inactivity) label on Dec 11, 2024

This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!

github-actions bot closed this as not planned (Won't fix, can't repro, duplicate, stale) on Jan 11, 2025