[Bugfix] chat method add_generation_prompt param (vllm-project#7734)
brian14708 authored Aug 21, 2024
1 parent 9b73a2f commit d3c002e
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions vllm/entrypoints/llm.py
@@ -353,7 +353,7 @@ def chat(
         use_tqdm: bool = True,
         lora_request: Optional[LoRARequest] = None,
         chat_template: Optional[str] = None,
-        add_generation_template: bool = True,
+        add_generation_prompt: bool = True,
     ) -> List[RequestOutput]:
         """
         Generates responses for chat messages.
@@ -374,7 +374,7 @@
             lora_request: LoRA request to use for generation, if any.
             chat_template: The template to use for structuring the chat.
                 If not provided, the model's default chat template will be used.
-            add_generation_template: If True, adds a generation template
+            add_generation_prompt: If True, adds a generation template
                 to each message.
 
         Returns:
@@ -392,7 +392,7 @@
             tokenizer,
             conversations,
             chat_template=chat_template,
-            add_generation_template=add_generation_template)
+            add_generation_prompt=add_generation_prompt)
 
         return self.generate(
             prompts,
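
For context, a minimal sketch of calling LLM.chat() with the corrected keyword follows. The model name, messages, and output handling are illustrative assumptions for this sketch, not part of the commit; only the add_generation_prompt parameter and the List[RequestOutput] return type come from the diff above.

# Minimal usage sketch (assumes vLLM is installed and a chat-tuned model
# is available; the model name and messages below are placeholders).
from vllm import LLM

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# Before this commit the parameter was named add_generation_template, so
# passing the documented add_generation_prompt keyword raised a TypeError.
# With the fix, True asks the chat template to append the generation
# prompt before sampling, matching the HF apply_chat_template convention.
outputs = llm.chat(messages, add_generation_prompt=True)

# chat() returns List[RequestOutput]; print the first completion's text.
print(outputs[0].outputs[0].text)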
