🐛 Describe the bug

As of v0.6.5, vLLM uses xgrammar as the default backend for structured output. However, not all ways of expressing output requirements are supported yet. This issue tracks the known cases that need to be resolved before xgrammar can be made the default in all cases.

Fallback cases can be found here:

vllm/vllm/model_executor/guided_decoding/__init__.py, lines 40 to 76 (at d06e824)

Known unsupported cases:

- minItems and maxItems constraints (mlc-ai/xgrammar#160)
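To make the fallback concrete, here is a minimal sketch of the idea behind it. The `pick_backend` function and the schema below are hypothetical illustrations, not the actual dispatch code in guided_decoding/__init__.py: a JSON schema is scanned for features xgrammar cannot yet compile (such as minItems/maxItems), and the request is routed to outlines when one is found.

```python
# Illustrative sketch only: feature-based fallback from xgrammar to outlines.
# pick_backend is hypothetical; see
# vllm/model_executor/guided_decoding/__init__.py for the real dispatch.
from typing import Any

# JSON Schema keywords xgrammar could not compile at the time of this issue
# (see mlc-ai/xgrammar#160 for minItems/maxItems).
UNSUPPORTED_KEYS = {"minItems", "maxItems"}

def pick_backend(schema: dict[str, Any]) -> str:
    """Return "outlines" if the schema uses a feature xgrammar lacks,
    otherwise "xgrammar". Walks the schema recursively."""
    if UNSUPPORTED_KEYS & schema.keys():
        return "outlines"
    for value in schema.values():
        # Descend into nested schemas, including lists like anyOf/allOf.
        children = value if isinstance(value, list) else [value]
        for child in children:
            if isinstance(child, dict) and pick_backend(child) == "outlines":
                return "outlines"
    return "xgrammar"

schema = {
    "type": "object",
    "properties": {
        "tags": {
            "type": "array",
            "items": {"type": "string"},
            "minItems": 1,  # forces the fallback
            "maxItems": 5,
        }
    },
}

print(pick_backend(schema))  # -> outlines
```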
Hi @russellb, thanks for raising the issue. XGrammar has a project underway to improve the quality of its JSON schema converter and plans to support most of these features. We will track this issue and address it accordingly.
russellb added a commit to russellb/vllm that referenced this issue on Jan 31, 2025:
This commit adds support for using xgrammar with a set of choices.
This can be converted to an EBNF grammar pretty easily, which xgrammar
can work from. This drops a case where we were falling back to outlines.
Part of issue vllm-project#12131
Signed-off-by: Russell Bryant <rbryant@redhat.com>
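The conversion the commit message describes can be illustrated with a short sketch. This is not the code from the commit; the function name is illustrative and the escaping is simplified (json.dumps is used for quoting, which approximates but is not identical to EBNF string-literal escaping):

```python
import json

def choices_to_ebnf(choices: list[str]) -> str:
    """Build an EBNF grammar whose root rule matches exactly one of the
    given literal strings, suitable for handing to xgrammar."""
    # json.dumps wraps each choice in double quotes and escapes
    # backslashes and quotes, approximating EBNF string-literal syntax.
    alternatives = " | ".join(json.dumps(choice) for choice in choices)
    return f"root ::= {alternatives}"

# A guided choice over three options becomes:
#   root ::= "red" | "green" | "blue"
print(choices_to_ebnf(["red", "green", "blue"]))
```

Expressing a set of choices as a grammar this way removes one of the cases where vLLM previously had to fall back to outlines.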