
Debugging model output in llama-server #11389

Answered by ggerganov
Mushoz asked this question in Q&A

Add -lv 1.

Most likely you are using a small context size (the default is -c 4096). Try increasing it.
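Putting both suggestions together, a launch command might look like the sketch below. The model path and context size are placeholders; only the -lv (log verbosity) and -c (context size) flags come from the answer above.

```shell
# Hypothetical invocation: -lv 1 enables verbose logging so you can
# inspect how the prompt is processed; -c raises the context size
# above the 4096-token default. Adjust the model path to your setup.
llama-server -m ./models/your-model.gguf -c 8192 -lv 1
```

With verbose logging on, the server prints per-request details (prompt tokens, sampling, timings) that make truncation from a too-small context easier to spot.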

Answer selected by Mushoz