Generate: Update VisionEncoderDecoder test value #27850
Merged
+1
−1
What does this PR do?
#27351 fixed a bug in beam search: the prompt length was being included in the length penalty computation, although the penalty should apply only to newly generated tokens. Otherwise, decoder-only models, whose prompt is part of the output, would often incur a large penalty.
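The change described above can be sketched as follows. This is a minimal illustration, not the actual transformers implementation; the function name and signature are hypothetical, but the scoring formula mirrors how beam search normalizes a hypothesis score by a power of its length:

```python
def beam_score(sum_logprobs: float, seq_len: int, prompt_len: int,
               length_penalty: float = 1.0) -> float:
    """Hypothetical sketch of a beam hypothesis score.

    After the fix, only the newly generated tokens count toward the
    length penalty; the prompt tokens echoed in decoder-only outputs
    are excluded.
    """
    generated_len = seq_len - prompt_len  # was: seq_len (prompt included)
    return sum_logprobs / (generated_len ** length_penalty)


# A 12-token output with a 10-token prompt is penalized by the
# 2 generated tokens only, not by all 12 tokens.
print(beam_score(-10.0, seq_len=12, prompt_len=10))  # → -5.0
```

With the old behavior (dividing by the full sequence length of 12), the same hypothesis would have scored about -0.83, so long prompts artificially inflated scores and skewed beam ranking.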
This PR updates the expected test values to account for the bug fix. I've double-checked that reverting these changes reproduces the old results.
(All tests in `RUN_SLOW=1 py.test tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py -vv` pass.)