Finetune lora max_seq_length error #1461
Comments
Thanks for sharing. Yeah, this shouldn't happen, and the max sequence length calculation should happen on both the training and validation data, not just the training data. Will have to look into this and update. In the meantime, you could rerun the training with train.max_seq_length set manually.
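For reference, a minimal sketch of that idea, computing the longest sequence over both splits instead of the training split alone (the helper name and the toy data are illustrative, not the exact code in lora.py):

```python
from typing import Dict, List, Tuple

def get_longest_seq_length(data: List[Dict]) -> Tuple[int, int]:
    # Length of each tokenized sample; return the longest and its index.
    lengths = [len(d["input_ids"]) for d in data]
    longest = max(lengths)
    return longest, lengths.index(longest)

# Toy stand-ins for the tokenized splits (466 mirrors the longest train
# sample and 473 the longest validation sample reported in this issue).
train_data = [{"input_ids": [0] * n} for n in (120, 466, 310)]
val_data = [{"input_ids": [0] * n} for n in (200, 473)]

# Consider BOTH splits so a longer validation sample still fits the model.
longest_seq_length = max(
    get_longest_seq_length(train_data)[0],
    get_longest_seq_length(val_data)[0],
)
print(longest_seq_length)  # 473
```

With something along these lines, the model's max sequence length would cover the longest sample in either split (still clamped by the context length), so the final validation pass cannot exceed it.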
Thanks! Actually, I think that train.max_seq_length is not enough; the problem comes from litgpt/litgpt/finetune/lora.py, line 247 (commit 0f3bca7).
So I just changed that in my case.
Thanks, fixing it in #1462.
Should be fixed now. |
I am getting an error when running litgpt finetune_lora.
At the beginning of training, max_seq_length is set to 466 because that is the longest sequence in my training set:
"The longest sequence length in the train data is 466, the model's maximum sequence length is 466 and context length is 2048"
However, when training is finished and the final validation is performed in litgpt/litgpt/finetune/lora.py (line 214, commit 0f3bca7), I get:
"Cannot forward sequence of length 473, max seq length is only 466"
There is at least one sample in the validation set that is longer than the longest one in the training set. Does anyone know how to fix this?
This is the traceback I get
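For anyone hitting the same thing before the fix lands, a quick way to confirm which split contains the longest sample is to measure both files directly; a rough sketch, where the JSON layout and the whitespace split standing in for a tokenizer are assumptions, not litgpt specifics:

```python
import json

def longest_len(path: str, encode) -> int:
    # Longest "tokenized" length across all records in a JSON list file.
    with open(path) as f:
        records = json.load(f)
    return max(len(encode(" ".join(str(v) for v in r.values()))) for r in records)

# str.split is a crude whitespace stand-in for the real tokenizer, used only
# to keep this sketch self-contained; swap in your tokenizer's encode().
encode = str.split
print("train:", longest_len("train.json", encode))
print("val:  ", longest_len("val.json", encode))
```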