
finetune on 4096 context length #16

Open

MohamedAliRashad opened this issue Jun 6, 2023 · 2 comments

Comments

@MohamedAliRashad

How can I finetune Falcon-7B-Instruct on inputs or outputs with a 4096-token context length?
How much VRAM will I need?

@yuhai-china

Does it work to set --cutoff_len=4096?

@richardburleigh

richardburleigh commented Jun 25, 2023

Yes, I'm currently fine-tuning the 7B instruct model with 4096 and it's working fine.

Although I'm assuming there's no truncation elsewhere in the code.
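A minimal sketch of how a `--cutoff_len` flag is typically applied in fine-tuning scripts of this kind: tokenized examples are truncated to `cutoff_len` tokens before training. The `fake_tokenize` helper below is a hypothetical stand-in for the model's real tokenizer (which would require downloading the Falcon checkpoint), so this only illustrates the truncation behavior being discussed, not the repo's exact implementation.

```python
CUTOFF_LEN = 4096  # value that would be passed via --cutoff_len=4096

def fake_tokenize(text):
    # Hypothetical stand-in tokenizer: one "token" per
    # whitespace-separated word, instead of the real Falcon tokenizer.
    return text.split()

def tokenize_with_cutoff(text, cutoff_len=CUTOFF_LEN):
    tokens = fake_tokenize(text)
    # Truncate to the configured context length. Anything beyond
    # cutoff_len is silently dropped, which is why truncation
    # happening elsewhere in the pipeline would be easy to miss.
    return tokens[:cutoff_len]

long_example = "tok " * 5000  # 5000 pseudo-tokens, longer than the cutoff
truncated = tokenize_with_cutoff(long_example)
print(len(truncated))  # 4096
```

If the rest of the pipeline also caps sequence length (e.g. a tokenizer `max_length` or a model config limit), the smaller of the two limits wins, which is the concern raised above about truncation elsewhere in the code.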


3 participants