[Feature Request] Add a token cutoff #94
Comments
Hi, there's already a hardcoded token cutoff of 1500 tokens (to fit in the context window of some models). But making this configurable makes a ton of sense :) Should be easy to implement and include in the next release. Thanks for the suggestion!
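For illustration only, here is a minimal sketch of what a configurable cutoff could look like. The env variable name `MAX_CONTENT_TOKENS` and the `truncateContent` helper are hypothetical, not the project's actual code or configuration:

```typescript
// Hypothetical sketch: read the cutoff from an env variable, falling back
// to the current hardcoded 1500-token default. Names are illustrative only.
const DEFAULT_MAX_TOKENS = 1500;

function getMaxContentTokens(): number {
  const raw = process.env.MAX_CONTENT_TOKENS; // hypothetical variable name
  const parsed = raw ? Number.parseInt(raw, 10) : NaN;
  return Number.isFinite(parsed) && parsed > 0 ? parsed : DEFAULT_MAX_TOKENS;
}

// Keep only the first N tokens of the page content before building the prompt.
function truncateContent(tokens: string[], maxTokens = getMaxContentTokens()): string[] {
  return tokens.slice(0, maxTokens);
}
```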
Oh wow, that's a lot of tokens. Hmmm, I have a guess. I'm currently approximating
I don't wanna share the actual link because it's very explicit, but it was a chapter of a fanfiction from AO3, so very heavy on words.
Turns out I had a bug in the content truncation logic. Sending a fix now.
Looking at the commit, was the truncate function sending only the words after the 1500th one?
Yeah, noob mistake :)
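For anyone skimming the thread, a minimal sketch of the kind of slicing bug described above (the function names are made up for illustration; this is not the actual commit):

```typescript
const MAX_TOKENS = 1500;

// Buggy version: keeps everything AFTER the first 1500 words,
// so long pages were sent with their beginning cut off.
function truncateBuggy(words: string[]): string[] {
  return words.slice(MAX_TOKENS);
}

// Fixed version: keeps only the FIRST 1500 words.
function truncateFixed(words: string[]): string[] {
  return words.slice(0, MAX_TOKENS);
}
```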
Don't worry, it happens to everyone :)
The bug only affects bookmarks with content larger than 1500 tokens, though, so it's not all bad. I'll mention it in the release notes and let people decide whether they want to re-process or not (given that re-processing can be expensive for those using OpenAI, for example).
Seems like the best solution. Thanks for your time.
Hotfix
Hello. Sometimes the pages I save contain essays or stories that span 20k+ tokens. This not only uses much more credits/money per request, but it also takes a lot of time (from 4s to >50s), and the prompt gets lost along the way.
Would it be possible to add an env variable so we can set the max number of tokens to send in a single request?
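As a rough illustration of why long pages blow past the limit: a common rule of thumb (an assumption here, not necessarily the tokenizer this project uses) is about four characters per token for English text, so a 20k-token chapter is on the order of 80k characters.

```typescript
// Rough rule of thumb (assumption, not the project's actual tokenizer):
// roughly 4 characters per token for English text.
function approxTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}

// Example: an 80,000-character fanfiction chapter is roughly 20,000 tokens,
// far above a 1,500-token cutoff.
```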