
Cache docs: update #32929

Merged: 8 commits into huggingface:main on Sep 4, 2024
Conversation

@zucchini-nlp (Member) commented Aug 22, 2024:

What does this PR do?

Updates the docs with feedback from the community, makes some points clearer, and fixes the docstring. I will run the doctest locally and see whether we can trigger it via CI here.

Fixes #32919

@zucchini-nlp requested a review from @gante on August 22, 2024 04:34.
@HuggingFaceDocBuilderDev commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> INITIAL_PROMPT = "You are a helpful assistant. "
>>> prompt_cache = DynamicCache()
Collaborator:
I would rather promote the use of Static here, but both are fine!

Member Author:
Oh, actually I recently found this doesn't work: copy.deepcopy fails even after making the cache a torch Module. @gante confirmed that it's expected to fail, so I'll remove this section.

Member:
yeah we need to fix the copying issue (and add a test!)

Member Author:
Done. The reason was that the model forward pass must be run without grad; otherwise the key/values are non-leaf tensors and the copy fails. Fixed the example and verified it runs.
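
For reference, a minimal sketch of the pattern the fixed example follows: prefill the shared prompt under torch.no_grad(), then deepcopy the cache for each continuation. The model_id, prompts, and generation arguments below are illustrative placeholders rather than the exact values from the docs, and device_map="auto" assumes accelerate is installed:

import copy

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

model_id = "meta-llama/Llama-2-7b-chat-hf"  # placeholder; any causal LM works
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

INITIAL_PROMPT = "You are a helpful assistant. "
inputs = tokenizer(INITIAL_PROMPT, return_tensors="pt").to(model.device)

# Prefill without gradient tracking so the cached key/value tensors are
# leaf tensors; with grad enabled, copy.deepcopy on the cache fails.
with torch.no_grad():
    prompt_cache = model(**inputs, past_key_values=DynamicCache()).past_key_values

# Reuse the cached prompt for several independent continuations.
for user_prompt in ["Help me write a blog post.", "Name the capital of France."]:
    new_inputs = tokenizer(INITIAL_PROMPT + user_prompt, return_tensors="pt").to(model.device)
    past_key_values = copy.deepcopy(prompt_cache)  # fresh copy per run
    out = model.generate(**new_inputs, past_key_values=past_key_values, max_new_tokens=20)
    print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])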

Member Author:
Let me know if you have any comments; otherwise I will merge.

@gante (Member) commented Sep 4, 2024:
On cache reuse (copying a cache object): #33297

The problem is a bit more complex than no_grad on some cache classes :P

Member Author:
Mmm, does that mean the current deepcopy doesn't copy all tensors from the list when we use dynamic cache?

Member:
DynamicCache should be okay with no_grad at the moment 🤗 Other caches, however, have objects that can't be copied (e.g. in the offloaded caches, the CUDA stream can't be copied).

I'm writing a PR that lifts the no_grad requirement and handles the other corner cases. And, more importantly, adds tests!
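
A minimal standalone illustration of the no_grad connection (plain PyTorch, nothing cache-specific): tensors produced inside an autograd graph are non-leaf and refuse deepcopy, which is exactly what the cached key/value tensors look like when the prefill runs with grad enabled:

import copy

import torch

w = torch.randn(4, 4, requires_grad=True)
x = torch.randn(4, 4)

non_leaf = w @ x  # produced inside the autograd graph, has a grad_fn
try:
    copy.deepcopy(non_leaf)
except RuntimeError as err:
    # PyTorch only supports the deepcopy protocol for graph leaves
    print(f"deepcopy failed: {err}")

with torch.no_grad():
    leaf = w @ x  # no grad_fn: a plain leaf tensor
copied = copy.deepcopy(leaf)  # works
print(torch.equal(copied, leaf))  # True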

@gante (Member) left a comment:
LGTM, thank you for addressing these issues 💪

@zucchini-nlp (Member Author):
Added a slow test for cache copying
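
Roughly, such a test can be sketched like this (a hedged sketch with illustrative names, not the actual test added in this PR; the tiny checkpoint is just an example):

import copy

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

def test_dynamic_cache_deepcopy():
    model_id = "hf-internal-testing/tiny-random-LlamaForCausalLM"  # example checkpoint
    model = AutoModelForCausalLM.from_pretrained(model_id)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    inputs = tokenizer("You are a helpful assistant. ", return_tensors="pt")

    # Prefill under no_grad so the cached key/values are leaf tensors.
    with torch.no_grad():
        cache = model(**inputs, past_key_values=DynamicCache()).past_key_values

    copied = copy.deepcopy(cache)
    # The copy must hold independent but equal tensors for every layer.
    for src, dst in zip(cache.key_cache, copied.key_cache):
        assert src is not dst
        torch.testing.assert_close(src, dst)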

@zucchini-nlp merged commit ebbe8d8 into huggingface:main on Sep 4, 2024.
10 checks passed
BernardZach pushed a commit to BernardZach/transformers that referenced this pull request Dec 5, 2024
* some changes

* more updates

* fix cache copy

* nits

* nits

* add tests
Successfully merging this pull request may close these issues:

Documentation of SinkCache has bug in example code (#32919)