
[Backend support] Allow num_logits_to_keep as Tensor and change it to logits_to_keep + add flag #35757

Merged: 7 commits merged into main from tgi-support on Jan 23, 2025

Conversation

Cyrilvallez (Member)

What does this PR do?

As per the title. Allowing `num_logits_to_keep` to be a Tensor allows efficient slicing when using the packed tensor format. It will also be useful for us in the future as we integrate the packed format for the FA2 path.
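For illustration, a minimal sketch of the two accepted forms after this change (the checkpoint name and index values below are arbitrary examples, not taken from the PR):

```python
# Minimal sketch: `logits_to_keep` as an int vs. as a Tensor.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
inputs = tokenizer("two packed sequences in one row", return_tensors="pt")

# As an int: keep logits for only the last N positions (0 keeps all).
out = model(**inputs, logits_to_keep=1)
print(out.logits.shape)  # (batch, 1, vocab_size)

# As a Tensor: keep logits at arbitrary positions, e.g. the final token of
# each sub-sequence in a packed batch (indices here are hypothetical).
keep = torch.tensor([2, 6])
out = model(**inputs, logits_to_keep=keep)
print(out.logits.shape)  # (batch, 2, vocab_size)
```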

@ArthurZucker (Collaborator) left a comment:

Sounds good!
Let's maybe allow for a full tensor?

@ArthurZucker (Collaborator) left a comment:

Thanks for making sure it's compile-compatible!

@Cyrilvallez merged commit d3af76d into main on Jan 23, 2025 · 26 checks passed
@Cyrilvallez deleted the tgi-support branch on January 23, 2025 at 08:47
@Cyrilvallez changed the title from "[Backend support] Allow num_logits_to_keep as Tensor + add flag" to "[Backend support] Allow num_logits_to_keep as Tensor and change it to logits_to_keep + add flag" on Jan 23, 2025
bursteratom pushed a commit to bursteratom/transformers that referenced this pull request on Jan 31, 2025

[Backend support] Allow num_logits_to_keep as Tensor and change it to logits_to_keep + add flag (huggingface#35757)

* support

* Update modeling_utils.py

* style

* most models

* Other models

* fix-copies

* tests + generation utils

dsikka added a commit to vllm-project/llm-compressor that referenced this pull request on Feb 11, 2025
## Purpose ##
* SparseGPT
  * Fix behavior where `targets` specifies which modules to sparsify, not which layers to target (see the recipe sketch after this list)
  * Fix broken behavior with `_infer_owl_layer_sparsity` and add a test
  * Fix OWL argument validation
  * Add type hints and abstract methods for clarity
* Pipelines
  * Fix bug revealed by decorators added to the llama model definition in the latest transformers release
    * huggingface/transformers#35757
    * For the sequential pipeline, this revealed a bug in torch.fx._symbolic_trace where wrapped functions were not handled properly
    * Future work could involve upstreaming a bug fix
  * Fix issue caused by changes to the llama model definition
    * huggingface/transformers#34858
    * For the layer sequential pipeline, this challenges the assumption that each layer's input is the previous layer's output (which was known to be a fragile assumption)
  * Fix issue related to basic pipeline slowdowns and inaccuracy
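
As an illustration of the intended `targets` / `sequential_targets` separation, a hedged recipe sketch (field semantics follow this PR's description; exact constructor arguments may differ):

```python
# Hedged sketch: `targets` selects which modules get sparsified, while
# `sequential_targets` selects which layers are calibrated one at a time.
from llmcompressor.modifiers.obcq import SparseGPTModifier

modifier = SparseGPTModifier(
    sparsity=0.5,
    targets=["Linear"],                        # modules to sparsify
    sequential_targets=["LlamaDecoderLayer"],  # layers traversed sequentially
)
```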

## Changes ##
* SparseGPT
  * Fully separate `targets` and `sequential_targets`
    * Modify hook-adding logic to reflect this change
  * Fix behavior of `_infer_owl_layer_sparsity` and add a test
  * Code clarity
    * Add additional type hints
    * Designate `calibrate_module` as an abstract method on the sgpt mixin
* Pipelines
  * Sequential pipeline: unwrap the model forward function to avoid issues with pytorch function patching (a minimal sketch follows this list)
  * Layer sequential pipeline: add `maybe_inject_pos_embeddings` to hackily support models with `position_embeddings`
  * Basic pipeline: fix `on_sequential_batch_end` to be called at the end of an epoch, rather than after every batch
    * Calling it every batch was likely causing slowdowns
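
A minimal sketch of the forward-unwrapping idea, assuming the patching decorator used `functools.wraps` (which records the original function under `__wrapped__`); the helper name is hypothetical:

```python
# Hypothetical helper: recover the original forward from a decorated one
# before torch.fx tracing.
import inspect

def unwrap_forward(model):
    # inspect.unwrap follows the __wrapped__ chain back to the original,
    # undecorated function.
    unwrapped = inspect.unwrap(model.forward.__func__)
    # Re-bind it to the instance so later calls skip the decorator.
    model.forward = unwrapped.__get__(model)
    return model
```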

## Followups ##
* Remove deprecated `sequential_update` option from examples and tests

## Testing ##
* Added `tests/llmcompressor/transformers/obcq/test_obcq_owl.py`
* Tested OBCQ+llama with sequential, layer sequential, and basic
pipelines independently

## Regression Evaluations ##
Models were compressed using `examples/sparse_2of4_quantization_fp8/llama3_8b_2of4.py` without the fp8 option.

<details><summary>sparsegpt</summary>

Main
```
vllm (pretrained=/home/kyle/llm-compressor/Meta-Llama-3-8B-InstructSparseGPTModifierMAIN,dtype=bfloat16,add_bos_token=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: 1
|  Tasks   |Version|Filter|n-shot|Metric|   |Value |   |Stderr|
|----------|------:|------|-----:|------|---|-----:|---|-----:|
|winogrande|      1|none  |     5|acc   |↑  |0.6243|±  |0.0136|
```

This branch

```
vllm (pretrained=/home/kyle/llm-compressor/Meta-Llama-3-8B-InstructSparseGPTModifierFEATURE,dtype=bfloat16,add_bos_token=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: 1
|  Tasks   |Version|Filter|n-shot|Metric|   |Value |   |Stderr|
|----------|------:|------|-----:|------|---|-----:|---|-----:|
|winogrande|      1|none  |     5|acc   |↑  |0.6306|±  |0.0136|
```
</details>

To test Wanda, the `SparseGPTModifier` was replaced with the `WandaPruningModifier`.

<details><summary>wanda</summary>

Main
```
vllm (pretrained=/home/kyle/llm-compressor/Meta-Llama-3-8B-InstructWandaPruningModifierMAIN,dtype=bfloat16,add_bos_token=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: 1
|  Tasks   |Version|Filter|n-shot|Metric|   |Value |   |Stderr|
|----------|------:|------|-----:|------|---|-----:|---|-----:|
|winogrande|      1|none  |     5|acc   |↑  |0.5912|±  |0.0138|
```

This branch
```
vllm (pretrained=/home/kyle/llm-compressor/Meta-Llama-3-8B-InstructWandaPruningModifierFEATURE,dtype=bfloat16,add_bos_token=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: 1
|  Tasks   |Version|Filter|n-shot|Metric|   |Value |   |Stderr|
|----------|------:|------|-----:|------|---|-----:|---|-----:|
|winogrande|      1|none  |     5|acc   |↑  |0.5817|±  |0.0139|
```
</details>

---------

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>

elvircrn pushed a commit to elvircrn/transformers that referenced this pull request on Feb 13, 2025

[Backend support] Allow num_logits_to_keep as Tensor and change it to logits_to_keep + add flag (huggingface#35757)

* support

* Update modeling_utils.py

* style

* most models

* Other models

* fix-copies

* tests + generation utils