
support fp8 t5 encoder in examples #366

Merged
merged 3 commits from support_fp8_t5_encoder into main on Nov 28, 2024

Conversation

@Lay2000 (Collaborator) commented Nov 28, 2024:

The examples now support FP8 quantization of the T5 text encoder, which can reduce GPU memory usage without affecting result quality.

@feifeibear (Collaborator) left a comment:


Perfect!

@@ -19,10 +20,18 @@ def main():
engine_args = xFuserArgs.from_cli_args(args)
engine_config, input_config = engine_args.create_config()
local_rank = get_world_group().local_rank
text_encoder = T5EncoderModel.from_pretrained(engine_config.model_config.model, subfolder="text_encoder", torch_dtype=torch.float16)
if args.use_fp8_t5_encoder:
from optimum.quanto import freeze, qfloat8, quantize
Add optimum as a dependency in setup.py.

@feifeibear feifeibear merged commit 403f4e5 into main Nov 28, 2024
4 checks passed
@feifeibear feifeibear deleted the support_fp8_t5_encoder branch November 28, 2024 08:24
2 participants