
[lmi][lcnc] fallback to accelerate backend when non text-generation model is provided #1667

Merged
siddvenk merged 1 commit into deepjavalibrary:master from the auto-engine branch on Mar 26, 2024

Conversation

siddvenk (Contributor)

Description

Fall back to HF Accelerate for non text-generation model architectures.

From my limited testing, non text-generation architectures are not supported for rolling batch. For these models, we fall back to HF Accelerate with dynamic batching.

siddvenk requested review from zachgk, frankfliu, and a team as code owners on March 25, 2024 22:58
@@ -46,6 +51,9 @@ public final class LmiConfigRecommender {
                    Map.entry("qwen2", "vllm"),
                    Map.entry("stablelm", "vllm"));

    private static final Set<String> OPTIMIZED_TASK_ARCHITECTURES =
            Set.of("ForCausalLM", "LMHeadModel", "ForConditionalGeneration");
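For readers skimming the diff, here is a minimal sketch of how a suffix set like this could drive the fallback decision. The class and method names below are illustrative assumptions, not the PR's actual code; the idea is simply that a model whose config.json architectures (e.g. "LlamaForCausalLM") end with one of the optimized suffixes can keep a rolling-batch backend, while anything else falls back to HF Accelerate with dynamic batching.

    import java.util.List;
    import java.util.Set;

    // Illustrative sketch only; names are assumptions, not the PR's implementation.
    final class RollingBatchFallbackSketch {
        private static final Set<String> OPTIMIZED_TASK_ARCHITECTURES =
                Set.of("ForCausalLM", "LMHeadModel", "ForConditionalGeneration");

        // True if any architecture reported by the model (e.g. "LlamaForCausalLM")
        // ends with one of the optimized text-generation suffixes.
        static boolean isTextGenerationModel(List<String> architectures) {
            return architectures.stream()
                    .anyMatch(arch ->
                            OPTIMIZED_TASK_ARCHITECTURES.stream().anyMatch(arch::endsWith));
        }

        // Non text-generation models disable rolling batch and fall back to
        // HF Accelerate with dynamic batching.
        static String recommendRollingBatch(List<String> architectures) {
            return isTextGenerationModel(architectures) ? "auto" : "disable";
        }
    }

Under these assumptions, recommendRollingBatch(List.of("BertForSequenceClassification")) would return "disable", while List.of("LlamaForCausalLM") would keep rolling batch enabled.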

siddvenk (Contributor, Author) commented on Mar 26, 2024:

ugh, that's good to know. didn't know about this type of config

siddvenk (Contributor, Author):

I've added auto_map to fix this use case.
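For context on auto_map: models that ship custom modeling code can declare their classes in config.json under "auto_map" (e.g. mapping "AutoModelForCausalLM" to "modeling_foo.FooForCausalLM") instead of, or in addition to, the standard "architectures" list. Below is a minimal sketch of how such an entry could be folded into the same suffix check; the class and method names are assumptions, not the PR's actual code.

    import java.util.Map;
    import java.util.Set;

    // Illustrative sketch only; names are assumptions, not the PR's implementation.
    final class AutoMapCheckSketch {
        private static final Set<String> OPTIMIZED_TASK_ARCHITECTURES =
                Set.of("ForCausalLM", "LMHeadModel", "ForConditionalGeneration");

        // auto_map values look like "modeling_foo.FooForCausalLM", so the same
        // suffix match works on the mapped class names.
        static boolean isTextGeneration(Map<String, String> autoMap) {
            return autoMap != null
                    && autoMap.values().stream()
                            .anyMatch(cls ->
                                    OPTIMIZED_TASK_ARCHITECTURES.stream().anyMatch(cls::endsWith));
        }
    }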

siddvenk merged commit 761664e into deepjavalibrary:master on Mar 26, 2024. 7 checks passed.
siddvenk deleted the auto-engine branch on March 26, 2024 21:09.