Patch ORTTrainer inference with ONNX Runtime backend #737

Merged: JingyaHuang merged 9 commits into main from fix-ortseq2seqtrainer-infer on Feb 2, 2023

Conversation

@JingyaHuang (Contributor) commented on Feb 1, 2023

What does this PR do?

  • Enable loss as an output of ORTModelForCausalLM (see the first sketch below)
  • Enable using the cache for the causal LM task when running inference in ORTTrainer
  • Fix the ORT inference of ORTSeq2SeqTrainer when predict_with_generate=True (see the second sketch below)
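A minimal sketch of what the first two bullets enable, assuming the optimum API of early 2023 (the gpt2 checkpoint and the flags used are illustrative; later optimum versions renamed from_transformers to export):

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM

# Export a causal LM to ONNX on the fly; use_cache=True keeps the past
# key/values inputs that the cached-inference path in ORTTrainer relies on.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = ORTModelForCausalLM.from_pretrained(
    "gpt2", from_transformers=True, use_cache=True
)

inputs = tokenizer("ONNX Runtime is", return_tensors="pt")
# After this patch, passing labels yields a loss in the model output,
# mirroring the transformers API.
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)
```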

The PR has been tested locally; a follow-up PR will be opened to improve the current nightly test.

The caveat described in #719 still exists, but I won't address it here given limited bandwidth and little demand from the community.
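And a hedged sketch of the evaluation path the last bullet fixes: running ORTSeq2SeqTrainer with predict_with_generate=True and ONNX Runtime inference. The t5-small checkpoint and the dataset variables are placeholders, and the exact constructor arguments may differ across optimum versions:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from optimum.onnxruntime import ORTSeq2SeqTrainer, ORTSeq2SeqTrainingArguments

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")

args = ORTSeq2SeqTrainingArguments(
    output_dir="out",
    predict_with_generate=True,  # the generation path this PR fixes
)
trainer = ORTSeq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # placeholder: assumed defined elsewhere
    eval_dataset=eval_dataset,    # placeholder: assumed defined elsewhere
    tokenizer=tokenizer,
    feature="seq2seq-lm",  # feature name for ONNX export in optimum of this era
)
trainer.train()
# Run evaluation with the ONNX Runtime backend for inference.
metrics = trainer.evaluate(inference_with_ort=True)
```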

@HuggingFaceDocBuilderDev commented on Feb 1, 2023

The documentation is not available anymore as the PR was closed or merged.

@fxmarty (Contributor) left a comment


LGTM, thank you!

@JingyaHuang merged commit 334d3cf into main on Feb 2, 2023
@JingyaHuang deleted the fix-ortseq2seqtrainer-infer branch on February 2, 2023 at 23:07
@JingyaHuang mentioned this pull request on Feb 15, 2023