
Support longformer in ORTModel #479

Open
fxmarty opened this issue Nov 17, 2022 · 1 comment
Labels
onnxruntime Related to ONNX Runtime

Comments

@fxmarty
Contributor

fxmarty commented Nov 17, 2022

Feature request

Longformer takes global_attention_mask as an input in the current transformers ONNX export. Hence, it is currently not supported by ORTModel.
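For context, the extra input is a per-token 0/1 flag marking which positions get global attention. A minimal sketch of building it (the helper name and shapes are illustrative, not part of the export API):

```python
# Sketch: building the global_attention_mask that the transformers
# ONNX export of longformer expects alongside input_ids and
# attention_mask. A 1 marks a token that attends globally (typically
# the [CLS] token); a 0 marks local sliding-window attention.
# The helper name and shapes are illustrative assumptions.

def make_global_attention_mask(seq_len, global_token_indices):
    """Return a list of 0/1 flags of length seq_len, with 1s at the
    positions that should use global attention."""
    mask = [0] * seq_len
    for idx in global_token_indices:
        mask[idx] = 1
    return mask

# Example: global attention on the first ([CLS]) token only.
mask = make_global_attention_mask(8, [0])
```

Because this input has no counterpart in the standard encoder exports, ORTModel's generic input handling cannot feed it without a special case.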

Motivation

Before going forward, it would be good to benchmark longformer exported with transformers onnx against the custom implementation at https://github.com/microsoft/onnxruntime/tree/main/onnxruntime/python/tools/transformers/models/longformer

Your contribution

I can look into how we could support it, but I'm worried that adding custom cases like this to ORTModel adds overhead.

@fxmarty fxmarty added the onnxruntime Related to ONNX Runtime label Nov 17, 2022
@AdriandLiu

As an alternative, would it be possible/reasonable to use another model's config to optimize longformer? For example, putting

 {
  "model_type": "bert"
}

in config.json to optimize longformer? Thanks @fxmarty
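The suggested workaround would amount to patching the exported model's config.json before running the optimizer. A sketch of that edit (the function name and path are illustrative; whether the BERT fusion passes produce a correct graph for longformer's sliding-window/global attention is exactly the open question here):

```python
# Sketch of the suggested workaround: overwrite the model_type field
# in the exported model's config.json so the ONNX Runtime transformer
# optimizer treats the graph as a BERT-style model. The helper name
# and path are illustrative assumptions; correctness of the resulting
# fused graph for longformer is not established.
import json
from pathlib import Path

def patch_model_type(config_path, new_type="bert"):
    """Rewrite model_type in a config.json, keeping other keys."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    config["model_type"] = new_type
    path.write_text(json.dumps(config, indent=2))
    return config

# Usage (path is illustrative):
# patch_model_type("onnx_export/config.json")
```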
