
[Improvement] Test with onnx models and TensorRT engines. #758

Merged (11 commits) on Apr 7, 2021

Conversation

irvingzhang0512
Contributor

@irvingzhang0512 irvingzhang0512 commented Mar 26, 2021

Fix #614

Description

  • Run tools/test.py with ONNX models and TensorRT engines instead of PyTorch models.
  • For now, only fixed input shapes and single-GPU testing are supported.

Usage

  • Export ONNX models with tools/pytorch2onnx.py.
  • Generate TensorRT engines with the official TensorRT tools.
  • Make sure the input tensors of the ONNX/TensorRT models and the output tensors of the dataset share the same shape.
    • The ONNX/TensorRT model input shape is set by [--shape ${SHAPE}] in tools/pytorch2onnx.py.
    • The dataset output shape is determined by videos_per_gpu (batch size) and the test pipeline (crop/resize steps, num_clips for dense sampling, test-time augmentations such as ThreeCrop/TenCrop/twice_sample, etc.).
    • It is recommended to remove all test-time augmentations:
      • The input shape of the ONNX/TensorRT model should be (1, num_segments, 3, height, width) for 2D recognizers and (1, 1, 3, num_segments, height, width) for 3D recognizers.
      • Set videos_per_gpu=1 and remove the test augmentation pipelines (ThreeCrop/TenCrop/twice_sample, etc.):
data = dict(
    videos_per_gpu=8,  # default value for train/val/test
    workers_per_gpu=4,
    test_dataloader=dict(videos_per_gpu=1), # specific value for test
    train=dict(...),
    val=dict(...),
    test=dict(...),
)
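The shape convention above can be captured in a small helper. This is a hypothetical illustration of the rule stated in this PR, not part of the MMAction2 API; `recognizer` and `expected_input_shape` are made-up names:

```python
def expected_input_shape(recognizer, num_segments, height, width):
    """Expected ONNX/TensorRT input shape when all test-time
    augmentations are removed and videos_per_gpu=1.

    recognizer: '2d' or '3d' (hypothetical helper, not MMAction2 code).
    """
    if recognizer == '2d':
        # 2D recognizers: (batch, num_segments, channels, H, W)
        return (1, num_segments, 3, height, width)
    if recognizer == '3d':
        # 3D recognizers: (batch, num_crops, channels, num_segments, H, W)
        return (1, 1, 3, num_segments, height, width)
    raise ValueError(f'unknown recognizer type: {recognizer}')

print(expected_input_shape('2d', 8, 224, 224))   # (1, 8, 3, 224, 224)
print(expected_input_shape('3d', 32, 224, 224))  # (1, 1, 3, 32, 224, 224)
```

The `--shape` passed to tools/pytorch2onnx.py should match the tuple this returns for your model type.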
  • Run the test script:
python tools/test.py /path/to/config.py /path/to/model.onnx --onnx --out test.json

python tools/test.py /path/to/config.py /path/to/model.trt --tensorrt --out test.json
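Putting the steps together, the end-to-end workflow might look like the following command outline. The paths and shape values are placeholders, and trtexec (NVIDIA's stock engine builder shipped with TensorRT) is assumed as the "official tensorrt tool"; flags on pytorch2onnx.py may differ by version:

```shell
# 1. Export the ONNX model with a fixed input shape (2D recognizer, 8 segments).
python tools/pytorch2onnx.py /path/to/config.py /path/to/checkpoint.pth \
    --shape 1 8 3 224 224 --output-file model.onnx

# 2. Build a TensorRT engine from the ONNX model.
trtexec --onnx=model.onnx --saveEngine=model.trt

# 3. Run the test script against either artifact.
python tools/test.py /path/to/config.py model.onnx --onnx --out test.json
python tools/test.py /path/to/config.py model.trt --tensorrt --out test.json
```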

TODO

  • code in tools/test.py
  • docs in getting_started.md
  • test all kinds of models
    • TSM
    • I3D
    • TSN
    • SlowOnly
    • SlowFast

@irvingzhang0512 irvingzhang0512 mentioned this pull request Mar 26, 2021
@codecov

codecov bot commented Mar 26, 2021

Codecov Report

Merging #758 (513bfc3) into master (a2cbd11) will increase coverage by 0.04%.
The diff coverage is n/a.

❗ Current head 513bfc3 differs from pull request most recent head 8be7a9b. Consider uploading reports for the commit 8be7a9b to get more accurate results

@@            Coverage Diff             @@
##           master     #758      +/-   ##
==========================================
+ Coverage   85.15%   85.19%   +0.04%     
==========================================
  Files         130      130              
  Lines        9418     9418              
  Branches     1591     1591              
==========================================
+ Hits         8020     8024       +4     
+ Misses       1000      997       -3     
+ Partials      398      397       -1     
Flag Coverage Δ
unittests 85.18% <ø> (+0.03%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmaction/datasets/pipelines/augmentations.py 94.85% <0.00%> (+0.50%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@irvingzhang0512
Contributor Author

Any comments on this PR? @innerlee

I have no idea how to add a unittest for ONNX/TensorRT inference.

@dreamerlin dreamerlin requested a review from innerlee March 27, 2021 07:00
@dreamerlin
Collaborator

cc. @RunningLeon

@RunningLeon

@dreamerlin Currently, we only add unit tests for plugins of TensorRT.

@innerlee
Contributor

@congee524 Could you have a try? Will merge this if you can run it successfully :)

@congee524
Contributor

OKK

@congee524
Contributor

Hey, I have tested the script and it passed.
Since we don't support doing average_clip after inferring the class scores in ONNX models, I think it's necessary to add a note in the docs to remind users,
i.e. don't use ThreeCrop, TenCrop, twice_sample, etc. to test ONNX models that were exported with tools/pytorch2onnx.py.
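One reason multi-clip testing can't simply be patched up after ONNX inference: averaging class scores and applying softmax do not commute, so post-hoc averaging of raw exported scores is not equivalent to the in-model average_clip='prob' behavior. A stdlib-only illustration with made-up two-class logits:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up per-clip class scores for a 2-class model.
clip_a = [2.0, 0.0]
clip_b = [0.0, 4.0]

# In-model behavior (average_clip='prob'): softmax per clip, then average.
prob_then_avg = [(p + q) / 2 for p, q in zip(softmax(clip_a), softmax(clip_b))]

# Post-hoc behavior on raw exported scores: average first, then softmax.
avg_then_prob = softmax([(p + q) / 2 for p, q in zip(clip_a, clip_b)])

print(prob_then_avg[0], avg_then_prob[0])  # noticeably different values
```

Hence the recommendation above: keep the test pipeline single-clip/single-crop when testing exported ONNX models.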

@irvingzhang0512
Contributor Author

  1. For the clip_average issue, I think we should add docs to pytorch2onnx.py rather than to this PR. Maybe we should also add an option to export ONNX models with softmax; I'll look into it.

  2. I'll add some docs for the ThreeCrop/TenCrop issue.
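One way such an export-with-softmax option could work is to wrap the recognizer so the exported graph ends in a softmax node. This is a hypothetical sketch, not the actual pytorch2onnx.py implementation; `nn.Linear` stands in for a real recognizer and `WithSoftmax` is a made-up name:

```python
import torch
import torch.nn as nn

class WithSoftmax(nn.Module):
    """Append softmax so the exported ONNX graph emits probabilities."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        return self.model(x).softmax(dim=-1)

backbone = nn.Linear(4, 3)  # stand-in for a real recognizer
wrapped = WithSoftmax(backbone)
probs = wrapped(torch.randn(1, 4))
print(probs.shape)  # each row now sums to 1
# The wrapped module would then be passed to torch.onnx.export, e.g.:
# torch.onnx.export(wrapped, torch.randn(1, 4), 'model_softmax.onnx')
```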

@irvingzhang0512 irvingzhang0512 mentioned this pull request Apr 1, 2021
@congee524
Contributor

Happy April Fool's Day :p

@innerlee innerlee merged commit 6a252b8 into open-mmlab:master Apr 7, 2021
@innerlee
Contributor

innerlee commented Apr 7, 2021

Thanks!

@irvingzhang0512 irvingzhang0512 deleted the inference-onnx-tensorrt branch April 12, 2021 09:41
Development

Successfully merging this pull request may close these issues.

Inference after converting from PyTorch to Onnx to TensorRT