Apache TVM v0.8 Release Note Candidate #9416
Comments
Better to replace "Vulkan backend" in the "major exciting experimental features" section with "Improved Vulkan backend", since the Vulkan backend has been around for a long time by now.
@masahi @Lunderberg Yeah, I totally agree! Would you suggest more details, like "improved Vulkan backend on ..."? Thanks a lot!
OK, replaced with "Improved Vulkan backend" in the overview, and added the following, unusually detailed text under "Codegen Backends and Runtime" to show off our Vulkan capability: A critical bug fix in the SPIR-V codegen allows the Vulkan backend to produce correct outputs on more hardware and drivers. Added support for querying device-specific hardware parameters and capabilities, dynamic shapes, irregular ops such as sorting and NMS, UBO, fp16, and vectorization. We can now run complicated models like MaskRCNN on Vulkan end to end.
Should we wait for PyTorch TVM PR #8777? It should be merged soon.
Thanks @junrushao1994! There are 2 parts I think we may need to fix. For the Accepted RFCs part,
it should be ... And for the Frontends part,
I have checked the v0.8 branch; there are 6 pull requests merged, and we could list them all like the others.
Thank you @masahi for helping edit the description for Vulkan! It looks pretty nice to me :-) Thanks @jiangjiajun for proofreading the PaddlePaddle-related text. Yep, these commits were not there a month ago when we collected the initial changelog draft. Thanks to @vinx13, who acted swiftly and added these commits to both the release notes and the changelog today; the PRs you mentioned are included in the latest draft!
Does this mean we will update the v0.8 branch again this week? I merged a new pull request early this week, #9428. Will it be in v0.8?
@jiangjiajun Yes, we will update the v0.8 branch and cut a release candidate on Nov 8, 2021. After the cut, we will ask the community and PMC members to test the release, and if there is no regression we will make the release official.
I have tested the v0.8 release on microTVM physical hardware for the Arduino and Zephyr platforms. It passes the tests for this hardware:
The release candidate v0.8.rc0 is approved:
Apache TVM v0.8 Release Note
Overview
Apache TVM v0.8 brings several major exciting experimental features, including:
In addition, the community has been working together to refactor and evolve the existing infrastructure, including but not limited to:
Full changelog: https://gist.github.com/junrushao1994/c669905dbc41edc2e691316df49d8562.
Accepted RFCs
The community has adopted a formal RFC process. Below is a list of the formal RFCs accepted by the community since then:
`tir.allocate` nodes
Features and Improvements
TE, TIR, TVMScript
Schedule primitives in TensorIR: `compute-inline`, `reverse-compute-inline`, `fuse`, `split`, `rfactor`, `storage-align`, `vectorize`, `unroll`, `bind`, `reorder`, `cache-read`, `cache-write`, `compute-at`, `reverse-compute-at`, `decompose-reduction` #8170 #8467 #8544 #8693 #8716 #8767 #8863 #8943 #9041
`specialize` #8354
`PointerType` #8017 #8366 #8463
AutoTVM, AutoScheduler, Meta Schedule
Operator Coverage
Training
Relay
MicroTVM, AOT, Graph Executor and VM
`set_output_zero_copy` in graph executor #8497
Arithmetic Analysis
Frontends
Codegen Backends and Runtime
LLVM backend: recover LLVM support on windows; support target feature strings in function attributes; atomic support in NVPTX, ROCm; LLVM compatibility to LLVM 12+ #9305 #9223 #9138 #8860 #8958 #6763 #6698 #6717 #6738 #8293 #6907 #7051
ROCm 3.9 bitcode files search #6865
Vulkan and SPIR-V refactoring and major improvement in codegen and runtime. A critical bug fix in the SPIR-V codegen allows the Vulkan backend to produce correct outputs on more hardware and drivers. Added support for querying device-specific hardware parameters and capabilities, dynamic shapes, irregular ops such as sorting and NMS, UBO, fp16, and vectorization. We can now run complicated models like MaskRCNN on Vulkan end to end. #8904 #7833 #7717 #7681 #8746 #8813 #7609 #8882 #7607 #7591 #7574 #7572 #7833 #6662 #7969 #8013 #8048 #8098 #8102 #8107 #8127 #8151 #8196 #8320 #8588 #8332 #8333 #8348 #8528
Metal language version upgrade (`MTLLanguageVersion2_3`), better codegen support, int64 support, various bug fixes #7830 #7819 #7714 #7118 #7116 #7105 #7980 #8054 #8175 #8202 #8206 #8313
OpenCL, VTA, Verilator: refactored code generator, better error messages, various bug fixes #7834 #7777 #7761 #7100 #6125 #6126 #6191 #7834 #8256 #8257 #8731 #8756 #8973
CUDA: enable `__launch_bounds__`, dynamic shared memory, TensorCore, BF16, half2, NVCC version upgrade #9341 #8678 #7561 #7273 #7146 #7147 #7099 #7065 #7033 #7014 #7907 #7964 #9087 #8135 #8137 #8457 #8466 #8571
ARM: CMSIS-NN, Ethos-N #8653 #7628 #8951 #7506 #7443 #7858 #6982 #8795 #8806 #8833 #9147 #9159 #9160 #9162 #9163 #9167 #9209 #9386 #9387
Hexagon: build, compilation, model launcher, more target options and better runtime #7784 #6718 #8821 #8822 #9033 #8823 #8859 #8865 #8915 #8954 #9024 #9025 #8960 #8986 #9010 #9011 #9189 #9220 #9355 #9356
WASM: Update support for latest emcc, add ffi test. #6751
BYOC Integration with Vendor Libraries: TensorRT, ACL, VitisAI
TVMC
`--disable-pass` and `--config` #7816 #8253
Rust Binding
Misc