Sync upstream with origin #8
Merged
Conversation
This PR ties up the last loose end of the recent CI update.
* Use real output name instead of node_name
* Add PyTorch max_pool2d_with_indices converter
* Add test for maxpool2d with indices
* Add explicit assert for single output
* Only consume output (not indices) from max_pool2d_with_indices
* Undo change
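For context, a minimal PyTorch sketch (not the TVM converter code itself) of the two-output operator these commits handle: with `return_indices=True`, max pooling returns both the pooled values and the argmax indices, and the converter described above only consumes the first output.

```python
import torch
import torch.nn.functional as F

# max_pool2d with return_indices=True yields (pooled values, argmax indices);
# the frontend converter described above only consumes the pooled output.
x = torch.randn(1, 3, 8, 8)
pooled, indices = F.max_pool2d(x, kernel_size=2, return_indices=True)
print(pooled.shape)   # torch.Size([1, 3, 4, 4])
print(indices.shape)  # torch.Size([1, 3, 4, 4])
```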
* Fixed pooling bug.
* Added tests and fixed more cases.
* [RELAY][TF] Support symbolic newshape for Reshape
* Only need to pass data
* Use MakeReshape() in Reshape()
* Change newshape to Expr
* Create a template for Array<T>
* Fuse reshape when newshape is constant
* Make newshape Optional
* Use bool() of Optional
Co-authored-by: Li Xiaoquan <xiaoquan.li@denglin.ai>
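As a rough illustration of what "symbolic newshape" means here (a sketch against the public Relay Python API, not the code in this commit), the target shape can now be an expression rather than only a compile-time constant:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 4, 2), dtype="float32")

# Constant newshape: the shape is a static attribute and can be fused as before.
static = relay.reshape(x, newshape=(8,))

# Symbolic newshape: the target shape is itself a Relay expression.
shape = relay.var("shape", shape=(1,), dtype="int64")
dynamic = relay.reshape(x, newshape=shape)

print(relay.Function([x], static))
print(relay.Function([x, shape], dynamic))
```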
…pass (#5422)
* [RELAY] Specify additional layouts in convert layout pass
* This patch means that you can specify an additional layout, rather than using the layout chosen by default during conversion.
* This is specifically useful for external codegen when a 3rd party library needs to target a specific kernel layout, for example. Change-Id: I3ef9cf45ead574801870a38af9768f93e29aab10
* Use mapping of op name to list of desired layouts. Change-Id: Ibd691a3cb93e73a394f36112668ad52a84c7d5a2
* Fix issue with code block. Change-Id: Ibb4e38c05ad4312b7dea845be699b8d5d57e0a94
* Address comments, improve tutorial. Change-Id: Ib824eead329d551c338234de3b2d814693afd0ec
* Fix linting. Change-Id: Ie9e1891f590b3a7496a56ff8362cdda9d4b5fa75
* Test uses NCHW default layout. Unrelated issue with NHWC. Change-Id: I1c16f0db73db56f5e9536db3fe5eb2624c3b595c
* Fix mistake in tutorial. Change-Id: I944041245d27af262dc96f1cd8117f1f19272062
* Address multiple comments. Change-Id: If33a1e34acd8fc37d1c7797ee189a6448a392672
* Improve tutorial. Change-Id: Ib04142c94c7958ab5067947d2ff4c84354e3d0c5
* Fix clang-format. Change-Id: Ieff39e3f0817d22579c68b3287e972a3b0fcfbc8
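Usage-wise, the resulting interface takes a mapping of op names to lists of desired layouts. A minimal sketch following the documented ConvertLayout usage (the toy module, shapes, and names below are illustrative only):

```python
import tvm
from tvm import relay

# Toy NHWC convolution to be converted.
data = relay.var("data", shape=(1, 56, 56, 64), dtype="float32")
weight = relay.var("weight", shape=(3, 3, 64, 64), dtype="float32")
out = relay.nn.conv2d(data, weight, channels=64, kernel_size=(3, 3),
                      padding=(1, 1), data_layout="NHWC", kernel_layout="HWIO")
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))

# Map each op name to a list of desired layouts; "default" lets the pass
# pick a kernel layout that matches the chosen data layout.
desired_layouts = {"nn.conv2d": ["NCHW", "default"]}
seq = tvm.transform.Sequential([
    relay.transform.InferType(),
    relay.transform.ConvertLayout(desired_layouts),
])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
print(mod)
```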
Signed-off-by: Giuseppe Rossini <giuseppe.rossini@arm.com>
* Previously this function placed a JSON-escaped string containing the JSON-encoded graph.
* Overestimate binary size for microTVM compiled binaries.
* Currently uTVM binary section sizes are computed by summing the sizes of all symbols in the section.
* This method produces errors because it presumes the linker works in a particular way, rather than analyzing the linked output.
* As we intend to move away from linking inside TVM (RFC forthcoming), just using this stopgap to make forward progress until then.
* Address weberlo comments
* Fix regression (use 64-bit word size)
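To make the stopgap concrete, here is a rough, hypothetical sketch of the "sum the symbol sizes per section" approach using pyelftools; it is not the TVM implementation, and the helper name is made up for illustration.

```python
from collections import defaultdict
from elftools.elf.elffile import ELFFile  # pip install pyelftools

def estimate_section_sizes(obj_path):
    """Approximate per-section sizes by summing symbol sizes (hypothetical helper)."""
    sizes = defaultdict(int)
    with open(obj_path, "rb") as f:
        elf = ELFFile(f)
        symtab = elf.get_section_by_name(".symtab")
        for sym in symtab.iter_symbols():
            shndx = sym["st_shndx"]
            if isinstance(shndx, int):  # skip SHN_UNDEF, SHN_ABS, etc.
                sizes[elf.get_section(shndx).name] += sym["st_size"]
    return dict(sizes)

# Example: estimate_section_sizes("model.o") -> {".text": ..., ".rodata": ...}
```

An estimate like this ignores alignment padding and linker-generated content, which is exactly why it can diverge from the size of the linked output.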
* Start on memory planning (WIP). Move to test_memory_passes.py. Work on memory planning. Post-rebase and VM changes. Plumb through the offsets. Basic tests all pass; fix offset to data buffer. Fix compile errors. Fix whitespace. Apply suggestions from code review (Co-Authored-By: Haichen Shen <shenhaichen@gmail.com>). Address CR. Update src/runtime/vm/vm.cc (Co-Authored-By: Haichen Shen <shenhaichen@gmail.com>). Assorted lint and debugging fixups.
* Fix docs
* Disable aggressive constant eval
* It works
* Fix lint
* Found issue with dynamic
* Fix the pass, but runtime segfaults
* Fix scalar tensor; test_any_elemwise passes
* Fix split pass
* Fix 0-rank issues
* Debug
* Apply Haichen's patch and clean up
* Fix lint
* Fix serializer and test_tyck_alloc_tensor test
* Fix the constant lift pass in presence of closures
* Restore old finder
* Fix rebase issues
* Fix issue coercing the shapes incorrectly from i64 to i32
* Fix linting
* Fix clang-format
* Format memory.cc
* Fix 0-rank case
* Add fix for (0,) shape
* Ignore shapes for now
* Apply suggestions from code review (Co-authored-by: Zhi <5145158+zhiics@users.noreply.github.com>)
* Update src/runtime/vm/executable.cc (Co-authored-by: Zhi <5145158+zhiics@users.noreply.github.com>)
Co-authored-by: Zhi Chen <chzhi@amazon.com>
Co-authored-by: Zhi <5145158+zhiics@users.noreply.github.com>
- The predicates were not correctly applied after transformation. This led to a normal reduction itervar appearing outside of its loop, which is undefined. See detailed comments.
Signed-off-by: Wei Pan <weip@nvidia.com>
* [REFACTOR][IR] Streamline ir/op Registry. This PR refactors the attr registry mechanism in ir/op into a separate common base. The common base will provide a foundation for other attr-related registries such as target and pass. We also streamline the terminology of the registry API:
  - Use AttrMap for the column maps returned by the registry.
  - Use RegEntry to refer to the registry entry.
* Address review comments
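The refactor above concerns the C++ registry internals, but the user-visible lookup path can be sketched from Python (assuming a standard TVM build; attribute names such as "TOpPattern" are registered by Relay):

```python
import tvm
from tvm import relay  # importing relay ensures the op attributes are registered

# Look up a registered operator and read one of its attribute "columns".
op = tvm.ir.Op.get("nn.conv2d")
print(op.name)                    # nn.conv2d
print(op.get_attr("TOpPattern"))  # fusion pattern registered for conv2d
```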
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
* Fix shfl intrin
* Improve test_lower_warp_memory_cuda_half_a_warp
Co-authored-by: Zeng Liyong <liyong.zeng@streamcomputing.com>