merge #3

Merged

Xuxue1 merged 357 commits into Xuxue1:master on Mar 19, 2021

Conversation

@Xuxue1 (Owner) commented Mar 19, 2021

Thanks for contributing to TVM! Please refer to the contributor guidelines at https://tvm.apache.org/docs/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from reviewers by @-mentioning them in the pull request thread.

comaniac and others added 30 commits January 25, 2021 19:27
[AutoScheduler] Separate shapes from DAG hash and enable schedule sharing (#7317)

* [AutoScheduler] Separate shapes from DAG hash and enable schedule sharing

* Update CI logs

* lint

* fix registry

* add message; fix layout rewrite mismatch

* update message

* support other formats
[FIX] Infer input shape in sparse_dense_padded's alter_op if one does not exist (#7308)

* [FIX] Infer input shape in sparse_dense_padded's alter_op if one does not exist

If there are multiple alter_ops in a model, the first alteration does
not run type inference for the subsequent ones. In this case, we don't
have the shape information, so we run the inferencer manually.

* add todo
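
For context, running the Relay type inferencer manually on a free-standing expression typically looks like the sketch below (the `infer_type` helper name is illustrative, not part of this PR):

```python
import tvm
from tvm import relay

def infer_type(expr):
    """Illustrative helper: run type inference on a free-standing Relay expression."""
    mod = tvm.IRModule.from_expr(expr)
    mod = relay.transform.InferType()(mod)
    return mod["main"].body

x = relay.var("x", shape=(8, 16), dtype="float32")
y = relay.nn.relu(x)
print(infer_type(y).checked_type)  # Tensor[(8, 16), float32]
```
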
catching polymorphic type 'struct dmlc::Error' by value
* add more gradients

* add documentation
* In #5921, resource_handle was added as a parameter to
   TVMBackendPackedCFunc, which is the typedef for functions called by
   LibraryModule's function lookup.
 * It appears TVM_DLL_EXPORT_TYPED_FUNC was overlooked in that PR,
   although no runtime effects are known so far. However, this change
   makes the definition proper to avoid compiler warnings and
   debug-tool problems.
 * See also https://discuss.tvm.apache.org/t/rfc-misra-c-changes-for-rpc-support/7098/5
* Add cumsum relay/topi op

* relay tests working

* add torch frontend converter

* fix for importing detr

* fix bad merge

* begin cuda cumsum

* support non innermost axis

* support rank higher than 3

* making binop parameter

* fix overflow issue in thrust scan

* generic binop parameter working

* relay test working

* fixed for bool input

* remove pytorch change

* fix pylint

* doc update

* Update python/tvm/topi/cumsum.py

Co-authored-by: Tristan Konolige <[email protected]>

* Update tests/python/relay/test_op_level3.py

Co-authored-by: Tristan Konolige <[email protected]>

* add example outputs

* add supported input and output dtype in thrust log

* adding more loop var names

* fix cpplint

* fix missing check for the cuda target in nms thrust sort

* parallelize cpu cumsum

* making binop argument tir function

* update doc for binop

* doc update

Co-authored-by: Tristan Konolige <[email protected]>
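
The "binop parameter" commits above generalize cumsum from a plain running sum into a scan over an arbitrary binary op. The semantics can be illustrated in plain Python (an illustration of the idea, not the TOPI implementation):

```python
from itertools import accumulate
import operator

data = [3, 1, 4, 1, 5]

# cumsum is a scan whose binary op is addition ...
print(list(accumulate(data, operator.add)))  # [3, 4, 8, 9, 14]

# ... and swapping in another op (e.g. max) yields a different scan
print(list(accumulate(data, max)))           # [3, 3, 4, 4, 5]
```
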
…ORT_PACKED_FUNC macros in packed_func.h. This is a patch PR for #7388. (#7343)

Co-authored-by: JC Li <[email protected]>
…#7342)

If the user has a dmlc-core directory next to the tvm directory, this
dmlc-core directory will be incorrectly used when compiling files with
cc.py.
[PRNG] Add check to PRNG to make sure that unsigned integer arithmetic is wrapping (#7287)

* [PRNG] Add check to PRNG to make sure that unsigned integer arithmetic is wrapping

* Add threefry_test_wrapping: a manual test for wrapping unsigned arithmetic.

* fix test to actually run on the target

* formatting

* lint
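
The check above exists because Threefry assumes unsigned integer arithmetic wraps modulo 2^64. The behavior being asserted can be shown in plain Python by emulating 64-bit wrap-around with a mask (illustration only):

```python
MASK64 = (1 << 64) - 1  # emulate uint64 wrap-around

def add_u64(a, b):
    """Wrapping 64-bit unsigned addition, as the PRNG kernels assume."""
    return (a + b) & MASK64

print(add_u64((1 << 64) - 1, 1))  # 0, not 2**64
```
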
* add conversion for detr

* remove explicit broadcast_to before batched matmul

* use take with wrap mode

* add test for transformer and negative indices

* add sort and argsort

* add logical_and

* support masked_select

* add gpu targets to masked_select test

* improve sort conversion
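
The "take with wrap mode" change handles negative (and out-of-range) indices by wrapping them around the axis length; numpy's take has the same mode and shows the intended semantics (illustration only, not the converter code):

```python
import numpy as np

a = np.array([10, 20, 30, 40])

# -1 wraps to the last element and 5 wraps to index 1 instead of erroring
print(np.take(a, [-1, 0, 5], mode="wrap"))  # [40 10 20]
```
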
* [AutoScheduler] Enable schedule sharing in dispatch context

* Update python/tvm/auto_scheduler/dispatcher.py
* add post nms topk to max_out_size rewrite

* add argsort conversion

* scatter pattern first cut

* matching seems to working

* dup matching fixed

* add converter

* conversion seems working

* add reshape, use take

* remove pytorch argsort converter

* update test

* add doc
* fix unstable compute

* fix

* fix

* lint

* sort linear equation

* sort inequalities

* fix

* fix find

* lint

* fix find

* lint
* Add test for array loop.

* Fixed scalar issue.

* Formatting.

* Fix injective schedule for dynamic shapes.
… wildcard, allow grouping via dominator analysis (#7355)
* Various fixes to get nRF5340 working. Not yet there.

* nRF5340 test runs locally.

* Various fixes to get nRF5340 working. Not yet there.

* nRF5340 test runs locally.

* Add `nrfjprog --recover` for nRF5340DK

* Cleanup.

* Remove debugging code.

* Revert submodule update.

* Remove debugging code.

* Fix comment.

* Remove -keys argument.

* Adding some debugging code

* Fix passing west command to ZephyrFlasher.

* Various fixes to get nRF5340 working. Not yet there.

* nRF5340 test runs locally.

* Add `nrfjprog --recover` for nRF5340DK

* Cleanup.

* Various fixes to get nRF5340 working. Not yet there.

* nRF5340 test runs locally.

* Remove debugging code.

* Fix comment.

* Remove -keys argument.

* Fix merge.
* [Frontend][Tensorflow] Sparse dense matmul adjoint option added

* [1] Review comments handled

* [2] Review comments handled

* [3] Review comments handled
When using a pattern with function attributes, such attrs
mostly do not exist on op nodes. Therefore, a hasattr
check has to be done for op nodes.

Change-Id: Ia313ab34be95ccc793c32fd8e5e5ef566b78685b
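
For reference, attribute predicates in the Python dataflow pattern language are written roughly as below; the fix above is about the matcher tolerating op nodes that lack such attrs (a sketch, not code from this change):

```python
from tvm.relay.dataflow_pattern import is_op, wildcard

# Match a conv2d call whose attributes include data_layout == "NCHW".
pat = is_op("nn.conv2d")(wildcard(), wildcard()).has_attr({"data_layout": "NCHW"})
```
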
* [RUNTIME] Improve error messages for TypedPackedFunc

- TypedPackedFunc now prints the function name when the incorrect number
  of arguments is passed.
- TypedPackedFunc now prints the function name and which argument when
  an argument cannot be converted to the correct type.

* check argument conversion by template deducing argument types

* switch from template approach to TVMMovableArgValueWithContext

* move passes back into cc files

* remove error message prefixes

* Remove TVM_ICHECK_TYPE_CODE. Rename name to optional_name.

* revert changes to module pass for later PR

* reverted too much

* documentation

* formatting

* more docs

* unify error message language. TypedPackedFunc constructor that does not take a name

* Update include/tvm/runtime/packed_func.h

Co-authored-by: Junru Shao <[email protected]>

Co-authored-by: Junru Shao <[email protected]>
* fix an error in the dynamic Full Type Relation

* Add Diagnostic Errors to Broadcast Type Relations
…ed bugs. (#7364)

* Add testing for datatypes and fix related bugs.

* Fix lint issue in onnx.
* use lowercase for verilator runtime registry function

* lint fix

* update comment
tkonolige and others added 29 commits March 11, 2021 20:25
Moves the auto-scheduler matmul example into the tutorial and
expands it to follow the flow of the larger getting started tutorial.
Intended to follow the AutoTVM tutorial on matrix multiplication.
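
A condensed sketch of the tutorial flow described above, using the auto_scheduler Python API of that era (workload shape and trial count are placeholders):

```python
import tvm
from tvm import te, auto_scheduler

@auto_scheduler.register_workload
def matmul(N, M, K, dtype):
    A = te.placeholder((N, K), name="A", dtype=dtype)
    B = te.placeholder((K, M), name="B", dtype=dtype)
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    return [A, B, C]

target = tvm.target.Target("llvm")
task = auto_scheduler.SearchTask(func=matmul, args=(128, 128, 128, "float32"), target=target)

log_file = "matmul.json"
tune_option = auto_scheduler.TuningOptions(
    num_measure_trials=10,  # placeholder; the tutorial uses more trials
    measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
)
task.tune(tune_option)
sch, args = task.apply_best(log_file)  # recover the best schedule found so far
```
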
* Allow tvmc compile --target options to accept dots
 * Adds testing for dot separator in quoted and unquoted
   values
 * Add an "unquoting" conditional so that quoted and
   unquoted strings look the same when parsed
…ases (#7654)

* Adds comments to document the regex being used to parse the
   --target=value string
 * Concatenate test cases without reducing the number of asserts
   or number of actual tests
…ile (#7663)

* When we use a file with --target, the validation in place was only
   checking whether it was a valid path. When the path is a directory,
   this causes a crash when tvmc then tries to open the path.
 * This fix moves the check to be strictly for files, not just any
   valid path
Co-authored-by: Akira Maruoka <[email protected]>
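
The file-versus-path distinction behind the #7663 fix is the usual one between checking for any existing path and checking specifically for a regular file, roughly (illustration, not the tvmc code itself):

```python
import os

path = "model_config.json"  # hypothetical file passed to --target

os.path.exists(path)  # True for files *and* directories -- the old, too-weak check
os.path.isfile(path)  # True only for regular files -- what the fix validates
```
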
* Add OpAttrContext class, which allows temporarily changing an attribute of an operator

Change-Id: I19b809a105ea8769e56bd89e028e090959a08728

* Replace TempOpAttr with OpAttrContext in arm_compute_lib.py

Change-Id: I1c42dd6a29e765b06ce28192397016efeea2e82a
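
The general shape of such a helper is a context manager that swaps an operator attribute in on entry and restores the original on exit. A generic sketch under that assumption (not the actual OpAttrContext implementation):

```python
from contextlib import contextmanager
from tvm import relay

@contextmanager
def temporarily_override_attr(op_name, attr_key, new_value):
    """Sketch: temporarily replace an operator attribute, restoring it afterwards."""
    op = relay.op.get(op_name)
    original = op.get_attr(attr_key)
    op.reset_attr(attr_key)
    op.set_attr(attr_key, new_value)
    try:
        yield
    finally:
        op.reset_attr(attr_key)
        if original is not None:
            op.set_attr(attr_key, original)
```
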
* [Relay][Pass] Simplify consecutive transpose/layout_transform

* lint

* fix

* support negative

* comment
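
The simplification folds back-to-back transposes into one transpose by composing their axis permutations; numpy demonstrates the identity the pass relies on (layout_transform is handled analogously):

```python
import numpy as np

x = np.random.rand(2, 3, 4)
p1, p2 = [1, 2, 0], [0, 2, 1]

two_steps = np.transpose(np.transpose(x, p1), p2)
combined = np.transpose(x, [p1[i] for i in p2])  # compose the two permutations

assert np.array_equal(two_steps, combined)
```
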
* Mergepath sort with odd-even block sort

* fix lint, add test

* respond to review comments

* speed up tests by reducing dtype skews

* fix bad rebase

* change threading to support vulkan

* fix lint

* only sort if the data is non-empty

* fix lint again

* fix for vk

* move if to higher scope

* fix typo

Co-authored-by: Masahiro Masuda <[email protected]>
* Getting Started with TVM: TVMC Tutorial

An update of the TVMC tutorial; it follows the introduction
and installation sections of the new getting started tutorial

* Update tutorials/get_started/tvmc_command_line_driver.py

Co-authored-by: Leandro Nunes <[email protected]>

* Style and formatting fixes

Co-authored-by: Leandro Nunes <[email protected]>
* [Torch] Remove unnecessary reshapes for batch_matmul

* lint

* fix

* reorder

* lint
* add graph runtime cuGraph poc

* lint format

* add unittest

* fix review comments

* Update CMakeLists.txt

Co-authored-by: Cody Yu <[email protected]>

* build cuda graph runtime in gpu test

* Revert "build cuda graph runtime in gpu test"

This reverts commit f286711.

* rename cuGraph to CUDA Graph

* rename cuda_graph

* rename cuda_graph

* lint format

* Update src/runtime/graph/graph_runtime_factory.cc

Co-authored-by: Cody Yu <[email protected]>

* Update python/tvm/testing.py

Co-authored-by: Cody Yu <[email protected]>

* fix lint error

* remove unnecessary warn

* add test, fix lint

* fix lint W0223

Co-authored-by: Cody Yu <[email protected]>
* [TOPI] Dense cuda schedule support dynamic dimension

* [TOPI] batch_matmul cublas te computation support dynamism

* [Frontend] tensorflow frontend: dynamic support for BatchMatmul

* [TOPI] nn batch_matmul te computation support dynamism

* fix CI

* Update python/tvm/topi/nn/batch_matmul.py

Co-authored-by: Cody Yu <[email protected]>

* Update python/tvm/topi/cuda/batch_matmul.py

Co-authored-by: Cody Yu <[email protected]>

* remove concat_dynamic_shape function

* update topi dense op integer checking

* fix ci

* Update python/tvm/relay/frontend/tensorflow.py

Co-authored-by: Cody Yu <[email protected]>

* Update batch_matmul.py

* [Frontend] add test for batch_matmul in dynamic shaped case

Co-authored-by: Cody Yu <[email protected]>
* Relax simulated qnn tests to prevent flakiness.

* Change name of helper to make pytest happy.
* add support for optional args for frontends tvmc

* remove unnecessary comments

* Add changes suggested by Matt W. via PR

Co-authored-by: Jocelyn <[email protected]>
* [RUNTIME] Add libbacktrace for backtraces with line numbers

Co-authored-by: Robert Kimball <[email protected]>
@Xuxue1 merged commit dab3740 into Xuxue1:master on Mar 19, 2021