diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index 59825d69d0d4..fd27294ecc46 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -1 +1 @@ -Thanks for contributing to TVM! Please refer to the guidelines at https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/dmlc/tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-mentioning them in the pull request thread. +Thanks for contributing to TVM! Please refer to the guidelines at https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-mentioning them in the pull request thread. diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md index 027c465aaab0..f198e86893ee 100644 --- a/CONTRIBUTORS.md +++ b/CONTRIBUTORS.md @@ -112,7 +112,7 @@ We do encourage everyone to work on anything they are interested in. - [Lianmin Zheng](https://github.com/merrymercy): @merrymercy ## List of Contributors -- [Full List of Contributors](https://github.com/dmlc/tvm/graphs/contributors) +- [Full List of Contributors](https://github.com/apache/incubator-tvm/graphs/contributors) - To contributors: please add your name to the list. - [Qiao Zhang](https://github.com/zhangqiaorjc) - [Haolong Zhang](https://github.com/haolongzhangm) diff --git a/apps/android_deploy/README.md b/apps/android_deploy/README.md index a786738ea9e8..0a81ffd26d6a 100644 --- a/apps/android_deploy/README.md +++ b/apps/android_deploy/README.md @@ -34,7 +34,7 @@ Alternatively, you may execute Docker image we provide which contains the require ### Build APK -Before you build the Android application, please refer to [TVM4J Installation Guide](https://github.com/dmlc/tvm/blob/master/jvm/README.md) and install tvm4j-core to your local maven repository. You can find the tvm4j dependency declared in `app/build.gradle`. Modify it if necessary. +Before you build the Android application, please refer to [TVM4J Installation Guide](https://github.com/apache/incubator-tvm/blob/master/jvm/README.md) and install tvm4j-core to your local maven repository. You can find the tvm4j dependency declared in `app/build.gradle`. Modify it if necessary. ``` dependencies { @@ -124,7 +124,7 @@ If everything goes well, you will find compile tools in `/opt/android-toolchain- Follow the instructions to get a compiled model for the Android target [here](http://docs.tvm.ai/deploy/android.html). -Copy the compiled model files deploy_lib.so, deploy_graph.json and deploy_param.params to apps/android_deploy/app/src/main/assets/ and modify the TVM flavor settings in [java](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/app/src/main/java/ml/dmlc/tvm/android/demo/MainActivity.java#L81) +Copy the compiled model files deploy_lib.so, deploy_graph.json and deploy_param.params to apps/android_deploy/app/src/main/assets/ and modify the TVM flavor settings in [java](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/ml/dmlc/tvm/android/demo/MainActivity.java#L81) `CPU Version flavor` ``` diff --git a/apps/android_rpc/README.md b/apps/android_rpc/README.md index 1f2a46a8589c..39a15808245d 100644 --- a/apps/android_rpc/README.md +++ b/apps/android_rpc/README.md @@ -28,7 +28,7 @@ You will need JDK, [Android NDK](https://developer.android.com/ndk) and an Andro We use [Gradle](https://gradle.org) to build.
Please follow [the installation instructions](https://gradle.org/install) for your operating system. -Before you build the Android application, please refer to [TVM4J Installation Guide](https://github.com/dmlc/tvm/blob/master/jvm/README.md) and install tvm4j-core to your local maven repository. You can find the tvm4j dependency declared in `app/build.gradle`. Modify it if necessary. +Before you build the Android application, please refer to [TVM4J Installation Guide](https://github.com/apache/incubator-tvm/blob/master/jvm/README.md) and install tvm4j-core to your local maven repository. You can find the tvm4j dependency declared in `app/build.gradle`. Modify it if necessary. ``` dependencies { @@ -146,7 +146,7 @@ android 1 1 0 ``` -Then check out [android\_rpc/tests/android\_rpc\_test.py](https://github.com/dmlc/tvm/blob/master/apps/android_rpc/tests/android_rpc_test.py) and run: +Then check out [android\_rpc/tests/android\_rpc\_test.py](https://github.com/apache/incubator-tvm/blob/master/apps/android_rpc/tests/android_rpc_test.py) and run: ```bash # Specify the RPC tracker @@ -157,7 +157,7 @@ export TVM_NDK_CC=/opt/android-toolchain-arm64/bin/aarch64-linux-android-g++ python android_rpc_test.py ``` -This will compile TVM IR to shared libraries (CPU, OpenCL and Vulkan) and run vector addition on your Android device. To verify the compiled TVM IR shared libraries on the OpenCL target, set `'test_opencl = True'`; for the Vulkan target, set `'test_vulkan = True'` in [tests/android_rpc_test.py](https://github.com/dmlc/tvm/blob/master/apps/android_rpc/tests/android_rpc_test.py). By default it will execute on the CPU target. +This will compile TVM IR to shared libraries (CPU, OpenCL and Vulkan) and run vector addition on your Android device. To verify the compiled TVM IR shared libraries on the OpenCL target, set `'test_opencl = True'`; for the Vulkan target, set `'test_vulkan = True'` in [tests/android_rpc_test.py](https://github.com/apache/incubator-tvm/blob/master/apps/android_rpc/tests/android_rpc_test.py). By default it will execute on the CPU target. On my test device, it gives the following results. ```bash diff --git a/apps/benchmark/README.md b/apps/benchmark/README.md index 93eb94e8b847..8fce04238afd 100644 --- a/apps/benchmark/README.md +++ b/apps/benchmark/README.md @@ -20,7 +20,7 @@ ## Results -See the results on the wiki page https://github.com/dmlc/tvm/wiki/Benchmark +See the results on the wiki page https://github.com/apache/incubator-tvm/wiki/Benchmark ## How to Reproduce @@ -78,7 +78,7 @@ python3 -m tvm.exec.rpc_tracker `python3 -m tvm.exec.rpc_server --tracker=10.77.1.123:9190 --key=rk3399`, where 10.77.1.123 is the IP address of the tracker. * For Android device - * Build and install the TVM RPC APK on your device ([Help](https://github.com/dmlc/tvm/tree/master/apps/android_rpc)). + * Build and install the TVM RPC APK on your device ([Help](https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc)). Make sure you can pass the Android RPC test. Then you already know how to register. 3.
Verify the device registration diff --git a/apps/sgx/README.md b/apps/sgx/README.md index 10dbcd94c586..b01cc80390de 100644 --- a/apps/sgx/README.md +++ b/apps/sgx/README.md @@ -39,7 +39,7 @@ Check out the `/tvm/install/ubuntu_install_sgx.sh` for the commands to get these If using Docker, start by running ``` -git clone --recursive https://github.com/dmlc/tvm.git +git clone --recursive https://github.com/apache/incubator-tvm.git docker run --rm -it -v $(pwd)/tvm:/mnt tvmai/ci-cpu /bin/bash ``` then, in the container diff --git a/conda/tvm-libs/meta.yaml b/conda/tvm-libs/meta.yaml index e3422a2174ef..faea61445f79 100644 --- a/conda/tvm-libs/meta.yaml +++ b/conda/tvm-libs/meta.yaml @@ -43,6 +43,6 @@ requirements: - {{ pin_compatible('cudnn', lower_bound='7.6.0', max_pin='x') }} # [cuda] about: - home: https://github.com/dmlc/tvm + home: https://github.com/apache/incubator-tvm license: Apache2 summary: a low level domain specific language for compiling tensor computation pipelines \ No newline at end of file diff --git a/conda/tvm/meta.yaml b/conda/tvm/meta.yaml index 78a95cbde194..fe4752e321ef 100644 --- a/conda/tvm/meta.yaml +++ b/conda/tvm/meta.yaml @@ -58,7 +58,7 @@ test: - python -m pytest -v tests/python/integration about: - home: https://github.com/dmlc/tvm + home: https://github.com/apache/incubator-tvm license: Apache-2.0 license_family: Apache summary: a low level domain specific language for compiling tensor computation pipelines diff --git a/docker/Dockerfile.demo_android b/docker/Dockerfile.demo_android index 6f8720c9eb3e..0c6b3165c5b3 100644 --- a/docker/Dockerfile.demo_android +++ b/docker/Dockerfile.demo_android @@ -56,7 +56,7 @@ RUN git clone https://github.com/KhronosGroup/OpenCL-Headers /usr/local/OpenCL-H # Build TVM RUN cd /usr && \ - git clone --depth=1 https://github.com/dmlc/tvm --recursive && \ + git clone --depth=1 https://github.com/apache/incubator-tvm --recursive && \ cd /usr/tvm && \ mkdir -p build && \ cd build && \ diff --git a/docker/Dockerfile.demo_opencl b/docker/Dockerfile.demo_opencl index ec2154efe5b2..573b8356f6f5 100644 --- a/docker/Dockerfile.demo_opencl +++ b/docker/Dockerfile.demo_opencl @@ -62,7 +62,7 @@ RUN echo "Cloning TVM source & submodules" ENV TVM_PAR_DIR="/usr" RUN mkdir -p TVM_PAR_DIR && \ cd ${TVM_PAR_DIR} && \ - git clone --depth=1 https://github.com/dmlc/tvm --recursive + git clone --depth=1 https://github.com/apache/incubator-tvm --recursive #RUN git submodule update --init --recursive diff --git a/docker/install/install_tvm_cpu.sh b/docker/install/install_tvm_cpu.sh index efe2d21975b7..e2840653fa74 100755 --- a/docker/install/install_tvm_cpu.sh +++ b/docker/install/install_tvm_cpu.sh @@ -21,7 +21,7 @@ set -u set -o pipefail cd /usr -git clone --depth=1 https://github.com/dmlc/tvm --recursive +git clone --depth=1 https://github.com/apache/incubator-tvm --recursive cd /usr/tvm # checkout a hash-tag git checkout 4b13bf668edc7099b38d463e5db94ebc96c80470 diff --git a/docker/install/install_tvm_gpu.sh b/docker/install/install_tvm_gpu.sh index e91cd9a6176e..bb51e0150342 100755 --- a/docker/install/install_tvm_gpu.sh +++ b/docker/install/install_tvm_gpu.sh @@ -21,7 +21,7 @@ set -u set -o pipefail cd /usr -git clone --depth=1 https://github.com/dmlc/tvm --recursive +git clone --depth=1 https://github.com/apache/incubator-tvm --recursive cd /usr/tvm # checkout a hash-tag git checkout 4b13bf668edc7099b38d463e5db94ebc96c80470 diff --git a/docs/contribute/community.rst b/docs/contribute/community.rst index d252eedd0659..f6ea51419f6a 100644 --- 
a/docs/contribute/community.rst +++ b/docs/contribute/community.rst @@ -20,7 +20,7 @@ TVM Community Guideline ======================= -TVM adopts the Apache style model and is governed by merit. We believe that it is important to create an inclusive community where everyone can use, contribute to, and influence the direction of the project. See `CONTRIBUTORS.md <https://github.com/dmlc/tvm/blob/master/CONTRIBUTORS.md>`_ for the current list of contributors. +TVM adopts the Apache style model and is governed by merit. We believe that it is important to create an inclusive community where everyone can use, contribute to, and influence the direction of the project. See `CONTRIBUTORS.md <https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md>`_ for the current list of contributors. diff --git a/docs/contribute/document.rst b/docs/contribute/document.rst index 5df43adbfdb8..0c429d65ee3a 100644 --- a/docs/contribute/document.rst +++ b/docs/contribute/document.rst @@ -68,7 +68,7 @@ Be careful to leave blank lines between sections of your documents. In the above case, there has to be a blank line before `Parameters`, `Returns` and `Examples` in order for the doc to be built correctly. To add a new function to the doc, we need to add the `sphinx.autodoc `_ -rules to the `docs/api/python <https://github.com/dmlc/tvm/tree/master/docs/api/python>`_. +rules to the `docs/api/python <https://github.com/apache/incubator-tvm/tree/master/docs/api/python>`_. You can refer to the existing files under this folder on how to add the functions. @@ -96,7 +96,7 @@ to add comments about code logic to improve readability. Write Tutorials --------------- We use the `sphinx-gallery `_ to build python tutorials. -You can find the source code under `tutorials <https://github.com/dmlc/tvm/tree/master/tutorials>`_; it is quite self-explanatory. +You can find the source code under `tutorials <https://github.com/apache/incubator-tvm/tree/master/tutorials>`_; it is quite self-explanatory. One thing worth noting is that the comment blocks are written in reStructuredText instead of markdown, so be aware of the syntax. The tutorial code will run on our build server to generate the document page. diff --git a/docs/deploy/android.md b/docs/deploy/android.md index 78d67a76b756..b71d515d549e 100644 --- a/docs/deploy/android.md +++ b/docs/deploy/android.md @@ -38,5 +38,5 @@ deploy_lib.so, deploy_graph.json, deploy_param.params will go to android target. ## TVM Runtime for Android Target -Refer [here](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/README.md#build-and-installation) to build the CPU/OpenCL flavor of the TVM runtime for the Android target. -Refer to this [java](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/app/src/main/java/ml/dmlc/tvm/android/demo/MainActivity.java) sample source for loading and executing a model via the Android Java TVM API. +Refer [here](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/README.md#build-and-installation) to build the CPU/OpenCL flavor of the TVM runtime for the Android target. +Refer to this [java](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/ml/dmlc/tvm/android/demo/MainActivity.java) sample source for loading and executing a model via the Android Java TVM API.
diff --git a/docs/deploy/cpp_deploy.md b/docs/deploy/cpp_deploy.md index 3fc5732613c3..3a99846c0820 100644 --- a/docs/deploy/cpp_deploy.md +++ b/docs/deploy/cpp_deploy.md @@ -18,7 +18,7 @@ Deploy TVM Module using C++ API =============================== -We provide an example of how to deploy TVM modules in [apps/howto_deploy](https://github.com/dmlc/tvm/tree/master/apps/howto_deploy) +We provide an example of how to deploy TVM modules in [apps/howto_deploy](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy) To run the example, you can use the following command @@ -34,17 +34,17 @@ The only thing we need is to link to a TVM runtime on your target platform. TVM provides a minimum runtime, which costs around 300K to 600K depending on how many modules we use. In most cases, we can use ```libtvm_runtime.so``` that comes with the build. -If somehow you find it is hard to build ```libtvm_runtime```, check out [tvm_runtime_pack.cc](https://github.com/dmlc/tvm/tree/master/apps/howto_deploy/tvm_runtime_pack.cc). +If somehow you find it is hard to build ```libtvm_runtime```, check out [tvm_runtime_pack.cc](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/tvm_runtime_pack.cc). It is an all-in-one example file that gives you the TVM runtime. You can compile this file using your build system and include it in your project. -You can also check out [apps](https://github.com/dmlc/tvm/tree/master/apps/) for example applications built with TVM on iOS, Android and others. +You can also check out [apps](https://github.com/apache/incubator-tvm/tree/master/apps/) for example applications built with TVM on iOS, Android and others. Dynamic Library vs. System Module --------------------------------- TVM provides two ways to use the compiled library. -You can check out [prepare_test_libs.py](https://github.com/dmlc/tvm/tree/master/apps/howto_deploy/prepare_test_libs.py) -on how to generate the library and [cpp_deploy.cc](https://github.com/dmlc/tvm/tree/master/apps/howto_deploy/cpp_deploy.cc) on how to use them. +You can check out [prepare_test_libs.py](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/prepare_test_libs.py) +on how to generate the library and [cpp_deploy.cc](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/cpp_deploy.cc) on how to use them. - Store library as a shared library and dynamically load the library into your project. - Bundle the compiled library into your project in system module mode. diff --git a/docs/deploy/index.rst b/docs/deploy/index.rst index 0dd1886b7ff2..0f4401c47391 100644 --- a/docs/deploy/index.rst +++ b/docs/deploy/index.rst @@ -38,7 +38,7 @@ on a Linux based embedded system such as Raspberry Pi: .. code:: bash - git clone --recursive https://github.com/dmlc/tvm + git clone --recursive https://github.com/apache/incubator-tvm cd tvm mkdir build cp cmake/config.cmake build diff --git a/docs/deploy/nnvm.md b/docs/deploy/nnvm.md index 7299b3fae0db..4040de35ea54 100644 --- a/docs/deploy/nnvm.md +++ b/docs/deploy/nnvm.md @@ -144,7 +144,7 @@ This process needs a few additional options, as given below, for the NNVM build. Module export requires additional options to not compile but save as ```lib.export_library (path, fcompile=False)``` The output of the above API is a tar-compressed file containing an object file ```(lib.o)``` and a cpp source file ```(devc.cc)``` which embeds the device blob. These two files should be compiled along with other files or objects while building the c++ application.
-Please refer to the [Makefile](https://github.com/dmlc/tvm/tree/master/apps/howto_deploy/Makefile#L32) for reference. +Please refer to the [Makefile](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/Makefile#L32) for reference. The C++ code to load this system module requires the change below. diff --git a/docs/dev/inferbound.rst b/docs/dev/inferbound.rst index 1b74cabf01cc..d9fedf8296ef 100644 --- a/docs/dev/inferbound.rst +++ b/docs/dev/inferbound.rst @@ -19,7 +19,7 @@ InferBound Pass ******************************************* -The InferBound pass is run after normalize, and before ScheduleOps `build_module.py `_. The main job of InferBound is to create the bounds map, which specifies a Range for each IterVar in the program. These bounds are then passed to ScheduleOps, where they are used to set the extents of For loops, see `MakeLoopNest `_, and to set the sizes of allocated buffers (`BuildRealize `_), among other uses. +The InferBound pass is run after normalize, and before ScheduleOps `build_module.py `_. The main job of InferBound is to create the bounds map, which specifies a Range for each IterVar in the program. These bounds are then passed to ScheduleOps, where they are used to set the extents of For loops, see `MakeLoopNest `_, and to set the sizes of allocated buffers (`BuildRealize `_), among other uses. The output of InferBound is a map from IterVar to Range: @@ -50,9 +50,9 @@ Therefore, let's review the Range and IterVar classes: }; } -Note that IterVarNode also contains a Range ``dom``. This ``dom`` may or may not have a meaningful value, depending on when the IterVar was created. For example, when ``tvm.compute`` is called, an `IterVar is created `_ for each axis and reduce axis, with doms equal to the shape supplied in the call to ``tvm.compute``. +Note that IterVarNode also contains a Range ``dom``. This ``dom`` may or may not have a meaningful value, depending on when the IterVar was created. For example, when ``tvm.compute`` is called, an `IterVar is created `_ for each axis and reduce axis, with doms equal to the shape supplied in the call to ``tvm.compute``. -On the other hand, when ``tvm.split`` is called, `IterVars are created `_ for the inner and outer axes, but these IterVars are not given a meaningful ``dom`` value. +On the other hand, when ``tvm.split`` is called, `IterVars are created `_ for the inner and outer axes, but these IterVars are not given a meaningful ``dom`` value. In any case, the ``dom`` member of an IterVar is never modified during InferBound. However, keep in mind that the ``dom`` member of an IterVar is sometimes used as the default value for the Ranges InferBound computes. @@ -114,7 +114,7 @@ Tensors haven't been mentioned yet, but in the context of TVM, a Tensor represen int value_index; }; -In the Operation class declaration above, we can see that each operation also has a list of InputTensors. Thus the stages of the schedule form a DAG, where each stage is a node in the graph. There is an edge in the graph from Stage A to Stage B, if the operation of Stage B has an input tensor whose source operation is the op of Stage A. Put simply, there is an edge from A to B, if B consumes a tensor produced by A. See the diagram below. This graph is created at the beginning of InferBound, by a call to `CreateReadGraph `_. +In the Operation class declaration above, we can see that each operation also has a list of InputTensors. Thus the stages of the schedule form a DAG, where each stage is a node in the graph.
There is an edge in the graph from Stage A to Stage B, if the operation of Stage B has an input tensor whose source operation is the op of Stage A. Put simply, there is an edge from A to B, if B consumes a tensor produced by A. See the diagram below. This graph is created at the beginning of InferBound, by a call to `CreateReadGraph `_. .. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/docs/inferbound/stage_graph.png :align: center diff --git a/docs/dev/nnvm_overview.md b/docs/dev/nnvm_overview.md index a34d71846918..b4a8ee7ccb9f 100644 --- a/docs/dev/nnvm_overview.md +++ b/docs/dev/nnvm_overview.md @@ -19,7 +19,7 @@ # NNVM Design Overview NNVM is a reusable graph IR stack for deep learning systems. It provides useful APIs to construct, represent and transform computation graphs to get the high-level optimizations needed in deep learning. -As a part of the TVM stack for deep learning, NNVM also provides a shared compiler for deep learning frameworks to optimize, compile and deploy to different hardware backends via [TVM](https://github.com/dmlc/tvm) +As a part of the TVM stack for deep learning, NNVM also provides a shared compiler for deep learning frameworks to optimize, compile and deploy to different hardware backends via [TVM](https://github.com/apache/incubator-tvm) ## Key Requirements and Design Choices diff --git a/docs/dev/relay_add_pass.rst b/docs/dev/relay_add_pass.rst index 8c2fc9a21469..0910f9cfecf4 100644 --- a/docs/dev/relay_add_pass.rst +++ b/docs/dev/relay_add_pass.rst @@ -399,8 +399,8 @@ information about the pass manager interface can be found in :ref:`relay-pass-in Relay's standard passes are listed in `include/tvm/relay/transform.h`_ and implemented in `src/relay/pass/`_. -.. _include/tvm/relay/transform.h: https://github.com/dmlc/tvm/blob/master/include/tvm/relay/transform.h +.. _include/tvm/relay/transform.h: https://github.com/apache/incubator-tvm/blob/master/include/tvm/relay/transform.h -.. _src/relay/pass: https://github.com/dmlc/tvm/tree/master/src/relay/pass +.. _src/relay/pass: https://github.com/apache/incubator-tvm/tree/master/src/relay/pass -.. _src/relay/pass/fold_constant.cc: https://github.com/dmlc/tvm/blob/master/src/relay/pass/fold_constant.cc +.. _src/relay/pass/fold_constant.cc: https://github.com/apache/incubator-tvm/blob/master/src/relay/pass/fold_constant.cc diff --git a/docs/dev/relay_pass_infra.rst b/docs/dev/relay_pass_infra.rst index 98de347734ba..5c937454e41e 100644 --- a/docs/dev/relay_pass_infra.rst +++ b/docs/dev/relay_pass_infra.rst @@ -631,14 +631,14 @@ For more pass infra related examples in Python and C++, please refer to .. _Relay module: https://docs.tvm.ai/langref/relay_expr.html#module-and-global-functions -.. _include/tvm/relay/transform.h: https://github.com/dmlc/tvm/blob/master/include/tvm/relay/transform.h +.. _include/tvm/relay/transform.h: https://github.com/apache/incubator-tvm/blob/master/include/tvm/relay/transform.h -.. _src/relay/pass/pass_manager.cc: https://github.com/dmlc/tvm/blob/master/src/relay/pass/pass_manager.cc +.. _src/relay/pass/pass_manager.cc: https://github.com/apache/incubator-tvm/blob/master/src/relay/pass/pass_manager.cc -.. _src/relay/pass/fold_constant.cc: https://github.com/dmlc/tvm/blob/master/src/relay/pass/fold_constant.cc +.. _src/relay/pass/fold_constant.cc: https://github.com/apache/incubator-tvm/blob/master/src/relay/pass/fold_constant.cc -.. _python/tvm/relay/transform.py: https://github.com/dmlc/tvm/blob/master/python/tvm/relay/transform.py +..
_python/tvm/relay/transform.py: https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/transform.py -.. _tests/python/relay/test_pass_manager.py: https://github.com/dmlc/tvm/blob/master/tests/python/relay/test_pass_manager.py +.. _tests/python/relay/test_pass_manager.py: https://github.com/apache/incubator-tvm/blob/master/tests/python/relay/test_pass_manager.py -.. _tests/cpp/relay_transform_sequential.cc: https://github.com/dmlc/tvm/blob/master/tests/cpp/relay_transform_sequential.cc +.. _tests/cpp/relay_transform_sequential.cc: https://github.com/apache/incubator-tvm/blob/master/tests/cpp/relay_transform_sequential.cc diff --git a/docs/dev/runtime.rst b/docs/dev/runtime.rst index 3efb71d6ae30..ca50d62ef661 100644 --- a/docs/dev/runtime.rst +++ b/docs/dev/runtime.rst @@ -43,7 +43,7 @@ PackedFunc `PackedFunc`_ is a simple but elegant solution we found to solve the challenges listed. The following code block provides an example in C++ -.. _PackedFunc: https://github.com/dmlc/tvm/blob/master/include/tvm/runtime/packed_func.h +.. _PackedFunc: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/packed_func.h .. code:: c @@ -129,9 +129,9 @@ which allows us to embed the PackedFunc into any language. Besides python, so f `java`_ and `javascript`_. This philosophy of embedded API is much like Lua's, except that we don't have a new language but use C++. -.. _minimum C API: https://github.com/dmlc/tvm/blob/master/include/tvm/runtime/c_runtime_api.h -.. _java: https://github.com/dmlc/tvm/tree/master/jvm -.. _javascript: https://github.com/dmlc/tvm/tree/master/web +.. _minimum C API: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/c_runtime_api.h +.. _java: https://github.com/apache/incubator-tvm/tree/master/jvm +.. _javascript: https://github.com/apache/incubator-tvm/tree/master/web One fun fact about PackedFunc is that we use it for both the compiler and the deployment stack. @@ -139,7 +139,7 @@ One fun fact about PackedFunc is that we use it for both compiler and deployment - All TVM's compiler pass functions are exposed to the frontend as PackedFunc, see `here`_ - The compiled module also returns the compiled function as PackedFunc -.. _here: https://github.com/dmlc/tvm/tree/master/src/api +.. _here: https://github.com/apache/incubator-tvm/tree/master/src/api To keep the runtime minimal, we isolated the IR Node support from the deployment runtime. The resulting runtime takes around 200K - 600K depending on how many runtime driver modules (e.g., CUDA) get included. @@ -160,7 +160,7 @@ TVM defines the compiled object as `Module`_. The user can get the compiled function from Module as PackedFunc. The generated compiled code can dynamically get a function from Module at runtime. It caches the function handle in the first call and reuses it in subsequent calls. We use this to link device code and callback into any PackedFunc (e.g., python) from generated code. -.. _Module: https://github.com/dmlc/tvm/blob/master/include/tvm/runtime/module.h +.. _Module: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/module.h The ModuleNode is an abstract class that can be implemented by each type of device. So far we support modules for CUDA, Metal, OpenCL and loading dynamic shared libraries. This abstraction makes introduction @@ -276,17 +276,17 @@ Each argument in PackedFunc contains a union value `TVMValue`_ and a type code.
This design allows the dynamically typed language to convert to the corresponding type directly, and the statically typed language to do runtime type checking during conversion. -.. _TVMValue: https://github.com/dmlc/tvm/blob/master/include/tvm/runtime/c_runtime_api.h#L122 +.. _TVMValue: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/c_runtime_api.h#L122 The relevant files are - `packed_func.h`_ for the C++ API - `c_runtime_api.cc`_ for the C API and how to provide callbacks. -.. _packed_func.h: https://github.com/dmlc/tvm/blob/master/include/tvm/runtime/packed_func.h -.. _c_runtime_api.cc: https://github.com/dmlc/tvm/blob/master/src/runtime/c_runtime_api.cc#L262 +.. _packed_func.h: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/packed_func.h +.. _c_runtime_api.cc: https://github.com/apache/incubator-tvm/blob/master/src/runtime/c_runtime_api.cc#L262 To support extension types, we used a registry system to register type-related information, like support of any in C++, see `Extension types`_ for more details. -.. _Extension types: https://github.com/dmlc/tvm/tree/master/apps/extension +.. _Extension types: https://github.com/apache/incubator-tvm/tree/master/apps/extension diff --git a/docs/faq.md b/docs/faq.md index e587c5591d38..3161e3bff082 100644 --- a/docs/faq.md +++ b/docs/faq.md @@ -45,5 +45,5 @@ TVM's relation to libDNN cuDNN TVM can incorporate these libraries as external calls. One goal of TVM is to be able to generate high-performing kernels. We will evolve TVM in an incremental manner as we learn from the techniques of manual kernel crafting and add these as primitives in the DSL. -See also [TVM Operator Inventory](https://github.com/dmlc/tvm/tree/master/topi) for +See also [TVM Operator Inventory](https://github.com/apache/incubator-tvm/tree/master/topi) for recipes of operators in TVM. diff --git a/docs/frontend/tensorflow.rst b/docs/frontend/tensorflow.rst index 436a888b03b8..33cb7d44eb26 100644 --- a/docs/frontend/tensorflow.rst +++ b/docs/frontend/tensorflow.rst @@ -57,7 +57,7 @@ Export TensorFlow frontend expects a frozen protobuf (.pb) or saved model as input. It currently does not support checkpoint (.ckpt). The graphdef needed by the TensorFlow frontend can be extracted from the active session, or by using the `TFParser`_ helper class. -.. _TFParser: https://github.com/dmlc/tvm/blob/master/python/tvm/relay/frontend/tensorflow_parser.py +.. _TFParser: https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/frontend/tensorflow_parser.py The model should be exported with a number of transformations to prepare the model for inference. It is also important to set ```add_shapes=True```, as this will embed the output shapes of each node into the graph. Here is one function to export a model as a protobuf given a session: @@ -97,7 +97,7 @@ Import the Model Explicit Shape: ~~~~~~~~~~~~~~~ -To ensure shapes can be known throughout the entire graph, pass the ```shape``` argument to ```from_tensorflow```. This dictionary maps input names to input shapes. Please refer to these `test cases `_ for examples. +To ensure shapes can be known throughout the entire graph, pass the ```shape``` argument to ```from_tensorflow```. This dictionary maps input names to input shapes. Please refer to these `test cases `_ for examples.
Data Layout ~~~~~~~~~~~ diff --git a/docs/install/docker.rst b/docs/install/docker.rst index eb7331c0a1b7..fe2bb6a3e1ab 100644 --- a/docs/install/docker.rst +++ b/docs/install/docker.rst @@ -29,7 +29,7 @@ First, clone the TVM repo to get the auxiliary scripts .. code:: bash - git clone --recursive https://github.com/dmlc/tvm + git clone --recursive https://github.com/apache/incubator-tvm We can then use the following command to launch a `tvmai/demo-cpu` image. @@ -69,5 +69,5 @@ with ``localhost`` when pasting it into your browser. Docker Source ------------- -Check out `<https://github.com/dmlc/tvm/tree/master/docker>`_ if you are interested in +Check out `<https://github.com/apache/incubator-tvm/tree/master/docker>`_ if you are interested in building your own docker images. diff --git a/docs/install/from_source.rst b/docs/install/from_source.rst index 01708a1f7f73..e723687766bf 100644 --- a/docs/install/from_source.rst +++ b/docs/install/from_source.rst @@ -29,7 +29,7 @@ To get started, clone the TVM repo from GitHub. It is important to clone the submodu .. code:: bash - git clone --recursive https://github.com/dmlc/tvm + git clone --recursive https://github.com/apache/incubator-tvm For Windows users who use GitHub tools, you can open the Git shell and type the following command. diff --git a/docs/install/nnpack.md b/docs/install/nnpack.md index 035cf6029f09..3c97332b5eb8 100644 --- a/docs/install/nnpack.md +++ b/docs/install/nnpack.md @@ -85,7 +85,7 @@ sudo ldconfig ## Build TVM with NNPACK support ```bash -git clone --recursive https://github.com/dmlc/tvm +git clone --recursive https://github.com/apache/incubator-tvm ``` * Set `set(USE_NNPACK ON)` in config.cmake. diff --git a/docs/langref/relay_expr.rst b/docs/langref/relay_expr.rst index 4a999a95f851..1fd39bc90a3d 100644 --- a/docs/langref/relay_expr.rst +++ b/docs/langref/relay_expr.rst @@ -267,7 +267,7 @@ Operators An operator is a primitive operation, such as :code:`add` or :code:`conv2d`, not defined in the Relay language. Operators are declared in the global operator registry in C++. Many common operators are backed by TVM's -Tensor Operator Inventory (`TOPI <https://github.com/dmlc/tvm/tree/master/topi>`__). +Tensor Operator Inventory (`TOPI <https://github.com/apache/incubator-tvm/tree/master/topi>`__). To register an operator a user must provide an implementation of the operator, its type, and any other desired metadata. diff --git a/docs/vta/install.md b/docs/vta/install.md index 02c50fbba481..3d2f11fa5b9c 100644 --- a/docs/vta/install.md +++ b/docs/vta/install.md @@ -103,7 +103,7 @@ Because the direct board-to-computer connection prevents the board from directly mkdir <mountpoint> sshfs xilinx@192.168.2.99:/home/xilinx <mountpoint> cd <mountpoint> -git clone --recursive https://github.com/dmlc/tvm +git clone --recursive https://github.com/apache/incubator-tvm # When finished, you can leave the mountpoint and unmount the directory cd ~ sudo umount <mountpoint> @@ -375,7 +375,7 @@ Once the compilation completes, the generated bitstream can be found under `<tvm root>/vta/hardware/intel`. diff --git a/jvm/README.md b/jvm/README.md index c5996313447b..6b52f6b4a832 100644 --- a/jvm/README.md +++ b/jvm/README.md @@ -175,4 +175,4 @@ Server server = new Server(proxyHost, proxyPort, "key"); server.start(); ``` -You can also use `StandaloneServerProcessor` and `ConnectProxyServerProcessor` to build your own RPC server. Refer to [Android RPC Server](https://github.com/dmlc/tvm/blob/master/apps/android_rpc/app/src/main/java/ml/dmlc/tvm/tvmrpc/RPCProcessor.java) for more details. \ No newline at end of file +You can also use `StandaloneServerProcessor` and `ConnectProxyServerProcessor` to build your own RPC server.
Refer to [Android RPC Server](https://github.com/apache/incubator-tvm/blob/master/apps/android_rpc/app/src/main/java/ml/dmlc/tvm/tvmrpc/RPCProcessor.java) for more details. \ No newline at end of file diff --git a/jvm/pom.xml b/jvm/pom.xml index 150c3a00a894..797acf58f6db 100644 --- a/jvm/pom.xml +++ b/jvm/pom.xml @@ -7,7 +7,7 @@ tvm4j-parent 0.0.1-SNAPSHOT TVM4J Package - Parent - https://github.com/dmlc/tvm/tree/master/jvm + https://github.com/apache/incubator-tvm/tree/master/jvm TVM4J Package Distributed (Deep) Machine Learning Community @@ -22,7 +22,7 @@ scm:git:git@github.com:dmlc/tvm.git scm:git:git@github.com:dmlc/tvm.git - https://github.com/dmlc/tvm + https://github.com/apache/incubator-tvm diff --git a/nnvm/tutorials/deploy_model_on_mali_gpu.py b/nnvm/tutorials/deploy_model_on_mali_gpu.py index da7a6356432b..13ed59fe2935 100644 --- a/nnvm/tutorials/deploy_model_on_mali_gpu.py +++ b/nnvm/tutorials/deploy_model_on_mali_gpu.py @@ -53,7 +53,7 @@ # # .. code-block:: bash # -# git clone --recursive https://github.com/dmlc/tvm +# git clone --recursive https://github.com/apache/incubator-tvm # cd tvm # cp cmake/config.cmake . # sed -i "s/USE_OPENCL OFF/USE_OPENCL ON/" config.cmake diff --git a/nnvm/tutorials/deploy_model_on_rasp.py b/nnvm/tutorials/deploy_model_on_rasp.py index fb9905e16aa3..7acaf4ad9094 100644 --- a/nnvm/tutorials/deploy_model_on_rasp.py +++ b/nnvm/tutorials/deploy_model_on_rasp.py @@ -52,7 +52,7 @@ # # .. code-block:: bash # -# git clone --recursive https://github.com/dmlc/tvm +# git clone --recursive https://github.com/apache/incubator-tvm # cd tvm # make runtime -j4 # diff --git a/nnvm/tutorials/tune_nnvm_arm.py b/nnvm/tutorials/tune_nnvm_arm.py index 9de76e643fa5..d61130b852cc 100644 --- a/nnvm/tutorials/tune_nnvm_arm.py +++ b/nnvm/tutorials/tune_nnvm_arm.py @@ -31,7 +31,7 @@ these operators, it will query this log file to get the best knob values. We also released pre-tuned parameters for some arm devices. You can go to -`ARM CPU Benchmark <https://github.com/dmlc/tvm/wiki/Benchmark#arm-cpu>`_ +`ARM CPU Benchmark <https://github.com/apache/incubator-tvm/wiki/Benchmark#arm-cpu>`_ to see the results. """ @@ -157,7 +157,7 @@ def get_network(name, batch_size): # (replace :code:`[HOST_IP]` with the IP address of your host machine) # # * For Android: -# Follow this `readme page <https://github.com/dmlc/tvm/tree/master/apps/android_rpc>`_ to +# Follow this `readme page <https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc>`_ to # install the TVM RPC APK on the Android device. Make sure you can pass the Android RPC test. # Then you have already registered your device. During tuning, you have to go to developer options # and enable "Keep screen awake during charging" and charge your phone to make it stable. diff --git a/nnvm/tutorials/tune_nnvm_cuda.py b/nnvm/tutorials/tune_nnvm_cuda.py index 3c17d5dd7ff3..be3f79992cb6 100644 --- a/nnvm/tutorials/tune_nnvm_cuda.py +++ b/nnvm/tutorials/tune_nnvm_cuda.py @@ -31,7 +31,7 @@ these operators, it will query this log file to get the best knob values. We also released pre-tuned parameters for some NVIDIA GPUs. You can go to -`NVIDIA GPU Benchmark <https://github.com/dmlc/tvm/wiki/Benchmark#nvidia-gpu>`_ +`NVIDIA GPU Benchmark <https://github.com/apache/incubator-tvm/wiki/Benchmark#nvidia-gpu>`_ to see the results. """ diff --git a/nnvm/tutorials/tune_nnvm_mobile_gpu.py b/nnvm/tutorials/tune_nnvm_mobile_gpu.py index df7ab1a8f9a2..8946dc1833bd 100644 --- a/nnvm/tutorials/tune_nnvm_mobile_gpu.py +++ b/nnvm/tutorials/tune_nnvm_mobile_gpu.py @@ -31,7 +31,7 @@ these operators, it will query this log file to get the best knob values. We also released pre-tuned parameters for some arm devices. You can go to -`Mobile GPU Benchmark <https://github.com/dmlc/tvm/wiki/Benchmark#mobile-gpu>`_ +`Mobile GPU Benchmark <https://github.com/apache/incubator-tvm/wiki/Benchmark#mobile-gpu>`_ to see the results.
""" @@ -157,7 +157,7 @@ def get_network(name, batch_size): # (replace :code:`[HOST_IP]` with the IP address of your host machine) # # * For Android: -# Follow this `readme page `_ to +# Follow this `readme page `_ to # install tvm rpc apk on the android device. Make sure you can pass the android rpc test. # Then you have already registred your device. During tuning, you have to go to developer option # and enable "Keep screen awake during changing" and charge your phone to make it stable. diff --git a/python/setup.py b/python/setup.py index cb1bf40dfa7b..ad14df1e9c43 100644 --- a/python/setup.py +++ b/python/setup.py @@ -156,7 +156,7 @@ def get_package_data_files(): package_dir={'tvm': 'tvm'}, package_data={'tvm': get_package_data_files()}, distclass=BinaryDistribution, - url='https://github.com/dmlc/tvm', + url='https://github.com/apache/incubator-tvm', ext_modules=config_cython(), **setup_kwargs) diff --git a/python/tvm/_ffi/libinfo.py b/python/tvm/_ffi/libinfo.py index a8ef9cfb2df3..851a41a3ef28 100644 --- a/python/tvm/_ffi/libinfo.py +++ b/python/tvm/_ffi/libinfo.py @@ -54,7 +54,7 @@ def find_lib_path(name=None, search_path=None, optional=False): """ use_runtime = os.environ.get("TVM_USE_RUNTIME_LIB", False) - # See https://github.com/dmlc/tvm/issues/281 for some background. + # See https://github.com/apache/incubator-tvm/issues/281 for some background. # NB: This will either be the source directory (if TVM is run # inplace) or the install directory (if TVM is installed). diff --git a/rust/frontend/Cargo.toml b/rust/frontend/Cargo.toml index fa05c5695ebd..c6b56800ef59 100644 --- a/rust/frontend/Cargo.toml +++ b/rust/frontend/Cargo.toml @@ -20,8 +20,8 @@ name = "tvm-frontend" version = "0.1.0" license = "Apache-2.0" description = "Rust frontend support for TVM" -repository = "https://github.com/dmlc/tvm" -homepage = "https://github.com/dmlc/tvm" +repository = "https://github.com/apache/incubator-tvm" +homepage = "https://github.com/apache/incubator-tvm" readme = "README.md" keywords = ["rust", "tvm", "nnvm"] categories = ["api-bindings", "science"] diff --git a/rust/frontend/README.md b/rust/frontend/README.md index 4e11dd94994d..b77a4bd156ef 100644 --- a/rust/frontend/README.md +++ b/rust/frontend/README.md @@ -17,7 +17,7 @@ # TVM Runtime Frontend Support -This crate provides an idiomatic Rust API for [TVM](https://github.com/dmlc/tvm) runtime frontend. Currently this requires **Nightly Rust** and tested on `rustc 1.32.0-nightly` +This crate provides an idiomatic Rust API for [TVM](https://github.com/apache/incubator-tvm) runtime frontend. Currently this requires **Nightly Rust** and tested on `rustc 1.32.0-nightly` ## What Does This Crate Offer? diff --git a/rust/frontend/src/context.rs b/rust/frontend/src/context.rs index d147871a3968..e45f49b6dbf3 100644 --- a/rust/frontend/src/context.rs +++ b/rust/frontend/src/context.rs @@ -50,7 +50,7 @@ use tvm_common::ffi; use crate::{function, TVMArgValue}; /// Device type can be from a supported device name. See the supported devices -/// in [TVM](https://github.com/dmlc/tvm). +/// in [TVM](https://github.com/apache/incubator-tvm). /// /// ## Example /// diff --git a/rust/frontend/src/lib.rs b/rust/frontend/src/lib.rs index 9bf982ee29c8..208696c5a3f8 100644 --- a/rust/frontend/src/lib.rs +++ b/rust/frontend/src/lib.rs @@ -17,7 +17,7 @@ * under the License. */ -//! [TVM](https://github.com/dmlc/tvm) is a compiler stack for deep learning systems. +//! [TVM](https://github.com/apache/incubator-tvm) is a compiler stack for deep learning systems. 
//! //! This crate provides an idiomatic Rust API for TVM runtime frontend. //! diff --git a/rust/macros/Cargo.toml b/rust/macros/Cargo.toml index 15773b625be9..f44d86e6accd 100644 --- a/rust/macros/Cargo.toml +++ b/rust/macros/Cargo.toml @@ -20,7 +20,7 @@ name = "tvm-macros" version = "0.1.0" license = "Apache-2.0" description = "Proc macros used by the TVM crates." -repository = "https://github.com/dmlc/tvm" +repository = "https://github.com/apache/incubator-tvm" readme = "README.md" keywords = ["tvm"] authors = ["TVM Contributors"] diff --git a/rust/runtime/Cargo.toml b/rust/runtime/Cargo.toml index 3c81a93c9bbf..34acc77899e9 100644 --- a/rust/runtime/Cargo.toml +++ b/rust/runtime/Cargo.toml @@ -20,7 +20,7 @@ name = "tvm-runtime" version = "0.1.0" license = "Apache-2.0" description = "A static TVM runtime" -repository = "https://github.com/dmlc/tvm" +repository = "https://github.com/apache/incubator-tvm" readme = "README.md" keywords = ["tvm", "nnvm"] categories = ["api-bindings", "science"] diff --git a/rust/runtime/src/threading.rs b/rust/runtime/src/threading.rs index eb2f418473ed..3f25309741ec 100644 --- a/rust/runtime/src/threading.rs +++ b/rust/runtime/src/threading.rs @@ -296,7 +296,7 @@ pub(crate) fn sgx_join_threads() { ocall_packed!("__sgx_thread_group_join__", 0); } -// @see https://github.com/dmlc/tvm/issues/988 for information on why this function is used. +// @see https://github.com/apache/incubator-tvm/issues/988 for information on why this function is used. #[no_mangle] pub extern "C" fn TVMBackendParallelBarrier(_task_id: usize, penv: *const TVMParallelGroupEnv) { let barrier: &Arc<Barrier> = unsafe { &*((*penv).sync_handle as *const Arc<Barrier>) }; diff --git a/tests/python/relay/test_op_level2.py b/tests/python/relay/test_op_level2.py index 982161d9899f..487cb650cb8d 100644 --- a/tests/python/relay/test_op_level2.py +++ b/tests/python/relay/test_op_level2.py @@ -142,7 +142,7 @@ def run_test_conv2d(dtype, out_dtype, scale, dshape, kshape, x, w, (1, 1), "SAME")) # CUDA is disabled for 'direct' schedule: - # https://github.com/dmlc/tvm/pull/3070#issuecomment-486597553 + # https://github.com/apache/incubator-tvm/pull/3070#issuecomment-486597553 # group conv2d dshape = (1, 32, 18, 18) kshape = (32, 4, 3, 3) diff --git a/tests/python/unittest/test_graph_tuner_core.py b/tests/python/unittest/test_graph_tuner_core.py index 7dc2e3da29fa..1d8e2efda5b2 100644 --- a/tests/python/unittest/test_graph_tuner_core.py +++ b/tests/python/unittest/test_graph_tuner_core.py @@ -18,7 +18,7 @@ # NOTE: We name this test file to start with test_graph_tuner # to make it execute after zero_rank tensor test cases. This # helps avoid topi arithmetic operator overloading issue: -# https://github.com/dmlc/tvm/issues/3240. +# https://github.com/apache/incubator-tvm/issues/3240. # TODO: restore the file name after this issue is resolved. import os import copy diff --git a/tests/python/unittest/test_graph_tuner_utils.py b/tests/python/unittest/test_graph_tuner_utils.py index 67596a7bbb97..397ea235ecbf 100644 --- a/tests/python/unittest/test_graph_tuner_utils.py +++ b/tests/python/unittest/test_graph_tuner_utils.py @@ -18,7 +18,7 @@ # NOTE: We name this test file to start with test_graph_tuner # to make it execute after zero_rank tensor test cases. This # helps avoid topi arithmetic operator overloading issue: -# https://github.com/dmlc/tvm/issues/3240 +# https://github.com/apache/incubator-tvm/issues/3240 # TODO: restore the file name after this issue is resolved.
import tvm diff --git a/topi/python/setup.py b/topi/python/setup.py index f43e22e5ccf6..683717931378 100644 --- a/topi/python/setup.py +++ b/topi/python/setup.py @@ -115,7 +115,7 @@ def get_lib_path(): "decorator", ], packages=find_packages(), - url='https://github.com/dmlc/tvm', + url='https://github.com/apache/incubator-tvm', **setup_kwargs) diff --git a/topi/python/topi/x86/conv2d.py b/topi/python/topi/x86/conv2d.py index 925cb37d9e47..9ea93cd6b647 100644 --- a/topi/python/topi/x86/conv2d.py +++ b/topi/python/topi/x86/conv2d.py @@ -110,7 +110,7 @@ def _declaration_conv(cfg, data, kernel, strides, padding, dilation, layout, out kh, kw, _, _ = get_const_tuple(kernel.shape) if layout == 'HWCN': return nn.conv2d_hwcn(data, kernel, strides, padding, dilation, out_dtype) - # FIXME - https://github.com/dmlc/tvm/issues/4122 + # FIXME - https://github.com/apache/incubator-tvm/issues/4122 # _declaration_conv_nhwc_pack expects kernel layout to be HWOI. However, the tests use HWIO # layout. Commenting until we have clarity about the nhwc_pack implementation from the author. # elif layout == 'NHWC' and kh == 1 and kw == 1 and kernel.dtype == "int8": diff --git a/topi/python/topi/x86/conv2d_avx_1x1.py b/topi/python/topi/x86/conv2d_avx_1x1.py index 2a81dcc495d3..9726f3d8d4f9 100644 --- a/topi/python/topi/x86/conv2d_avx_1x1.py +++ b/topi/python/topi/x86/conv2d_avx_1x1.py @@ -251,7 +251,7 @@ def _schedule_conv_nhwc_pack_int8(s, cfg, data, conv_out, last): packing of the weight to make the address access friendly to the int8 intrinsic """ - # FIXME - https://github.com/dmlc/tvm/issues/3598 + # FIXME - https://github.com/apache/incubator-tvm/issues/3598 # pylint: disable=unreachable return s diff --git a/tutorials/autotvm/tune_relay_arm.py b/tutorials/autotvm/tune_relay_arm.py index 3079b5a5ae0c..fe2f94c2ad89 100644 --- a/tutorials/autotvm/tune_relay_arm.py +++ b/tutorials/autotvm/tune_relay_arm.py @@ -31,7 +31,7 @@ these operators, it will query this log file to get the best knob values. We also released pre-tuned parameters for some arm devices. You can go to -`ARM CPU Benchmark <https://github.com/dmlc/tvm/wiki/Benchmark#arm-cpu>`_ +`ARM CPU Benchmark <https://github.com/apache/incubator-tvm/wiki/Benchmark#arm-cpu>`_ to see the results. """ @@ -149,7 +149,7 @@ def get_network(name, batch_size): # (replace :code:`[HOST_IP]` with the IP address of your host machine) # # * For Android: -# Follow this `readme page <https://github.com/dmlc/tvm/tree/master/apps/android_rpc>`_ to +# Follow this `readme page <https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc>`_ to # install the TVM RPC APK on the Android device. Make sure you can pass the Android RPC test. # Then you have already registered your device. During tuning, you have to go to developer options # and enable "Keep screen awake during charging" and charge your phone to make it stable. diff --git a/tutorials/autotvm/tune_relay_cuda.py b/tutorials/autotvm/tune_relay_cuda.py index 9a3d971a1bc5..efce3cb9d832 100644 --- a/tutorials/autotvm/tune_relay_cuda.py +++ b/tutorials/autotvm/tune_relay_cuda.py @@ -31,7 +31,7 @@ these operators, it will query this log file to get the best knob values. We also released pre-tuned parameters for some NVIDIA GPUs. You can go to -`NVIDIA GPU Benchmark <https://github.com/dmlc/tvm/wiki/Benchmark#nvidia-gpu>`_ +`NVIDIA GPU Benchmark <https://github.com/apache/incubator-tvm/wiki/Benchmark#nvidia-gpu>`_ to see the results. """ diff --git a/tutorials/autotvm/tune_relay_mobile_gpu.py b/tutorials/autotvm/tune_relay_mobile_gpu.py index 804d2322241a..b9fb3b188570 100644 --- a/tutorials/autotvm/tune_relay_mobile_gpu.py +++ b/tutorials/autotvm/tune_relay_mobile_gpu.py @@ -31,7 +31,7 @@ these operators, it will query this log file to get the best knob values. We also released pre-tuned parameters for some arm devices.
You can go to -`Mobile GPU Benchmark <https://github.com/dmlc/tvm/wiki/Benchmark#mobile-gpu>`_ +`Mobile GPU Benchmark <https://github.com/apache/incubator-tvm/wiki/Benchmark#mobile-gpu>`_ to see the results. """ @@ -150,7 +150,7 @@ def get_network(name, batch_size): # (replace :code:`[HOST_IP]` with the IP address of your host machine) # # * For Android: -# Follow this `readme page <https://github.com/dmlc/tvm/tree/master/apps/android_rpc>`_ to +# Follow this `readme page <https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc>`_ to # install the TVM RPC APK on the Android device. Make sure you can pass the Android RPC test. # Then you have already registered your device. During tuning, you have to go to developer options # and enable "Keep screen awake during charging" and charge your phone to make it stable. diff --git a/tutorials/cross_compilation_and_rpc.py b/tutorials/cross_compilation_and_rpc.py index d5429dfa281b..5ae6eae77457 100644 --- a/tutorials/cross_compilation_and_rpc.py +++ b/tutorials/cross_compilation_and_rpc.py @@ -49,7 +49,7 @@ # # .. code-block:: bash # -# git clone --recursive https://github.com/dmlc/tvm +# git clone --recursive https://github.com/apache/incubator-tvm # cd tvm # make runtime -j2 # diff --git a/tutorials/frontend/deploy_model_on_android.py b/tutorials/frontend/deploy_model_on_android.py index 9969d0788ba0..d4d1fe263ed3 100644 --- a/tutorials/frontend/deploy_model_on_android.py +++ b/tutorials/frontend/deploy_model_on_android.py @@ -46,7 +46,7 @@ # # .. code-block:: bash # -# git clone --recursive https://github.com/dmlc/tvm +# git clone --recursive https://github.com/apache/incubator-tvm # cd tvm # docker build -t tvm.demo_android -f docker/Dockerfile.demo_android ./docker # docker run --pid=host -h tvm -v $PWD:/workspace \ @@ -105,7 +105,7 @@ # --------------------------------------- # Now we can register our Android device to the tracker. # -# Follow this `readme page <https://github.com/dmlc/tvm/tree/master/apps/android_rpc>`_ to +# Follow this `readme page <https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc>`_ to # install the TVM RPC APK on the Android device. # # Here is an example of config.mk. I enabled OpenCL and Vulkan. @@ -138,7 +138,7 @@ # # .. note:: # -# At this time, don't forget to `create a standalone toolchain `_ . +# At this time, don't forget to `create a standalone toolchain `_ . # # for example # diff --git a/tutorials/frontend/deploy_model_on_rasp.py b/tutorials/frontend/deploy_model_on_rasp.py index d19805bdc2fb..10869997497d 100644 --- a/tutorials/frontend/deploy_model_on_rasp.py +++ b/tutorials/frontend/deploy_model_on_rasp.py @@ -52,7 +52,7 @@ # # .. code-block:: bash # -# git clone --recursive https://github.com/dmlc/tvm +# git clone --recursive https://github.com/apache/incubator-tvm # cd tvm # mkdir build # cp cmake/config.cmake build diff --git a/vta/apps/tsim_example/README.md b/vta/apps/tsim_example/README.md index 0b8a359ba0ec..99740222869b 100644 --- a/vta/apps/tsim_example/README.md +++ b/vta/apps/tsim_example/README.md @@ -62,7 +62,7 @@ https://www.veripool.org/projects/verilator/wiki/Installing ## Setup in TVM 1. Install `verilator` and `sbt` as described above -2. Get TVM: `git clone https://github.com/dmlc/tvm.git` +2. Get TVM: `git clone https://github.com/apache/incubator-tvm.git` 3. Build [tvm](https://docs.tvm.ai/install/from_source.html#build-the-shared-library) ## How to run VTA TSIM examples diff --git a/web/README.md b/web/README.md index 72addb2b92a9..d8127f2fa69c 100644 --- a/web/README.md +++ b/web/README.md @@ -82,7 +82,7 @@ This will create ```build/libtvm_web_runtime.bc``` and ```build/libtvm_web_runti The general idea is to use TVM as normal and set the target to be ```llvm -target=asmjs-unknown-emscripten -system-lib```.
-The following code snippet from [tests/web/prepare_test_libs.py](https://github.com/dmlc/tvm/tree/master/tests/web/prepare_test_libs.py) demonstrates +The following code snippet from [tests/web/prepare_test_libs.py](https://github.com/apache/incubator-tvm/tree/master/tests/web/prepare_test_libs.py) demonstrates the compilation process. ```python @@ -114,7 +114,7 @@ The result js library is a library that contains both TVM runtime and the compil ## Run the Generated Library -The following code snippet from [tests/web/test_module_load.js](https://github.com/dmlc/tvm/tree/master/tests/web/test_module_load.js) demonstrates +The following code snippet from [tests/web/test_module_load.js](https://github.com/apache/incubator-tvm/tree/master/tests/web/test_module_load.js) demonstrates how to run the compiled library. ```js