Upstream dmlc 20190311 #13

Merged
merged 93 commits into from
Mar 12, 2019

Changes from all commits
93 commits
421b927
Fix fusion bug when call symbol that is not an operator. (#2630)
jroesch Feb 20, 2019
f409b69
[RUNTIME][NDArray] Allowing External Libraries to Subclass NDArrays (…
junrushao Feb 21, 2019
a1c2c43
Fix pylint 2.2.2 gripes. (#2642)
mshawcroft Feb 21, 2019
9284d6e
add MXNet converter for where operator for both NNVM and Relay (#2647)
haojin2 Feb 22, 2019
b84379a
[Quantization][RELAY] Add check against NCHWc ops in the quantization…
eqy Feb 22, 2019
8bb160a
Stop pylint complaining about useless import alias. (#2655)
mshawcroft Feb 22, 2019
3adb276
Explicitly disable pylint warning subprocess-popen-preexec-fn (#2656)
mshawcroft Feb 22, 2019
5876fc9
[RELAY][PASS]use attribute registration style in the mac count pass (…
yidawang Feb 22, 2019
b199401
[Relay] fix anf for reference and pattern matching (#2637)
MarisaKirisame Feb 22, 2019
f1adf2c
fix lint (#2649)
were Feb 22, 2019
e0ec87d
[RELAY/OP] Gradient of relay level1 ops (#2633)
ZihengJiang Feb 22, 2019
21f4f2d
Update community.rst
tqchen Feb 22, 2019
239face
[Relay] GNF (#2492)
MarisaKirisame Feb 22, 2019
0516127
add committer (#2661)
icemelon Feb 23, 2019
e457cd7
[Relay/TOPI][OP] Add arange op in Relay and TOPI (#2621)
icemelon Feb 23, 2019
d555b88
Fix -Wreturn-std-move and -Wself-assign-overloaded (#2669)
junrushao Feb 24, 2019
5f1e59d
[Relay] add more function to prelude (#2660)
MarisaKirisame Feb 25, 2019
722bcc8
[BUILD] Simplify after bind device type (#2670)
tqchen Feb 25, 2019
16623e7
[Hybrid Script] Add `max_num_threads` (#2672)
were Feb 26, 2019
f713ba0
fix (#2674)
MarisaKirisame Feb 26, 2019
526e692
[Relay] fix error in ANF (too agressively inline atomic expression an…
MarisaKirisame Feb 26, 2019
8d66e4d
Add CONCATENATION to tflite frontend, support Inception V3 (#2643)
ariwaranosai Feb 26, 2019
85dd805
[AUTOTVM][Bugfix] Fix history loader for heterogeneous execution
imorinaga Feb 27, 2019
a39f27a
[Graph Runtime] Run_individual for benchmarking individual layers (#2…
hlu1 Feb 27, 2019
1b70315
REGION op removed from topi and added in darkent frontend (#2275)
siju-samuel Feb 27, 2019
d85e780
yolo reorg op for relay (#1941)
siju-samuel Feb 27, 2019
8332af8
[Relay] Ensure nested higher-order functions are treated correctly (#…
slyubomirsky Feb 27, 2019
abad345
[Relay] add more descriptive error for checked_type (#2652)
MarisaKirisame Feb 27, 2019
38794e1
[Relay] Port param dict save/load from NNVM (#2620)
weberlo Feb 27, 2019
d5f6064
add converter for MXNet slice in nnvm and relay (#2662)
haojin2 Feb 27, 2019
9c2a4e1
[PYLINT] Disable consider-using-get (#2654)
mshawcroft Feb 27, 2019
5fcb16f
[DOC] CoreML frontend tutorial (#2667)
kazum Feb 27, 2019
8614a7c
Support mean in NNVM to Relay converter. (#2680)
lixiaoquan Feb 27, 2019
87dba82
Stop pylint complaining about unnecessary return statement. (#2684)
mshawcroft Feb 27, 2019
fad2597
[RUST] Fix typo (#2681)
take-cheeze Feb 27, 2019
6fee9f6
Handle Select in IntSetEvaluator (#2687)
derisavi Feb 27, 2019
6897874
[CODEGEN LLVM GPU] Initialize llvm before lookup for the target (#2683)
denis0x0D Feb 27, 2019
d0e5254
[RELAY] Fix get_int_tuple for shape like '(1001,)' (#2691)
lixiaoquan Feb 28, 2019
bce740d
[AUTOTVM] tweak `sample_int` implementation (#2677)
eqy Feb 28, 2019
6dd5cba
[Lang] Layout in TVM node system (#2509)
yzhliu Feb 28, 2019
86d746f
[DOC] Using External Libraries in Relay (#2694)
SiNZeRo Feb 28, 2019
7cfd579
[RELAY][PASS] Enable switching CanonicalizeOps in pass_enabled (#2696)
vinx13 Feb 28, 2019
462c88a
Docker updates (#2702)
mshawcroft Feb 28, 2019
97b35c3
[Relay][Doc] Separate arguments types formatting with comma (#2690)
wweic Feb 28, 2019
d68ab9a
[DOC] MXNet frontend tutorial (#2688)
kazum Feb 28, 2019
6f193a9
Few docs fixes (#2703)
ruslo Feb 28, 2019
27475e7
Pin pylint version 2.2.2 (#2698)
mshawcroft Mar 1, 2019
e8d54a0
[Relay] fix checkwellform (#2705)
MarisaKirisame Mar 1, 2019
cc10159
support MXNet _minimum and _maximum (#2709)
haojin2 Mar 1, 2019
851fbbd
[TOPI][Relay] Fix default `out_dtype` for `conv2d_NCHWc` and Relay (#…
eqy Mar 1, 2019
240910c
Improve task_lint.sh robustness (#2711)
mshawcroft Mar 1, 2019
548dcd9
Docker build script robustness (#2710)
mshawcroft Mar 1, 2019
e2ec7bd
[Doc] Relay tutorial - Deploy the Pretrained Model on Raspberry Pi (#…
makihiro Mar 1, 2019
6fae462
Defined a common base class for TensorComputeOp and ComputeOp (#2587)
derisavi Mar 1, 2019
70b6687
[Relay/TOPI][Op] Add batch_matmul in relay and TOPI (#2561)
icemelon Mar 1, 2019
a681e06
[ARITH] Analyzer Infra, ConstIntBound, Modular (#2668)
tqchen Mar 2, 2019
f8b3ccf
[EXPR] ir_operator.h->expr_operator.h Centralize const folder logic (…
tqchen Mar 3, 2019
be84194
[RELAY][PASS] Common subexpression elimination (#2639)
vinx13 Mar 3, 2019
215aedb
[Tensorflow, NNVM, TOPI] Support for logical operators (#2453)
ashutoshparkhi Mar 3, 2019
76e83df
[Relay][Frontend] Add a few mxnet ops in relay frontend (#2704)
icemelon Mar 3, 2019
674d9aa
[Relay][Frontend] Add slice axis op in mxnet converter (#2706)
icemelon Mar 4, 2019
8ce998e
[DOCS] Fix tutorial (#2724)
imorinaga Mar 4, 2019
1f04aed
[Relay] Higher order reverse mode automatic differentiation that work…
MarisaKirisame Mar 4, 2019
f63975f
Fix compilation on XCode 10 (#2731)
ajtulloch Mar 4, 2019
1e2dc64
[DOCKER] Pin pylint==1.9.4 (#2727)
mshawcroft Mar 4, 2019
ccfe87d
Docs: pip dependencies for testing (#2728)
ruslo Mar 4, 2019
e4c2fc6
[COMMUNITY] @sgrechanik-h -> Reviewer (#2732)
ZihengJiang Mar 5, 2019
bcce07d
use LLVM linker (#2713)
mnboos Mar 5, 2019
6988c4d
[RELAY][OP] Faster-RCNN Proposal OP (#2725)
vinx13 Mar 5, 2019
638e7e6
[Relay][Frontend][Bugfix] Fix bug in converting slice_axis when axis …
icemelon Mar 5, 2019
8b99056
[VERSION] Update to 0.6.dev (#2736)
ZihengJiang Mar 6, 2019
29e0d2d
[Relay][TOPI][OP] intel_graphics conv2d alterlayout support relay, ad…
Laurawly Mar 6, 2019
8d1032f
[RUNTIME][OPENCL] clFinish before releasing memory (#2737)
kazum Mar 7, 2019
17100df
[Bugfix][Relay][Frontend] Fix bug in mxnet converter for slick_like (…
icemelon Mar 7, 2019
9547cbb
Improve NNVM to Relay conversion (#2734)
kazum Mar 8, 2019
6dbd2d7
[Relay] Add logical operators (#2743)
Mar 9, 2019
be89cc1
Fix vmlal.s16 code generation for int8 x int8 -> int32 (#2748)
ajtulloch Mar 9, 2019
90197ba
revert PR#2420 nms changes (#2747)
Laurawly Mar 9, 2019
534818c
[Relay][Quantization] Speed-aware quantization scheme improvement (#2…
vinx13 Mar 9, 2019
829c179
[RUNTIME][OPENCL] set type_key even when platform is not available (#…
kazum Mar 9, 2019
274c401
[DLPACK] fix flaky ctypes support (#2759)
tqchen Mar 9, 2019
daf9e80
Improvements to the conda build (#2742)
Mar 9, 2019
6f94a1a
[COMMUNITY] @kevinthesun -> committer (#2760)
tqchen Mar 10, 2019
f197307
[WIN] Fix a bug in find_llvm when specify llvm-config (#2758)
leeexyz Mar 10, 2019
0c343c2
fix typo in backend interpreter (#2752)
MarisaKirisame Mar 10, 2019
c8eb7d9
[ARITH] Analyzer RewriteSimplifier: add/sub/mul/div/mod (#2722)
tqchen Mar 10, 2019
c96bd9a
Add the new logical operators to the doc. (#2761)
Mar 10, 2019
2fb9f51
update relay python api doc (#2766)
yongwww Mar 11, 2019
145698e
[Relay/TOPI][Frontend] Add tile and repeat operators in Relay and TOP…
Laurawly Mar 11, 2019
d3a8aa9
[relay][frontend] TensorFlow saved model support (#2586)
yongwww Mar 11, 2019
5aa6faa
[Object Detection] Gluoncv SSD support on CPU (#2353)
kevinthesun Mar 11, 2019
0128af8
Implement flop support for int8 models (#2776)
ajtulloch Mar 11, 2019
cc12f7d
[Relay] Improve more operator mxnet frontend importer (#2772)
oovm Mar 11, 2019
3 changes: 3 additions & 0 deletions CONTRIBUTORS.md
@@ -19,10 +19,12 @@ We do encourage everyone to work anything they are interested in.
- [Yizhi Liu](https://github.com/yzhliu) (PMC): @yzhliu - jvm, topi, relay
- [Masahiro Masuda](https://github.com/masahi): @masahi - topi, relay
- [Thierry Moreau](https://github.com/tmoreau89) (PMC): @tmoreau89 - vta
- [Jared Roesch](https://github.com/jroesch): @jroesch - relay
- [Siva](https://github.com/srkreddy1238): @srkreddy1238 - frontends, golang
- [Haichen Shen](https://github.com/icemelon9) (PMC): @icemelon9 - relay, topi
- [Zhixun Tan](https://github.com/phisiart): @phisiart - opengl, web
- [Leyuan Wang](https://github.com/Laurawly): @Laurawly: - topi
- [Yao Wang](https://github.com/kevinthesun): @kevinthesun: - topi, vision
- [Eddie Yan](https://github.com/eqy): @eqy - runtime, autotvm, rpc, topi
- [Lianmin Zheng](https://github.com/merrymercy) (PMC): @merrymercy - autotvm, topi, relay

@@ -32,6 +34,7 @@ We do encourage everyone to work anything they are interested in.
- [Tianqi Chen](https://github.com/tqchen): @tqchen
- [Liangfu Chen](https://github.com/liangfu): @liangfu
- [Zhi Chen](https://github.com/zhiics): @zhiics
- [Sergei Grechanik](https://github.com/sgrechanik-h): @sgrechanik-h
- [Nick Hynes](https://github.com/nhynes): @nhynes
- [Yuwei Hu](https://github.com/Huyuwei): @Huyuwei
- [Yizhi Liu](https://github.com/yzhliu) : @yzhliu
2 changes: 1 addition & 1 deletion apps/extension/Makefile
@@ -6,7 +6,7 @@ PKG_CFLAGS = -std=c++11 -O2 -fPIC\
-I${TVM_ROOT}/3rdparty/dlpack/include\
-I${TVM_ROOT}/3rdparty/HalideIR/src

PKG_LDFLAGS =-L${TVM_ROOT}/lib
PKG_LDFLAGS =-L${TVM_ROOT}/build
UNAME_S := $(shell uname -s)

ifeq ($(UNAME_S), Darwin)
29 changes: 28 additions & 1 deletion apps/extension/python/tvm_ext/__init__.py
@@ -31,7 +31,7 @@ def __init__(self, handle):
def __del__(self):
# You can also call your own customized
# deleter if you can free it via your own FFI.
tvm.nd.free_extension_handle(self.handle, 17)
tvm.nd.free_extension_handle(self.handle, self.__class__._tvm_tcode)

@property
def _tvm_handle(self):
@@ -42,3 +42,30 @@ def __getitem__(self, idx):

# Register IntVec extension on python side.
tvm.register_extension(IntVec, IntVec)


nd_create = tvm.get_global_func("tvm_ext.nd_create")
nd_add_two = tvm.get_global_func("tvm_ext.nd_add_two")
nd_get_addtional_info = tvm.get_global_func("tvm_ext.nd_get_addtional_info")

class NDSubClass(tvm.nd.NDArrayBase):
"""Example for subclassing TVM's NDArray infrastructure.

    By inheriting TVM's NDArray, external libraries could
leverage TVM's FFI without any modification.
"""
# Should be consistent with the type-trait set in the backend
_array_type_code = 1

@staticmethod
def create(addtional_info):
return nd_create(addtional_info)

@property
def addtional_info(self):
return nd_get_addtional_info(self)

def __add__(self, other):
return nd_add_two(self, other)

tvm.register_extension(NDSubClass, NDSubClass)
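
For context, a minimal usage sketch of the subclass defined above (illustrative only: it assumes the tvm_ext extension library added in this PR has been built and is importable, and it mirrors the test added in apps/extension/tests/test_ext.py further down):

import tvm_ext

# Each NDSubClass carries an extra integer on the C++ side
# (SubContainer::addtional_info_, spelling as in the diff).
a = tvm_ext.NDSubClass.create(addtional_info=3)
b = tvm_ext.NDSubClass.create(addtional_info=5)

# __add__ dispatches to the packed function "tvm_ext.nd_add_two",
# which returns a new NDSubClass whose metadata is the sum of the operands'.
c = a + b
assert a.addtional_info == 3
assert c.addtional_info == 8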
85 changes: 84 additions & 1 deletion apps/extension/src/tvm_ext.cc
@@ -7,24 +7,87 @@
#include <tvm/runtime/packed_func.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/registry.h>
#include <tvm/runtime/ndarray.h>
#include <tvm/packed_func_ext.h>
#include <tvm/runtime/device_api.h>

namespace tvm_ext {
using IntVector = std::vector<int>;
class NDSubClass;
} // namespace tvm_ext

namespace tvm {
namespace runtime {
template<>
struct extension_class_info<tvm_ext::IntVector> {
struct extension_type_info<tvm_ext::IntVector> {
static const int code = 17;
};
template<>
struct array_type_info<tvm_ext::NDSubClass> {
static const int code = 1;
};
} // namespace tvm
} // namespace runtime

using namespace tvm;
using namespace tvm::runtime;

namespace tvm_ext {
/*!
* \brief A subclass of TVM's NDArray.
*
* To use this extension, an external library should
*
* 1) Inherit TVM's NDArray and NDArray container,
* and define the trait `array_type_info` for this class.
*
* 2) Define a constructor in the inherited class that accepts
* a pointer to TVM's Container, which is nullable.
*
* 3) On Python frontend, inherit `tvm.nd.NDArrayBase`,
* define the class attribute `_array_type_code` consistent to
* the C++ type trait, and register the subclass using `tvm.register_extension`.
*/
class NDSubClass : public tvm::runtime::NDArray {
public:
class SubContainer : public NDArray::Container {
public:
SubContainer(int addtional_info) :
addtional_info_(addtional_info) {
array_type_code_ = array_type_info<NDSubClass>::code;
}
static bool Is(NDArray::Container *container) {
SubContainer *c = static_cast<SubContainer*>(container);
return c->array_type_code_ == array_type_info<NDSubClass>::code;
}
int addtional_info_{0};
};
NDSubClass(NDArray::Container *container) {
if (container == nullptr) {
data_ = nullptr;
return;
}
CHECK(SubContainer::Is(container));
container->IncRef();
data_ = container;
}
~NDSubClass() {
this->reset();
}
NDSubClass AddWith(const NDSubClass &other) const {
SubContainer *a = static_cast<SubContainer*>(data_);
SubContainer *b = static_cast<SubContainer*>(other.data_);
CHECK(a != nullptr && b != nullptr);
return NDSubClass(new SubContainer(a->addtional_info_ + b->addtional_info_));
}
int get_additional_info() const {
SubContainer *self = static_cast<SubContainer*>(data_);
CHECK(self != nullptr);
return self->addtional_info_;
}
};
} // namespace tvm_ext

namespace tvm_ext {

TVM_REGISTER_EXT_TYPE(IntVector);
@@ -64,6 +127,26 @@ TVM_REGISTER_GLOBAL("device_api.ext_dev")
.set_body([](TVMArgs args, TVMRetValue *rv) {
*rv = (*tvm::runtime::Registry::Get("device_api.cpu"))();
});

TVM_REGISTER_GLOBAL("tvm_ext.nd_create")
.set_body([](TVMArgs args, TVMRetValue *rv) {
int addtional_info = args[0];
*rv = NDSubClass(new NDSubClass::SubContainer(addtional_info));
});

TVM_REGISTER_GLOBAL("tvm_ext.nd_add_two")
.set_body([](TVMArgs args, TVMRetValue *rv) {
NDSubClass a = args[0];
NDSubClass b = args[1];
*rv = a.AddWith(b);
});

TVM_REGISTER_GLOBAL("tvm_ext.nd_get_addtional_info")
.set_body([](TVMArgs args, TVMRetValue *rv) {
NDSubClass a = args[0];
*rv = a.get_additional_info();
});

} // namespace tvm_ext

// External function exposed to runtime.
16 changes: 16 additions & 0 deletions apps/extension/tests/test_ext.py
@@ -32,6 +32,7 @@ def test_sym_add():
c = tvm_ext.sym_add(a, b)
assert c.a == a and c.b == b


def test_ext_vec():
ivec = tvm_ext.ivec_create(1, 2, 3)
assert(isinstance(ivec, tvm_ext.IntVec))
@@ -44,6 +45,7 @@ def ivec_cb(v2):

tvm.convert(ivec_cb)(ivec)


def test_extract_ext():
fdict = tvm.extract_ext_funcs(tvm_ext._LIB.TVMExtDeclare)
assert fdict["mul"](3, 4) == 12
@@ -68,7 +70,21 @@ def check_llvm():
check_llvm()


def test_nd_subclass():
a = tvm_ext.NDSubClass.create(addtional_info=3)
b = tvm_ext.NDSubClass.create(addtional_info=5)
c = a + b
d = a + a
e = b + b
assert(a.addtional_info == 3)
assert(b.addtional_info == 5)
assert(c.addtional_info == 8)
assert(d.addtional_info == 6)
assert(e.addtional_info == 10)


if __name__ == "__main__":
test_nd_subclass()
test_extern_call()
test_ext_dev()
test_ext_vec()
5 changes: 3 additions & 2 deletions cmake/util/FindLLVM.cmake
@@ -37,8 +37,9 @@ macro(find_llvm use_llvm)
execute_process(COMMAND ${LLVM_CONFIG} --cxxflags
OUTPUT_VARIABLE __llvm_cxxflags)
execute_process(COMMAND ${LLVM_CONFIG} --version
COMMAND cut -b 1,3
OUTPUT_VARIABLE TVM_LLVM_VERSION)
OUTPUT_VARIABLE __llvm_version)
# llvm version
string(REGEX REPLACE "^([^.]+)\.([^.])+\.[^.]+.*$" "\\1\\2" TVM_LLVM_VERSION ${__llvm_version})
# definitions
string(REGEX MATCHALL "(^| )-D[A-Za-z0-9_]*" LLVM_DEFINITIONS ${__llvm_cxxflags})
# include dir
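As an aside, a small Python sketch (illustrative only, not part of the diff) of what the new REGEX REPLACE in FindLLVM.cmake computes: it derives TVM_LLVM_VERSION by concatenating the major version with the last character of the minor version, entirely inside CMake instead of piping the llvm-config output through the external cut -b 1,3 command:

import re

# Same pattern as the CMake REGEX REPLACE above: keep the major version and
# the last character of the minor version; drop the patch level and any suffix.
pattern = r"^([^.]+)\.([^.])+\.[^.]+.*$"

for raw in ["6.0.0", "7.0.1svn", "10.0.0"]:
    print(re.sub(pattern, r"\1\2", raw))
# prints: 60, 70, 100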
20 changes: 20 additions & 0 deletions conda/cross-linux.cmake
@@ -0,0 +1,20 @@
# this one is important
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_PLATFORM Linux)
#this one not so much
set(CMAKE_SYSTEM_VERSION 1)

# specify the cross compiler
set(CMAKE_C_COMPILER $ENV{CC})

# where is the target environment
set(CMAKE_FIND_ROOT_PATH $ENV{PREFIX} $ENV{BUILD_PREFIX}/$ENV{HOST}/sysroot)

# search for programs in the build host directories
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
# for libraries and headers in the target directories
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)

# god-awful hack because it seems to not run correct tests to determine this:
set(__CHAR_UNSIGNED___EXITCODE 1)
4 changes: 2 additions & 2 deletions conda/nnvm/meta.yaml
@@ -1,4 +1,4 @@
{% set version = "0.5.dev" %}
{% set version = "0.6.dev" %}

package:
name: nnvm
@@ -8,7 +8,7 @@ source:
path: ../..

build:
number: 1
number: 0
skip: True # [win]

requirements:
4 changes: 2 additions & 2 deletions conda/topi/meta.yaml
@@ -1,4 +1,4 @@
{% set version = "0.5.dev" %}
{% set version = "0.6.dev" %}

package:
name: topi
@@ -8,7 +8,7 @@ source:
path: ../..

build:
number: 1
number: 0

requirements:
host:
26 changes: 23 additions & 3 deletions conda/tvm-libs/build.sh
@@ -1,5 +1,9 @@
#!/bin/bash

# Fix for OSX build to hide the clang LLVM
rm -f ${BUILD_PREFIX}/bin/llvm-config
rm -rf ${BUILD_PREFIX}/lib/cmake

set -e

if [ -z "$PREFIX" ]; then
@@ -9,13 +13,29 @@ fi
if [ -z "$cuda" ] || [ "$cuda" == "False" ]; then
CUDA_OPT=""
else
CUDA_OPT="-DUSE_CUDA=ON"
CUDA_OPT="-DUSE_CUDA=ON -DUSE_CUBLAS=ON"
fi

if [ "$target_platform" == "osx-64" ]; then
# macOS 64 bits
METAL_OPT="" # Conda can only target 10.9 for now
TOOLCHAIN_OPT=""
else
METAL_OPT=""
if [ "$target_platform" == "linux-64" ]; then
# Linux 64 bits
TOOLCHAIN_OPT="-DCMAKE_TOOLCHAIN_FILE=${RECIPE_DIR}/../cross-linux.cmake"
else
# Windows (or 32 bits, which we don't support)
METAL_OPT=""
TOOLCHAIN_OPT=""
fi
fi

rm -rf build || true
mkdir -p build
cd build
cmake $CUDA_OPT -DUSE_LLVM=ON -DINSTALL_DEV=ON -DCMAKE_INSTALL_PREFIX="$PREFIX" ..
make -j4 VERBOSE=1
cmake $METAL_OPT $CUDA_OPT -DUSE_LLVM=ON -DINSTALL_DEV=ON -DCMAKE_INSTALL_PREFIX="$PREFIX" $TOOLCHAIN_OPT ..
make -j${CPU_COUNT} VERBOSE=1
make install
cd ..
16 changes: 6 additions & 10 deletions conda/tvm-libs/meta.yaml
@@ -1,4 +1,4 @@
{% set version = "0.5.dev" %}
{% set version = "0.6.dev" %}

package:
name: tvm-libs
@@ -8,21 +8,17 @@ source:
path: ../..

build:
number: 1
number: 0
string: cuda{{ cuda_version }}_{{ PKG_BUILDNUM }} # [cuda]

requirements:
build:
- {{ compiler('cxx') }} # [linux]
- llvmdev ==6.0.0 # [osx]
host:
# The OS X build will require some manual setup or it will break
# See https://conda.io/docs/user-guide/tasks/build-packages/compiler-tools.html#macos-sdk
# It is also ass-backward because of llvm brokeness when mixed with the
# conda OS X compiler
- {{ compiler('cxx') }} # [osx]
# See https://docs.conda.io/projects/conda-build/en/latest/source/resources/compiler-tools.html#macos-sdk
- {{ compiler('cxx') }}
host:
- cmake
- llvmdev ==6.0.0 # [linux]
- llvmdev ==6.0.0
- zlib # [linux]
run:
- {{ pin_compatible('cudatoolkit', lower_bound=cuda_version, max_pin='x.x') }} # [cuda]
4 changes: 2 additions & 2 deletions conda/tvm/meta.yaml
@@ -1,4 +1,4 @@
{% set version = "0.5.dev" %}
{% set version = "0.6.dev" %}

package:
name: tvm
@@ -8,7 +8,7 @@ source:
path: ../..

build:
number: 1
number: 0

requirements:
build:
2 changes: 1 addition & 1 deletion docker/Dockerfile.ci_gpu
@@ -24,7 +24,7 @@ COPY install/ubuntu_install_sphinx.sh /install/ubuntu_install_sphinx.sh
RUN bash /install/ubuntu_install_sphinx.sh

# Fix recommonmark to latest version
RUN git clone https://github.com/rtfd/recommonmark
RUN git clone --depth=1 https://github.com/rtfd/recommonmark
RUN cd recommonmark; python3 setup.py install

# Enable doxygen for c++ doc build
2 changes: 1 addition & 1 deletion docker/Dockerfile.ci_lint
@@ -6,4 +6,4 @@ RUN apt-get update && apt-get install -y sudo wget
COPY install/ubuntu_install_python.sh /install/ubuntu_install_python.sh
RUN bash /install/ubuntu_install_python.sh
RUN apt-get install -y doxygen graphviz
RUN pip3 install cpplint pylint mypy
RUN pip3 install cpplint pylint==1.9.4 mypy