
[Runtime] MISRA-C compliant TVM runtime #3934

Merged (34 commits) on Mar 9, 2020
Conversation

liangfu (Member) commented Sep 11, 2019

This PR implements the MISRA-C compliant TVM runtime proposed in #3159.

See timing improvements in the following table:

|                  | Create  | SetInput | Run     | GetOutput | Destroy |
|------------------|---------|----------|---------|-----------|---------|
| Existing Runtime | 4.75 ms | 0.50 ms  | 8.17 ms | 0.01 ms   | 0.27 ms |
| Proposed Runtime | 3.40 ms | 0.25 ms  | 7.04 ms | 0.00 ms   | 0.30 ms |

@tqchen @ajtulloch @nhynes Please review.

tqchen (Member) commented Sep 13, 2019

Also cc @ajtulloch. Now that we have quite a few runtimes, I wonder if it makes sense to consolidate some of them; for example, I do think it makes sense to make the uTVM runtime MISRA-C compliant.

liangfu (Member, Author) commented Sep 21, 2019

This PR also contains a rewritten JSON parser and NDArray reader, which are relatively independent and concise. I would like to propose this as a replacement for picojson, which is currently used as a third-party dependency (introduced in #3567). Does this make sense?
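As a rough illustration of the kind of dependency-free parsing being proposed, here is a minimal hand-rolled token reader in C. This is a sketch only; `json_next_string` is a hypothetical helper written for this explanation, not the PR's actual parser.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative sketch: extract the next double-quoted token from a JSON
 * buffer with no external dependencies, in the spirit of replacing
 * picojson with hand-rolled C. Hypothetical code, not the PR's parser. */
static int json_next_string(const char** cursor, char* out, size_t out_size) {
  const char* p = strchr(*cursor, '"');
  if (p == NULL) return -1;           /* no opening quote */
  const char* q = strchr(p + 1, '"');
  if (q == NULL) return -1;           /* no closing quote */
  size_t len = (size_t)(q - p - 1);
  if (len + 1 > out_size) return -1;  /* output buffer too small */
  memcpy(out, p + 1, len);
  out[len] = '\0';
  *cursor = q + 1;                    /* advance past the token */
  return 0;                           /* zero on success, C-API style */
}
```

A caller would repeatedly invoke it with a cursor into the graph JSON; real graph parsing of course also needs numbers, arrays, and nesting.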

ajtulloch (Contributor)

@liangfu Absolutely; I believe this would reduce code size substantially as well. I suspect it would make sense to remove the 'uTVM runtime' and replace it with the MISRA-C one, as long as you're comfortable with code size being an important consideration in the development of the MISRA-C one?

liangfu changed the title from [Runtime] MISRA-C compliant TVM runtime to [WIP][Runtime] MISRA-C compliant TVM runtime on Sep 27, 2019
tqchen (Member) commented Oct 14, 2019

@liangfu @ajtulloch It would be great if we can follow up on this topic, e.g. compare the standalone uTVM and MISRA-C runtimes, and review this part of the proposed code.

One related question: is it still OK to use C++? I see most of the code is implemented in pure C, which is fine, but perhaps we could still benefit from namespaces and simple classes (no virtual methods).

liangfu (Member, Author) commented Oct 14, 2019

The major intention in implementing this in pure C is to maximize portability. (Some vendors don't offer a C++ compiler for their vision processors, partly for the sake of MISRA-C compliance, e.g. the Vision SDK from TI.) On the other hand, this reduces binary size at the cost of flexibility.

tqchen (Member) commented Oct 14, 2019

OK, I think we could go with the C API; just remember to keep the naming style consistent with the current C API in c_runtime_api.h.
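For reference, the convention in question can be sketched as follows: the existing public C API in c_runtime_api.h uses a TVM prefix, CamelCase names, and int status codes where zero means success. `TVMDemoCreate` below is a hypothetical name that only illustrates the style, not a real TVM symbol.

```c
#include <assert.h>

/* Hypothetical handle type, named to match the TVM C-API style. */
typedef struct { int initialized; } TVMDemoHandle;

/* Functions carry the TVM prefix and return an int status code:
 * 0 on success, non-zero on failure, as in c_runtime_api.h. */
int TVMDemoCreate(TVMDemoHandle* handle) {
  if (handle == 0) return -1;  /* non-zero signals failure */
  handle->initialized = 1;
  return 0;                    /* zero signals success */
}
```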

liangfu (Member, Author) commented Oct 14, 2019

Sure, I will update the naming accordingly and add test cases as well.

tqchen (Member) commented Feb 6, 2020

Ping @liangfu: please see if you are interested in continuing to push this thread.

liangfu (Member, Author) commented Feb 6, 2020

@tqchen Sure, I can continue to push this thread, although it has been suspended for a while.

liangfu (Member, Author) commented Feb 15, 2020

@tqchen For now, the implementation reproduces apps/bundle_deploy with the complete runtime implemented in pure C. Note that the build_model.py script generates apps/bundle_deploy_c/build/bridge.c to load functions from the compiled object (model.o), so that the runtime can load all the compiled functions. In addition, to keep the pure C code object-oriented in style, an additional underscore is used in member functions for structs.

Please review.
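The underscore convention described above can be sketched like this: in pure C, a struct's "methods" become free functions named Type_Method(Type* self, ...), optionally also exposed as function-pointer members so that call sites read like OOP. The names here are hypothetical illustrations, not the PR's actual types.

```c
#include <assert.h>

typedef struct Counter Counter;
struct Counter {
  int value;
  void (*Increment)(Counter* self);  /* "method" stored as a function pointer */
};

/* Member function: Type name, underscore, method name, self first. */
static void Counter_Increment(Counter* self) { self->value += 1; }

static void Counter_Init(Counter* self) {
  self->value = 0;
  self->Increment = Counter_Increment;  /* wire the method pointer */
}
```

With this layout a call site can write either `Counter_Increment(&c)` or `c.Increment(&c)`, keeping the C code readable to readers used to C++ classes.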

liangfu requested a review from tqchen on February 15, 2020
ajtulloch (Contributor)

This looks really neat, well done @liangfu.

liangfu (Member, Author) commented Feb 19, 2020

@ajtulloch Thanks for the comment. In addition, I was also trying to integrate this with your implementation of the standalone uTVM runtime. Hopefully, we can then remove picojson as an external dependency entirely.

liangfu changed the title from [WIP][Runtime] MISRA-C compliant TVM runtime to [Runtime] MISRA-C compliant TVM runtime on Feb 20, 2020
liangfu (Member, Author) commented Mar 5, 2020

> Also, is this MISRA-C compliant runtime going to work on 32-bit and 64-bit systems?

I have tested this on both a 32-bit ARM board and a 64-bit Linux PC; they produce exactly the same result.

> One additional suggestion I had was to test that the demo builds and runs in CI.

Nice suggestion! I've made this run in CI.

Please take another look.

liangfu added 2 commits March 5, 2020 15:57
tmoreau89 (Contributor)

Excellent, thanks for adding this to CI so quickly! I was able to reproduce the demo by typing `make demo`; it ran successfully for the most part, but I got an illegal-instruction error at the end:

python3 build_model.py -o build
INFO:root:Model file not found. Downloading to /Users/moreau/.mxnet/models/mobilenet0.25-9f83e440.params.
Downloading /Users/moreau/.mxnet/models/mobilenet0.25-9f83e440.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
INFO:autotvm:Download pre-tuned parameters package from https://raw.githubusercontent.com/uwsampl/tvm-distro/master/tophub/llvm_v0.04.log
...100%, 0.02 MB, 121 KB/s, 0 seconds passed
INFO:compile_engine:Use implementation injective.cpu for op add
INFO:compile_engine:Use implementation injective.cpu for op sqrt
INFO:compile_engine:Use implementation injective.cpu for op divide
INFO:compile_engine:Use implementation injective.cpu for op multiply
... (similar injective.cpu INFO lines repeated for every layer) ...
WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 3, 224, 224), 'float32'), ('TENSOR', (8, 3, 3, 3), 'float32'), (2, 2), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
... (similar fallback warnings for each conv2d_NCHWc.x86 / depthwise_conv2d_NCHWc.x86 workload, and one for dense_nopack.x86) ...
INFO:compile_engine:Use implementation conv2d_NCHWc.x86 for op nn.contrib_conv2d_NCHWc
INFO:compile_engine:Use implementation depthwise_conv2d_NCHWc.x86 for op nn.contrib_depthwise_conv2d_NCHWc
INFO:compile_engine:Use implementation dense_nopack.x86 for op nn.dense
INFO:compile_engine:Use implementation softmax.cpu for op nn.softmax
INFO:compile_engine:Use implementation adaptive_pool.cpu for op nn.global_avg_pool2d
... (remaining per-layer INFO lines elided) ...
Downloading from url https://homes.cs.washington.edu/~moreau/media/vta/cat.jpg to /Users/moreau/Documents/Projects/tvm-misra/apps/bundle_deploy/build/cat.png
...100%, 0.12 MB, 2633 KB/s, 0 seconds passed
x (1, 3, 224, 224)
xxd -i build/graph.json  > build/graph.json.c
xxd -i build/params.bin  > build/params.bin.c
g++ -std=c++14 -O2 -fPIC -I/Users/moreau/Documents/Projects/tvm-misra/include -I/Users/moreau/Documents/Projects/tvm-misra/3rdparty/dmlc-core/include -I/Users/moreau/Documents/Projects/tvm-misra/3rdparty/dlpack/include -o build/demo  demo.cc -ldl
g++ -shared -std=c++14 -O2 -fPIC -I/Users/moreau/Documents/Projects/tvm-misra/include -I/Users/moreau/Documents/Projects/tvm-misra/3rdparty/dmlc-core/include -I/Users/moreau/Documents/Projects/tvm-misra/3rdparty/dlpack/include -fvisibility=hidden -o build/bundle.so  bundle.cc runtime.cc build/model.o -pthread
gcc -shared -std=c99 -O2 -fPIC -I/Users/moreau/Documents/Projects/tvm-misra/include -I/Users/moreau/Documents/Projects/tvm-misra/3rdparty/dmlc-core/include -I/Users/moreau/Documents/Projects/tvm-misra/3rdparty/dlpack/include -fvisibility=hidden -o build/bundle_c.so  bundle.c runtime.c build/model.o -pthread
In file included from runtime.c:47:
In file included from ./../../src/runtime/crt/crt_runtime_api.c:28:
In file included from ./../../src/runtime/crt/graph_runtime.h:31:
./../../src/runtime/crt/packed_func.h:105:3: warning: redefinition of typedef 'TVMPackedFunc' is a C11 feature [-Wtypedef-redefinition]
} TVMPackedFunc;
  ^
./../../src/runtime/crt/module.h:31:30: note: previous definition is here
typedef struct TVMPackedFunc TVMPackedFunc;
                             ^
In file included from runtime.c:47:
./../../src/runtime/crt/crt_runtime_api.c:82:12: warning: expression result unused [-Wunused-value]
    status -1;
    ~~~~~~ ^~
In file included from runtime.c:48:
./../../src/runtime/crt/crt_backend_api.c:55:82: warning: format string is not a string literal (potentially insecure) [-Wformat-security]
  snprintf(g_fexecs[g_fexecs_count].name, sizeof(g_fexecs[g_fexecs_count].name), name);
                                                                                 ^~~~
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/secure/_stdio.h:57:62: note: expanded from macro 'snprintf'
  __builtin___snprintf_chk (str, len, 0, __darwin_obsz(str), __VA_ARGS__)
                                                             ^~~~~~~~~~~
./../../src/runtime/crt/crt_backend_api.c:55:82: note: treat the string as an argument to avoid this
  snprintf(g_fexecs[g_fexecs_count].name, sizeof(g_fexecs[g_fexecs_count].name), name);
                                                                                 ^
                                                                                 "%s",
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/secure/_stdio.h:57:62: note: expanded from macro 'snprintf'
  __builtin___snprintf_chk (str, len, 0, __darwin_obsz(str), __VA_ARGS__)
                                                             ^
In file included from runtime.c:49:
./../../src/runtime/crt/graph_runtime.c:542:84: warning: format specifies type 'int' but the argument has type 'size_t' (aka 'unsigned long') [-Wformat]
      fprintf(stderr, "fail to create for node with idx=%d, storage_id=%d\n", idx, storage_id);
                                                                       ~~          ^~~~~~~~~~
                                                                       %zu
In file included from runtime.c:51:
./../../src/runtime/crt/ndarray.c:95:13: warning: format specifies type 'long' but the argument has type 'int64_t' (aka 'long long') [-Wformat]
            data_byte_size, (num_elems * elem_bytes));
            ^~~~~~~~~~~~~~
./../../src/runtime/crt/ndarray.c:95:29: warning: format specifies type 'long' but the argument has type 'long long' [-Wformat]
            data_byte_size, (num_elems * elem_bytes));
                            ^~~~~~~~~~~~~~~~~~~~~~~~
6 warnings generated.
build/demo build/bundle.so build/cat.bin
The maximum position in output vector is: 278, with max-value 0.613490.
timing: 5.07 ms (create), 0.74 ms (set_input), 3.60 ms (run), 0.01 ms (get_output), 0.10 ms (destroy)
build/demo build/bundle_c.so build/cat.bin
make: *** [demo] Illegal instruction: 4
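Incidentally, among the warnings in the log above, the -Wunused-value one on `status -1;` looks like a missing assignment operator: the expression computes `status` minus one and then discards the result, so the error code is never recorded. A minimal illustration of the likely fix (hypothetical code written for this explanation, not the PR's actual source):

```c
#include <assert.h>

/* Buggy form, as flagged by -Wunused-value in the log above:
 *     status -1;    // expression result unused; status stays 0
 * Likely intended form: */
static int report_error_fixed(void) {
  int status = 0;
  status = -1;   /* the assignment the compiler warning is hinting at */
  return status;
}
```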

tmoreau89 (Contributor)

Also @u99127 and @weberlo may be interested in this PR

tqchen (Member) commented Mar 7, 2020

@liangfu The end-to-end code cannot run as part of the unit tests because the CPU CI environment does not have mxnet installed. Let us create a simple function (e.g. element-wise add) that exercises the runtime as the test function.

tmoreau89 (Contributor)

Ah, could we update the Dockerfile to install mxnet? Or would that be too wasteful in terms of CPU cycles?

tqchen (Member) commented Mar 7, 2020

In terms of testing the runtime, a simple add-one example would be sufficient.
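A hedged sketch of the kind of minimal, mxnet-free check being suggested: run an element-wise add-one over a small buffer and compare against expected values. The actual test added to the PR may differ; this only shows the shape of such a check.

```c
#include <assert.h>

/* Element-wise add-one over a float buffer: the simplest kernel that
 * still exercises input, compute, and output paths of a runtime. */
static void add_one(const float* in, float* out, int n) {
  for (int i = 0; i < n; ++i) {
    out[i] = in[i] + 1.0f;
  }
}
```

In the real test, the kernel would be compiled by TVM and invoked through the runtime rather than called directly, but the pass/fail criterion is the same buffer comparison.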

tqchen (Member) commented Mar 8, 2020

@liangfu It would be great if you can update this (use simple testing, as in the other all-in-one deployment examples, that does not depend on mxnet) :)

liangfu (Member, Author) commented Mar 8, 2020

Sure.

liangfu added 2 commits March 9, 2020 11:47
liangfu (Member, Author) commented Mar 9, 2020

@tmoreau89 @tqchen A simple test case has been added, and it passes. Please take another look.

tmoreau89 (Contributor)

Thank you @liangfu; this PR is pending @tqchen's approval.

tqchen merged commit 450f716 into apache:master on Mar 9, 2020
tqchen (Member) commented Mar 9, 2020

Thanks @liangfu @tmoreau89 @ajtulloch , this is now merged!

mehrdadh (Member)

Great work. @liangfu, have you considered using the "system lib" approach, since dlopen is banned in some environments?
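The "system lib" idea can be sketched as follows: instead of resolving compiled functions at runtime with dlopen/dlsym, symbols are linked into the binary and registered into a static table at startup, so no dynamic loader is needed. The registry below is a hypothetical illustration of that pattern, not TVM's implementation.

```c
#include <assert.h>
#include <string.h>

typedef int (*PackedCFunc)(int);

/* Static registry filled at link/startup time instead of dlopen at runtime. */
static struct { const char* name; PackedCFunc func; } g_registry[8];
static int g_registry_count = 0;

static void SystemLibRegister(const char* name, PackedCFunc func) {
  g_registry[g_registry_count].name = name;
  g_registry[g_registry_count].func = func;
  ++g_registry_count;
}

static PackedCFunc SystemLibLookup(const char* name) {
  for (int i = 0; i < g_registry_count; ++i) {
    if (strcmp(g_registry[i].name, name) == 0) return g_registry[i].func;
  }
  return 0;  /* not found */
}

/* A stand-in for a compiled model function that would be linked in. */
static int add_one(int x) { return x + 1; }
```

Lookup then happens by name against the table, exactly as dlsym would, but with everything resolved statically, which suits environments where dlopen is prohibited.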

mikeseven (Contributor)

@liangfu awesome work, very useful to our use-cases.

trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Apr 16, 2020
* implement of MISRA-C compliant TVM runtime;

* working on bundle_deploy_c demo

* move header files into include dir

* fix compatibility issues

* fix compatibility issues

* resolve most of the warnings and errors

* implement c_backend_api

* introduce bridge

* working well

* move to header files and bundle.c into src/runtime/crt

* clean up

* satisfy linter

* clean up

* test with the cat image

* remove synset

* refactoring

* refactoring

* refactoring

* initial crt_runtime_api.c

* improved compatibility with g++

* using exposed API in c_runtime_api.h

* call from c_runtime_api.h

* clean up

* lint

* merge into apps/bundle_deploy directory

Change-Id: I51904db81b8589e65d107d8ca77b47452e3812b5

* make the demo runs in ci

Change-Id: I2c24f8b592508833d3555311c2b24d1931f19385

* address review comments

Change-Id: I027ddff15c31fb4da0bd0e461427dce619de1f93

* release

Change-Id: I5ad5bb8426468aac9fc8d074e56ddea358a7fd91

* fix ci testing

Change-Id: Ic2e82fb3051b6c254ef32a964f976b61e3e5fe4d

* add test case for misra c runtime

Change-Id: Ie0dfd0ade6be4665b4384db7d260a6c69b35010f

* fread files in testing to avoid calling xxd

Change-Id: Ie7fbc16b4b0b9509918d986a841f443900813bef
zhiics pushed a commit to neo-ai/tvm that referenced this pull request Apr 17, 2020
ritabeczi commented Aug 14, 2023

May I ask for more details, such as which MISRA rules (MISRA-C:2004, MISRA-C++:2008, MISRA-C:2012) are addressed, other than what is mentioned in #3159? Thanks!
