[VTA][Relay] Relay Compilation + AutoTVM compatible operator libraries for VTA #3135

Merged
126 commits merged on Jun 28, 2019
Changes from 125 commits
72ec86c
autotvm support for conv2d operator
tmoreau89 May 3, 2019
e9c995b
removing profileOnly option
tmoreau89 May 7, 2019
2571386
removing unsupported layer
tmoreau89 May 7, 2019
77e9191
fixing bare metal test build
tmoreau89 May 7, 2019
f87417a
refactoring resnet WIP
tmoreau89 May 9, 2019
72f7c40
VTA topi support fix for NNVM
tmoreau89 May 10, 2019
bb8093d
fixing resnet18 tutorial to work with TOPI
tmoreau89 May 10, 2019
5f78355
adding bitpacking support by Marissa
tmoreau89 May 14, 2019
25c8897
no support for bitpacking below 8bits for now
tmoreau89 May 14, 2019
d15c97f
bitpacking annotations
tmoreau89 May 14, 2019
51463ff
fix
tmoreau89 May 14, 2019
8bea368
relay topi integration for vta
tmoreau89 May 14, 2019
3f31c6c
operator tagging for broadcast
tmoreau89 May 14, 2019
e7f1049
invalid shape error
tmoreau89 May 14, 2019
52c19f4
relay graph pack pass
tmoreau89 May 14, 2019
ab01f07
test script for relay to vta compilation
tmoreau89 May 14, 2019
32773ad
adding nnvm graphpack:
tmoreau89 May 15, 2019
73036e3
clean up of script
tmoreau89 May 16, 2019
44b3e50
adding rpc server with fleet server registration
tmoreau89 May 17, 2019
049118c
adding license
tmoreau89 May 17, 2019
d304f64
increasing allocatable buffer size
tmoreau89 May 20, 2019
b22b96c
adding bitstream programming in conv2d test; support for getting remo…
tmoreau89 May 21, 2019
2aed8e6
removing printfs
tmoreau89 May 21, 2019
897c08e
adding option to skip execution in simulator
tmoreau89 May 21, 2019
f956f15
InvalidShapeError reporting
tmoreau89 May 21, 2019
48ad24b
reset the xlnk driver before every FPGA program
tmoreau89 May 21, 2019
c693522
key flag used when building VTA target
tmoreau89 May 21, 2019
04d1788
initial conv2d autotuning support
tmoreau89 May 21, 2019
29ebd80
edits to tune_conv2d.py
tmoreau89 May 21, 2019
7e633cd
exhaustive search
tmoreau89 May 21, 2019
67b49c7
logging simulator stats in autoTVM
tmoreau89 May 22, 2019
378b3a5
tuning over all resnet layers
tmoreau89 May 22, 2019
e3656ca
removing sim stats from log for now due to tophub issues
tmoreau89 May 24, 2019
8186632
autoTVM task extraction for VTA (nnvm for now)
tmoreau89 May 24, 2019
51773ec
merge fix
tmoreau89 May 29, 2019
3ad29a4
Insert stop_fusion for vta.
ZihengJiang May 29, 2019
df67958
Update.
ZihengJiang May 31, 2019
7b2d306
fix bug from relay build config change
tmoreau89 Jun 4, 2019
d539d15
typo fix
tmoreau89 Jun 4, 2019
b107718
typo fix
tmoreau89 Jun 4, 2019
c5936ba
Fix for tvm::build
ZihengJiang Jun 4, 2019
96b7529
relay task extraction for VTA (wip)
tmoreau89 Jun 5, 2019
fadd29d
refactor relay to vta compilation script
tmoreau89 Jun 5, 2019
6ba0fff
further refactor, cleanup
tmoreau89 Jun 5, 2019
4a34d4b
relay based task extraction working
tmoreau89 Jun 6, 2019
59f1c02
autotuning script refactor
tmoreau89 Jun 6, 2019
51f1ee0
refactoring, debug runtime
tmoreau89 Jun 7, 2019
e92c0c2
removing debug messages
tmoreau89 Jun 7, 2019
affd158
proper argparsing, and target setting
tmoreau89 Jun 10, 2019
8f277bf
adding dense tuning
tmoreau89 Jun 10, 2019
be4d3a1
updated tutorial to use Relay
tmoreau89 Jun 10, 2019
05b08fc
setup for colab
tmoreau89 Jun 11, 2019
1ef1e25
fix url
tmoreau89 Jun 12, 2019
1619a11
dense operator placeholder
tmoreau89 Jun 13, 2019
e0e1bc7
fix support for pass manager
tmoreau89 Jun 13, 2019
0b4addb
dense op benchmark
tmoreau89 Jun 14, 2019
d16f91c
getting rid of kwargs usage
tmoreau89 Jun 14, 2019
5e81732
registration of dense definition and schedule for vta
tmoreau89 Jun 14, 2019
52880c9
error reporting
tmoreau89 Jun 14, 2019
d0b2ade
dense support
tmoreau89 Jun 14, 2019
5e100b5
remove use of kwargs
tmoreau89 Jun 14, 2019
28b976f
update dense schedule
tmoreau89 Jun 14, 2019
eafd93e
fix API change from PR3353
tmoreau89 Jun 18, 2019
a333a07
fixing flop derivation bug
tmoreau89 Jun 18, 2019
1c4e950
dense operator tuning
tmoreau89 Jun 18, 2019
af5cfd4
tuning conv2d only
tmoreau89 Jun 18, 2019
a04a3cb
skip dense layer in quant, cleanup
tmoreau89 Jun 19, 2019
db7462d
support for callable build func
tmoreau89 Jun 19, 2019
ae413e5
multiprocessing bug fix
tmoreau89 Jun 19, 2019
6e3e5b8
doc
tmoreau89 Jun 19, 2019
432a2cc
skip dense layer
tmoreau89 Jun 19, 2019
794ce52
cleanup
tmoreau89 Jun 19, 2019
80c4f6b
clean up
tmoreau89 Jun 19, 2019
ab1f6cd
this ensures that relay to vta compilation works for resnet-18
tmoreau89 Jun 19, 2019
cce05da
autotvm task extraction test for VTA
tmoreau89 Jun 19, 2019
4be3cbc
adding headers
tmoreau89 Jun 19, 2019
ab3069e
missing headers
tmoreau89 Jun 19, 2019
67ae8d1
header
tmoreau89 Jun 19, 2019
5c86609
rename test file
tmoreau89 Jun 19, 2019
19f51fc
lint fix
tmoreau89 Jun 19, 2019
49689bc
another set of lint fixes
tmoreau89 Jun 19, 2019
e6f2187
lint fix
tmoreau89 Jun 19, 2019
2a1b76e
compiler warnings
tmoreau89 Jun 19, 2019
b0fdab0
removing ci tests for now that require changes to the packages on the…
tmoreau89 Jun 19, 2019
a6ffab3
ci fix due to TaskExtractEnv API change
tmoreau89 Jun 19, 2019
30e8ad0
lint fix
tmoreau89 Jun 19, 2019
07eb36e
reorganize vta tutorial page; added more comments to e2e resnet
tmoreau89 Jun 19, 2019
f18de91
missing readme file for sphinx gallery
tmoreau89 Jun 19, 2019
0985a21
ci fix
tmoreau89 Jun 19, 2019
0d454d8
quantization ci fix
tmoreau89 Jun 19, 2019
655c0a5
ci fix for nnvm task extraction
tmoreau89 Jun 19, 2019
32bb0d4
bug fix
tmoreau89 Jun 19, 2019
a444f03
default case in operator override to prevent sphinx gallery issues
tmoreau89 Jun 20, 2019
31beec6
deprecating nnvm for VTA
tmoreau89 Jun 20, 2019
fa73537
refactoring
tmoreau89 Jun 20, 2019
401baa7
fix naming
tmoreau89 Jun 20, 2019
f1b810e
annotation ops
tmoreau89 Jun 20, 2019
51acba8
typo fix
tmoreau89 Jun 20, 2019
819e2d9
autoTVM tutorial for VTA
tmoreau89 Jun 20, 2019
ff20dc5
bug fix and tweaking output
tmoreau89 Jun 20, 2019
772a837
addressing reviews
tmoreau89 Jun 21, 2019
a692d2f
fix
tmoreau89 Jun 21, 2019
b2d060a
Update.
ZihengJiang Jun 23, 2019
5215d62
Update.
ZihengJiang Jun 23, 2019
3cb83a6
addressing comments
tmoreau89 Jun 24, 2019
42a447c
addressing more comments
tmoreau89 Jun 24, 2019
1c52ed1
clean up
tmoreau89 Jun 24, 2019
3e7aed3
comment
tmoreau89 Jun 24, 2019
6f9037f
adding comment
tmoreau89 Jun 24, 2019
b0d09c1
unify the AutoTVM builder
tmoreau89 Jun 24, 2019
aa02859
lint fix
tmoreau89 Jun 24, 2019
7afb87e
bug fix
tmoreau89 Jun 24, 2019
aee8f05
reflecting update on qconfig
tmoreau89 Jun 25, 2019
a25bcbf
fixing incorrect target initialization
tmoreau89 Jun 25, 2019
a69250a
proper checking
tmoreau89 Jun 25, 2019
d5ba66e
unused arg
tmoreau89 Jun 25, 2019
39a9d62
adding a TODO to address later, bug fix
tmoreau89 Jun 25, 2019
3f60022
merge fix
tmoreau89 Jun 25, 2019
8df123a
merge fix
tmoreau89 Jun 25, 2019
6c2e142
merge fixes
tmoreau89 Jun 25, 2019
288883a
merge fix
tmoreau89 Jun 25, 2019
4a61b1f
guard to avoid errors when target is set as string
tmoreau89 Jun 25, 2019
bf6df69
reverting fix
tmoreau89 Jun 25, 2019
0a5b599
fix
tmoreau89 Jun 25, 2019
f8e629f
removing unused comment
tmoreau89 Jun 26, 2019
867ebdf
guarding against improperly initialized TVM targets
tmoreau89 Jun 27, 2019
24 changes: 5 additions & 19 deletions vta/python/vta/top/arm_conv2d.py → apps/pynq_rpc/start_rpc_server_to_tracker.sh
100644 → 100755
@@ -1,3 +1,4 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
@@ -14,24 +15,9 @@
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Reuse conv2d schedule from ARM CPU"""
PROJROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )/../../" && pwd )"

import tvm

from topi.nn import conv2d, conv2d_alter_layout
from topi import generic

@conv2d.register(["vtacpu", "vta"])
def compute(*args, **kwargs):
with tvm.target.arm_cpu("vtacpu"):
return conv2d(*args, **kwargs)

@generic.schedule_conv2d_nchw.register(["vtacpu", "vta"])
def schedule(*args, **kwargs):
with tvm.target.arm_cpu("vtacpu"):
return generic.schedule_conv2d_nchw(*args, **kwargs)

@conv2d_alter_layout.register(["vtacpu", "vta"])
def alter(*args, **kwargs):
with tvm.target.arm_cpu("vtacpu"):
return conv2d_alter_layout(*args, **kwargs)
export PYTHONPATH=${PYTHONPATH}:${PROJROOT}/python:${PROJROOT}/vta/python
export PYTHONPATH=${PYTHONPATH}:/home/xilinx/pynq
python3 -m vta.exec.rpc_server --tracker fleet:9190 --key pynq
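
This script registers the Pynq board with an AutoTVM RPC tracker at fleet:9190 under the device key pynq. Once the server is running, a client can request the board through the tracker; a minimal sketch using the standard tvm.rpc API (tracker host, port, and key taken from the script above):

from tvm import rpc

# Connect to the tracker the board registered with (host/port from the script)
tracker = rpc.connect_tracker("fleet", 9190)
# Request any free device registered under the "pynq" key
remote = tracker.request("pynq")
ctx = remote.cpu(0)  # host CPU context on the board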
5 changes: 4 additions & 1 deletion docs/conf.py
@@ -215,7 +215,10 @@ def run_doxygen(folder):
'../tutorials/autotvm',
'../tutorials/dev',
'../tutorials/topi',
'../tutorials/deployment'])
'../tutorials/deployment',
'../vta/tutorials/frontend',
'../vta/tutorials/optimize',
'../vta/tutorials/autotvm'])

def generate_doxygen_xml(app):
"""Run the doxygen make commands if we're on the ReadTheDocs server"""
12 changes: 6 additions & 6 deletions nnvm/python/nnvm/top/nn.py
@@ -78,7 +78,7 @@ def schedule_log_softmax(_, outs, target):
def compute_dense(attrs, inputs, _):
"""Compute definition of dense"""
if attrs.get_bool("use_bias"):
return topi.nn.dense(inputs[0], inputs[1], bias=inputs[2])
return topi.nn.dense(inputs[0], inputs[1], inputs[2])
return topi.nn.dense(inputs[0], inputs[1])

@reg.register_schedule("dense")
@@ -114,25 +114,25 @@ def compute_conv2d(attrs, inputs, _):
if groups == 1 and layout == 'NCHW4c' and inputs[0].dtype == 'int8':
# pylint: disable=assignment-from-no-return
out = topi.nn.conv2d(inputs[0], inputs[1], strides, padding,
dilation, layout, out_dtype=out_dtype)
dilation, layout, out_dtype)
# pylint: enable=assignment-from-no-return
elif groups == 1:
out = topi.nn.conv2d(
inputs[0], inputs[1], strides, padding, dilation, layout, out_dtype=out_dtype)
inputs[0], inputs[1], strides, padding, dilation, layout, out_dtype)
elif layout == "NCHW" and \
groups == get_const_int(inputs[0].shape[1]) and \
groups == channels:
out = topi.nn.depthwise_conv2d_nchw(
inputs[0], inputs[1], strides, padding, dilation, out_dtype=out_dtype)
inputs[0], inputs[1], strides, padding, dilation, out_dtype)
elif layout in ["NCHW", "NCHW4c"]:
out = topi.nn.group_conv2d_nchw(inputs[0], inputs[1], strides, padding, dilation, groups,
out_dtype=out_dtype)
out_dtype)
elif layout == "NHWC" and \
kernel_layout == "HWOI" and \
groups == get_const_int(inputs[0].shape[3]) and \
groups == channels:
out = topi.nn.depthwise_conv2d_nhwc(
inputs[0], inputs[1], strides, padding, dilation, out_dtype=out_dtype)
inputs[0], inputs[1], strides, padding, dilation, out_dtype)
else:
raise ValueError("not support arbitrary group number for now")

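The only substantive change in this file is that out_dtype is now passed positionally rather than as a keyword, matching the "getting rid of kwargs usage" commits above; a plausible motivation is that AutoTVM's tracing records TOPI call arguments as plain tuples, so keyword arguments are avoided (an assumption, not stated in the PR). A minimal sketch with a hypothetical int8 workload:

import tvm
import topi

# Hypothetical int8 conv2d workload; out_dtype is the last positional argument
data = tvm.placeholder((1, 16, 56, 56), name="data", dtype="int8")
kernel = tvm.placeholder((16, 16, 3, 3), name="kernel", dtype="int8")
out = topi.nn.conv2d(data, kernel, 1, 1, 1, "NCHW", "int32")
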
25 changes: 13 additions & 12 deletions python/tvm/autotvm/graph_tuner/utils/traverse_graph.py
@@ -65,18 +65,19 @@ def expr2graph(expr, target_ops, node_dict, node_list):
% op_name)
topi_funcs += OP2COMPUTE[op_name]
env.reset(topi_funcs)
_expr2graph_impl(expr, target_ops, node_dict, node_list)
task_pos = 0
for node_entry in node_list:
if node_entry["op"] in target_ops:
task_name, args = env.task_collection[task_pos]
task = autotvm.task.create(task_name, args,
target="llvm",
target_host=None,
template_key='direct')
node_entry["workloads"] = [task.workload]
node_entry["topi_op"] = [task_name]
task_pos += 1
with env:
_expr2graph_impl(expr, target_ops, node_dict, node_list)
task_pos = 0
for node_entry in node_list:
if node_entry["op"] in target_ops:
task_name, args = env.task_collection[task_pos]
task = autotvm.task.create(task_name, args,
target="llvm",
target_host=None,
template_key='direct')
node_entry["workloads"] = [task.workload]
node_entry["topi_op"] = [task_name]
task_pos += 1


def _expr2graph_impl(expr, target_ops, node_dict, node_list):
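
The refactor above makes TaskExtractEnv a context manager: TOPI tracing is active only inside the with block, and recorded tasks are read back afterwards. A minimal sketch of the pattern, with the variable names taken from the enclosing function in the diff (topi_funcs collected from OP2COMPUTE as above):

from tvm.autotvm.task.topi_integration import TaskExtractEnv

env = TaskExtractEnv.get()
env.reset(topi_funcs)   # register the TOPI computes to trace
with env:               # tracing is enabled only inside this block
    _expr2graph_impl(expr, target_ops, node_dict, node_list)
# env.task_collection now holds the (task_name, args) pairs recorded above
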
15 changes: 12 additions & 3 deletions python/tvm/autotvm/measure/measure_methods.py
@@ -86,7 +86,6 @@ def __init__(self, timeout=10, n_parallel=None, build_func='default'):
build_func = ndk.create_shared
else:
raise ValueError("Invalid build_func" + build_func)

self.build_func = _wrap_build_func(build_func)
self.executor = LocalExecutor(timeout=timeout)
self.tmp_dir = tempfile.mkdtemp()
@@ -360,8 +359,13 @@ def _build_func_common(measure_input, check_gpu=None, cuda_arch=None, build_opti
if cuda_arch:
set_cuda_target_arch(cuda_arch)

with build_config(**opts):
func = build(s, args, target_host=task.target_host)
if measure_input.target.device_name == 'vta':
# if target is vta, we need to use vta build
import vta
func = vta.build(s, args, target_host=task.target_host)
else:
with build_config(**opts):
func = build(s, args, target_host=task.target_host)
return func, tuple((get_const_tuple(x.shape), x.dtype) for x in args)


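The new branch above routes VTA measurement builds through vta.build, which wraps tvm.build with the extra lowering passes VTA needs. A hedged sketch of the equivalent user-level call (the schedule s and tensor list args would come from a VTA TOPI template, not shown here):

import vta

env = vta.get_env()  # active VTA hardware configuration
# s, args: schedule and tensors from a VTA conv2d template (assumed)
func = vta.build(s, args,
                 target=env.target,            # VTA device target
                 target_host=env.target_host)  # ARM host target
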
@@ -452,6 +456,11 @@ def run_through_rpc(measure_input, build_result,
try:
# upload built module
remote = request_remote(*remote_args)
# Program the FPGA every single time when targeting VTA
if measure_input.target.device_name == 'vta':
from vta import program_fpga, reconfig_runtime
program_fpga(remote, None)
reconfig_runtime(remote)
remote.upload(build_result.filename)
func = remote.load_module(os.path.split(build_result.filename)[1])
ctx = remote.context(str(measure_input.target), 0)
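
Reprogramming the FPGA before every trial keeps the device in a known state across measurements. The same two helpers can be driven by hand against an existing RPC session; a minimal sketch (the session comes from the tracker, as in the script earlier in this diff):

from vta import program_fpga, reconfig_runtime

# remote: a tvm.rpc session to the board
program_fpga(remote, None)   # None selects the default bitstream, as above
reconfig_runtime(remote)     # rebuild the runtime to match the VTA config
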
98 changes: 61 additions & 37 deletions python/tvm/autotvm/task/nnvm_integration.py
@@ -19,23 +19,22 @@
Decorator and utilities for the integration with TOPI and NNVM

"""
import threading
import warnings
import logging


from ... import target as _target

from .task import create
from .topi_integration import TaskExtractEnv

logger = logging.getLogger('autotvm')


def extract_from_graph(graph, shape, dtype, target, symbols, target_host=None):
def extract_from_graph(graph, shape, dtype, target, symbols, params=None, target_host=None):
""" Extract tuning tasks from a nnvm graph.

This function collects tuning tasks by building the graph
with a "tracing" target and tracing all the calls to topi.
and trace all the calls to topi.

Parameters
----------
@@ -49,6 +48,8 @@ def extract_from_graph(graph, shape, dtype, target, symbols, target_host=None):
The compilation target
symbols : Array of nnvm.symbol
Array of nnvm symbols want to be tuned
params : dict of str to NDArray
The parameter dictionary.
target_host: tvm.target.Target
The host compilation target

Expand All @@ -63,8 +64,8 @@ def extract_from_graph(graph, shape, dtype, target, symbols, target_host=None):

env = TaskExtractEnv.get()

#NOTE: To add more symbols, you only need to change the following lists
#nnvm symbol -> topi compute
# NOTE: To add more symbols, you only need to change the following lists
# nnvm symbol -> topi compute
SYMBOL2TOPI = {
nnvm.sym.conv2d: [topi.nn.conv2d, topi.nn.depthwise_conv2d_nchw,
topi.nn.group_conv2d_nchw],
@@ -81,29 +82,40 @@

# run compiler to collect all TOPI calls during compilation
env.reset(topi_funcs)

# disable logger temporarily
old_state = logger.disabled
logger.disabled = True

# use a "tracing" target to do a fake compile for collecting topi calls
tracing_target = _target.create("llvm -device=tracing")
nnvm.compiler.engine.clear_cache()
nnvm.compiler.build(graph, target=tracing_target, shape=shape, dtype=dtype)

logger.disabled = old_state
with env:
# disable logger temporarily
old_state = logger.disabled
logger.disabled = True

nnvm.compiler.engine.clear_cache()
# wrap build call in thread to avoid multiprocessing problems
build_thread = threading.Thread(target=nnvm.compiler.build,
args=(graph,
target,
shape,
dtype,
params,
target_host))
build_thread.start()
build_thread.join()

logger.disabled = old_state

# create tasks for target
tasks = []
for task_name, args in env.get_tasks():
tasks.append(create(task_name, args,
target=target, target_host=target_host,
template_key='direct'))
try:
tsk = create(task_name, args,
target=target, target_host=target_host,
template_key='direct')
tasks.append(tsk)
except topi.InvalidShapeError:
print("[Warning] Invalid shape during AutoTVM task creation")

return tasks


def extract_from_multiple_graph(graphs, shapes, dtypes, target, symbols, target_host=None):
def extract_from_multiple_graph(graphs, shapes, dtypes, target, symbols, params, target_host=None):
""" Extract tuning tasks from multiple nnvm graphs.

This function is the multiple graph version of extract_from_graph
Expand All @@ -120,6 +132,8 @@ def extract_from_multiple_graph(graphs, shapes, dtypes, target, symbols, target_
The compilation target
symbols : Array of nnvm.symbol
Array of nnvm symbols want to be tuned
params : dict of str to NDArray
The parameter dictionary.
target_host: tvm.target.Target
The host compilation target

@@ -152,25 +166,35 @@ def extract_from_multiple_graph(graphs, shapes, dtypes, target, symbols, target_

# run compiler to collect all TOPI calls during compilation
env.reset(topi_funcs)

# disable logger temporarily
old_state = logger.disabled
logger.disabled = True

# use a "tracing" target to do a fake compile for collecting topi calls
tracing_target = _target.create("llvm -device=tracing")

nnvm.compiler.engine.clear_cache()
for graph, shape, dtype in zip(graphs, shapes, dtypes):
nnvm.compiler.build(graph, target=tracing_target, shape=shape, dtype=dtype)

logger.disabled = old_state
with env:
# disable logger temporarily
old_state = logger.disabled
logger.disabled = True

for graph, shape, dtype in zip(graphs, shapes, dtypes):
nnvm.compiler.engine.clear_cache()
# wrap build call in thread to avoid multiprocessing problems
build_thread = threading.Thread(target=nnvm.compiler.build,
args=(graph,
target,
shape,
dtype,
params,
target_host))
build_thread.start()
build_thread.join()

logger.disabled = old_state

# create tasks for target
tasks = []
for task_name, args in env.get_tasks():
tasks.append(create(task_name, args,
target=target, target_host=target_host,
template_key='direct'))
try:
tsk = create(task_name, args,
target=target, target_host=target_host,
template_key='direct')
tasks.append(tsk)
except topi.InvalidShapeError:
print("[Warning] Invalid shape during AutoTVM task creation")

return tasks
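
With the params argument threaded through, task extraction now performs a real build (wrapped in a thread) rather than a tracing-only compile, and invalid-shape workloads are skipped instead of aborting extraction. A hedged end-to-end sketch of extract_from_graph (model conversion elided; the input shape is hypothetical):

import nnvm
from tvm.autotvm.task.nnvm_integration import extract_from_graph

# graph, params: produced by an nnvm frontend converter (not shown)
tasks = extract_from_graph(
    graph,
    shape={"data": (1, 3, 224, 224)},  # hypothetical input shape
    dtype="float32",
    target=target,                     # e.g. the VTA target
    symbols=(nnvm.sym.conv2d,),        # restrict tuning to conv2d
    params=params)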