QONNX ingestion for Vivado and Quartus #591

Closed · wants to merge 62 commits

Commits
37d4d4c  add test for reshape (jmitrevs, Dec 8, 2021)
7c4b6c7  snapshot of trying to fix inplace variables (jmitrevs, Apr 11, 2022)
0b75fd0  fix parallel reshape (jmitrevs, Apr 11, 2022)
329cc4c  Fix inplace usage for stream (jmitrevs, Apr 12, 2022)
17846f2  fix comment spelling, formatting (jmitrevs, Apr 12, 2022)
966d257  snapshot implementing qonnx in new master branch (jmitrevs, Apr 19, 2022)
37eed5f  another snapshot, to try on another computer (jmitrevs, Apr 20, 2022)
71228e7  fix parsing of dense (jmitrevs, Apr 23, 2022)
5c8c9f7  another snapshot, before updating normalize setup (jmitrevs, Apr 28, 2022)
b18a995  Make the size of bn scale and bias what they really are (jmitrevs, Apr 28, 2022)
af4f66d  don't override inferred types in test (jmitrevs, Apr 28, 2022)
f536fae  make n_scale_bias not be a python parameter (jmitrevs, Apr 28, 2022)
9f771a2  Add broadcast shape for batchnorm (jmitrevs, Apr 28, 2022)
bef93d9  make shape comparison more robust (jmitrevs, Apr 29, 2022)
26d8f67  create BatchNormalization layer initializer for broadcast (jmitrevs, Apr 29, 2022)
8b6a8df  snapshot, parse CNV, but incorrect result (jmitrevs, Apr 29, 2022)
3956a93  another snapshot, towards fixing cnv (jmitrevs, May 3, 2022)
a220ad5  add strategy to values copied (jmitrevs, May 3, 2022)
71300fb  Fix CNV parsing (jmitrevs, May 3, 2022)
b3d7f49  Ingest qonnx jettagging (#538) (jmitrevs, May 6, 2022)
86b31ad  Merge remote-tracking branch 'upstream/master' into ingest-qonnx-master (jmitrevs, May 10, 2022)
16f4765  make model.output consistent in taking variable name in all cases, no… (jmitrevs, May 11, 2022)
83c627d  snapshot of working towards quartus fix (jmitrevs, May 16, 2022)
5ae179a  Work around apparent mac clang bug (jmitrevs, May 16, 2022)
46c3f6b  Add passes that were forgotten in previous commits (jmitrevs, May 17, 2022)
fa5aba0  add more quartus qonnx tests--should maybe parametrize in the future (jmitrevs, Jun 1, 2022)
e00cfba  Some code cleanup (jmitrevs, Jun 1, 2022)
6d6e81b  reshape and transpose constant fusion (jmitrevs, Jun 1, 2022)
4911f71  update reshape test to include Quartus (jmitrevs, Jun 2, 2022)
9535f16  remove cppname (jmitrevs, Jun 2, 2022)
efe4f79  remove cppname from InplaceVariable stuff (jmitrevs, Jun 3, 2022)
e392fbc  Merge pull request #580 from jmitrevs/ingest-qonnx-quartus (jmitrevs, Jun 21, 2022)
1d88051  Merge remote-tracking branch 'upstream/master' into ingest-qonnx-master (jmitrevs, Jun 22, 2022)
166da8b  partial first attempt to add tracing to quartus backend (jmitrevs, Jun 24, 2022)
e98cd71  continue adding tracing for quartus (jmitrevs, Jun 24, 2022)
8674a8a  Add trace pytest, fix bug uncovered in pytest (jmitrevs, Jun 24, 2022)
c298966  add docstring (jmitrevs, Jun 24, 2022)
fd2ef95  move batchnorm broadcast to fpga_backend from vivado_backend (jmitrevs, Jul 6, 2022)
a65192b  Revert "Work around apparent mac clang bug" (jmitrevs, Jul 6, 2022)
1932b8c  Merge remote-tracking branch 'upstream/master' into ingest-qonnx-master (jmitrevs, Jul 7, 2022)
9fca924  Update test image and qonnx test (thesps, Jul 8, 2022)
5920d1d  fix extras for quartus_backend when we have optimization_passes (jmitrevs, Jul 21, 2022)
95d4c41  Delete old and broken Gemm parsing: use qonnx package to convert to … (jmitrevs, Jul 21, 2022)
51d29ce  Merge remote-tracking branch 'upstream/main' into ingest-qonnx-master (jmitrevs, Jul 22, 2022)
8a17e2f  Merge remote-tracking branch 'upstream/main' into ingest-qonnx-master (jmitrevs, Sep 1, 2022)
0de7636  Merge remote-tracking branch 'upstream/main' into ingest-qonnx-master (jmitrevs, Sep 24, 2022)
13b1306  Merge remote-tracking branch 'upstream/main' into ingest-qonnx-master (jmitrevs, Oct 4, 2022)
951c2ce  remove rounding and saturation modes from accumulator when precision … (jmitrevs, Oct 10, 2022)
f77e343  Merge remote-tracking branch 'upstream/main' into ingest-qonnx-master (jmitrevs, Nov 15, 2022)
2fe81d7  fix parsing when flatten is the first layer after input (jmitrevs, Jan 6, 2023)
b9f9bec  fix fusing bn to Dense when the output name != node name (jmitrevs, Jan 6, 2023)
be3c648  Merge remote-tracking branch 'upstream/main' into ingest-qonnx-master (jmitrevs, Jan 26, 2023)
5195574  fix expected rf issues (jmitrevs, Jan 26, 2023)
cc0e5b1  make steps to support Flatten for Quartus stream (jmitrevs, Nov 22, 2022)
0110414  pre-commit fixes for onnx converters (jmitrevs, Jan 26, 2023)
6d693cc  apply pre-commit on optimizer passes (jmitrevs, Jan 27, 2023)
005fad6  backend pre-commit fixes (jmitrevs, Jan 27, 2023)
a79b926  model pre-commit fixes (jmitrevs, Jan 27, 2023)
3382967  pytest pre-commit fixes (jmitrevs, Jan 27, 2023)
c78c99e  pytest pre-commit fixes: trailing whitespace (jmitrevs, Jan 27, 2023)
388a1d4  mark batch dimension in input (jmitrevs, Jan 28, 2023)
9ddb6c7  Merge branch 'main' into ingest-qonnx-master (jmitrevs, Feb 13, 2023)
hls4ml/backends/fpga/fpga_backend.py (47 additions, 2 deletions)
@@ -13,6 +13,8 @@
     LSTM,
     Activation,
     BatchNormalization,
+    BatchNormOnnx,
+    Conv,
     Conv1D,
     Conv2D,
     Dense,
@@ -22,14 +24,17 @@
     GarNetStack,
     GlobalPooling1D,
     GlobalPooling2D,
+    MatMul,
+    Merge,
     Pooling1D,
     Pooling2D,
+    Quant,
     SeparableConv1D,
     SeparableConv2D,
     SimpleRNN,
     Softmax,
 )
-from hls4ml.model.optimizer import model_optimizer
+from hls4ml.model.optimizer import layer_optimizer, model_optimizer
 from hls4ml.model.types import (
     ExponentPrecisionType,
     FixedPrecisionType,
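The newly imported `layer_optimizer` supports the pass added at the bottom of this file: it registers a backend method as an optimizer that runs on every layer of a given class. A rough sketch of this decorator-registration pattern follows; it is a simplified illustration, not the actual hls4ml implementation (the real decorator lives in `hls4ml.model.optimizer` and differs in its details):

```python
# Simplified sketch of a per-layer optimizer registry.
_layer_optimizers = {}

def layer_optimizer(layer_class):
    '''Register the decorated function to run on every layer of layer_class.'''
    def decorator(func):
        _layer_optimizers.setdefault(layer_class, []).append(func)
        return func
    return decorator

def run_layer_optimizers(backend, model_layers):
    '''Apply each registered optimizer to the layers it was registered for.'''
    for layer in model_layers:
        for layer_class, funcs in _layer_optimizers.items():
            if isinstance(layer, layer_class):
                for func in funcs:
                    func(backend, layer)
```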
@@ -70,7 +75,18 @@ def __init__(self, name):
             attrs.append(TypeAttribute('accum'))
             self.attribute_map[layer] = attrs
 
-        rf_layers = accum_layers + [BatchNormalization, Activation, Embedding, GarNet, GarNetStack]
+        rf_layers = accum_layers + [
+            BatchNormalization,
+            BatchNormOnnx,
+            Activation,
+            Embedding,
+            GarNet,
+            GarNetStack,
+            Quant,
+            Merge,
+            MatMul,
+            Conv,
+        ]
 
         for layer in rf_layers:
             attrs = self.attribute_map.get(layer, [])
@@ -826,3 +842,32 @@ def generate_conv2d_line_buffer_fn(
     def write_hls(self, model):
         self.writer.write_hls(model)
         return True
+
+    @layer_optimizer(BatchNormalization)
+    def init_batchnormalization(self, layer):
+        '''Broadcast weights and scale if needed'''
+        input_shape = layer.get_input_variable().shape
+
+        scale = layer.weights['scale'].data_unquantized
+        bias = layer.weights['bias'].data_unquantized
+
+        n_filt = layer.get_attr('n_filt', -1)
+
+        scale_bias_shape = input_shape if n_filt == -1 else (n_filt,)
+
+        # Check shape, broadcast if needed. Don't broadcast if a squeeze makes them match.
+        if scale.shape != tuple(scale_bias_shape) and np.squeeze(scale).shape != tuple(scale_bias_shape):
+            layer.add_weights_variable(
+                name='scale',
+                data=np.broadcast_to(scale, scale_bias_shape),
+                precision=layer.get_attr("scale_precision"),
+                quantizer=layer.get_attr("scale_quantizer"),
+            )
+
+        if bias.shape != tuple(scale_bias_shape) and np.squeeze(bias).shape != tuple(scale_bias_shape):
+            layer.add_weights_variable(
+                name='bias',
+                data=np.broadcast_to(bias, scale_bias_shape),
+                precision=layer.get_attr("bias_precision"),
+                quantizer=layer.get_attr("bias_quantizer"),
+            )
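The core of the new `init_batchnormalization` pass is its shape check: scale and bias are broadcast to the target shape only when neither their raw shape nor their squeezed shape already matches it. Below is a minimal standalone sketch of that logic in plain NumPy; the shapes are hypothetical, chosen only to exercise both branches:

```python
import numpy as np

def needs_broadcast(arr, target_shape):
    '''Mirror of the check in init_batchnormalization: broadcast only if
    neither the raw shape nor the squeezed shape matches the target.'''
    target = tuple(target_shape)
    return arr.shape != target and np.squeeze(arr).shape != target

# Hypothetical per-channel scale stored as (1, 1, 4) for an input of
# shape (8, 8, 4), with no n_filt attribute set (n_filt == -1), so the
# target is the full input shape.
input_shape = (8, 8, 4)
scale = np.ones((1, 1, 4))
if needs_broadcast(scale, input_shape):
    # np.broadcast_to returns a read-only view with the requested shape
    scale = np.broadcast_to(scale, input_shape)
print(scale.shape)  # (8, 8, 4)

# A (1, 4) bias squeezes to (4,), so for n_filt == 4 (target (4,))
# no broadcast is performed.
bias = np.zeros((1, 4))
print(needs_broadcast(bias, (4,)))  # False
```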