Reshape fixes: don't repack stream for flatten; remove final reshape #443

Merged: thesps merged 5 commits into master from reshape_issues on Nov 9, 2021

Conversation

@jmduarte (Member) commented Nov 7, 2021

2 reshape-related fixes:

  • Don't repack the stream for Flatten layers (io_stream): a flatten does not change the element order of the stream, so the extra repack stage only costs resources
  • Remove a Reshape when it is the final layer of the model: removed for io_parallel, with a warning printed for both io_parallel and io_stream

Utilization estimate from the HLS synthesis report:

+-----------------+---------+-------+--------+-------+-----+
|       Name      | BRAM_18K| DSP48E|   FF   |  LUT  | URAM|
+-----------------+---------+-------+--------+-------+-----+
|DSP              |        -|      -|       -|      -|    -|
|Expression       |        -|      -|       0|      2|    -|
|FIFO             |      163|      -|    6241|  13136|    -|
|Instance         |       99|      0|   30015|  34856|    0|
|Memory           |        -|      -|       -|      -|    -|
|Multiplexer      |        -|      -|       -|      9|    -|
|Register         |        -|      -|       1|      -|    -|
+-----------------+---------+-------+--------+-------+-----+
|Total            |      262|      0|   36257|  48003|    0|
+-----------------+---------+-------+--------+-------+-----+
|Available        |      280|    220|  106400|  53200|    0|
+-----------------+---------+-------+--------+-------+-----+
|Utilization (%)  |       93|      0|      34|     90|    0|
+-----------------+---------+-------+--------+-------+-----+
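The second fix lends itself to a graph-level optimizer pass. Below is a minimal sketch of the idea, not the code merged in this PR: it assumes hls4ml's `OptimizerPass` interface (`match`/`transform`) and that helpers such as `node.get_output_nodes()`, `model.remove_node()` and the `IOType` config key behave as their names suggest.

```python
# Hypothetical sketch (not the implementation merged here): drop a Reshape
# that is the last node of the model graph, warn otherwise.
from hls4ml.model.optimizer import OptimizerPass


class RemoveFinalReshapeSketch(OptimizerPass):
    def match(self, node):
        # Assumed API: 'class_name' identifies the layer type, and a node
        # with no consumers is the final node of the graph.
        return node.class_name == 'Reshape' and not node.get_output_nodes()

    def transform(self, model, node):
        io_type = model.config.get_config_value('IOType')
        if io_type == 'io_parallel':
            # The output is read back as a flat array anyway, so a trailing
            # reshape only permutes indices and can be removed outright.
            print(f'WARNING: final layer {node.name} is a Reshape; removing it')
            model.remove_node(node)
            return True  # the graph changed
        # For io_stream the node is kept, but the user is warned that the
        # requested output shape is ignored by the flat output stream.
        print(f'WARNING: final layer {node.name} is a Reshape; '
              'output will stay flattened for io_stream')
        return False
```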

@jmduarte jmduarte requested a review from thesps November 7, 2021 22:39
@jmduarte jmduarte added this to the v0.6.0 milestone Nov 8, 2021
@thesps (Contributor) commented Nov 8, 2021

I'm happy with the state of both developments:

  • Don't repack stream for flatten: tested example-models/keras/qkeras_mnist_cnn on a pynq-z2, getting identical performance to master (NN accuracy & throughput) with fewer resources (a sketch of the idea follows the table below)
  • RemoveFinalReshape works as expected

Only question: do you want the CI test for RemoveFinalReshape from #444? (It's a PR to this branch.)

Some detail on the resource saving for that model:

| Version | Total LUTs | Logic LUTs | LUTRAMs | SRLs | FFs | RAMB36 | RAMB18 | DSP48 Blocks |
|---|---|---|---|---|---|---|---|---|
| incl. repack-4-Flatten (master) | 33264 (62.53%) | 31843 (59.86%) | 22 (0.13%) | 1399 (8.04%) | 64112 (60.26%) | 4 (2.86%) | 112 (40.00%) | 58 (26.36%) |
| repack-4-Flatten removed (this PR) | 29404 (55.27%) | 27974 (52.58%) | 22 (0.13%) | 1408 (8.09%) | 44830 (42.13%) | 4 (2.86%) | 111 (39.64%) | 58 (26.36%) |
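For reference, here is a minimal sketch of the idea behind the first bullet, assuming a repack-insertion pass along the lines of hls4ml's io_stream handling; the `Repack` layer name, the `target_shape` attribute and the `make_node`/`replace_node` helpers are assumptions, not necessarily what this PR touches.

```python
# Hypothetical sketch: only insert a stream Repack stage for reshapes that
# actually reorder data; a 1D target shape (Flatten) leaves the stream as-is.
from hls4ml.model.optimizer import OptimizerPass


class ReshapeStreamSketch(OptimizerPass):
    def match(self, node):
        if node.class_name != 'Reshape':
            return False
        # A flatten produces a 1D tensor; the streamed element order is
        # unchanged, so the repack stage (and its FIFOs/LUTs) is unnecessary.
        return len(node.get_attr('target_shape')) > 1

    def transform(self, model, node):
        # Replace the Reshape with an explicit Repack layer (sketch only).
        attrs = {'target_shape': node.get_attr('target_shape')}
        repack = model.make_node('Repack', 'repack_' + node.name,
                                 attrs, node.inputs.copy())
        model.replace_node(node, repack)
        return True
```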

Add a test for a model with Reshape as the final layer
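The commit above adds exactly such a test; a minimal sketch of what a pytest case along these lines could look like (layer sizes, output directory and tolerance here are illustrative, not the values added in #444):

```python
# Hypothetical sketch of a test for a model whose final layer is a Reshape;
# shapes, directory names and the tolerance are illustrative only.
import numpy as np
import pytest
from tensorflow.keras.layers import Dense, Reshape
from tensorflow.keras.models import Sequential

import hls4ml


@pytest.mark.parametrize('io_type', ['io_parallel', 'io_stream'])
def test_final_reshape(io_type):
    model = Sequential([
        Dense(12, activation='relu', input_shape=(8,)),
        Reshape((3, 4)),  # final layer: removed (io_parallel) or warned about (io_stream)
    ])

    config = hls4ml.utils.config_from_keras_model(model, granularity='model')
    hls_model = hls4ml.converters.convert_from_keras_model(
        model, hls_config=config, io_type=io_type,
        output_dir=f'hls4mlprj_final_reshape_{io_type}')
    hls_model.compile()

    X = np.random.rand(50, 8).astype(np.float32)
    y_keras = model.predict(X)
    # hls4ml returns a flat output; compare against the reshaped Keras output.
    y_hls = hls_model.predict(X).reshape(y_keras.shape)

    np.testing.assert_allclose(y_hls, y_keras, rtol=0, atol=0.05)
```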
@jmduarte (Member, Author) commented Nov 8, 2021

Yes, now merged!

@thesps thesps merged commit 7f75add into master Nov 9, 2021
thesps added a commit that referenced this pull request Nov 9, 2021
…443)

* fix 2 reshape issues: don't reshape streams for flatten and remove final reshape

* Add a test for a model with Reshape as the final layer

* swap

* only remove for io_parallel; warn for both io_parallel and io_stream

Co-authored-by: Sioni Summers <[email protected]>
thesps added a commit that referenced this pull request Nov 11, 2021
jmitrevs pushed a commit that referenced this pull request Dec 2, 2021
* fix batched multiple inputs

* Fixed 'qkeras_mnist_dense' example build problem #423

* Update for pyyaml 6.0 (#435)

* yaml.safe_load instead of yaml.load

* Use yaml.safe_load in converters __init__.py

* `axi_stream_driver` update (#420)

* Update `zcu102` and `pynq-z2` `axi-stream` driver

* Reshape fixes: don't repack stream for flatten; remove final reshape (#443)

* fix 2 reshape issues: don't reshape streams for flatten and remove final reshape

* Add a test for a model with Reshape as the final layer

* swap

* only remove for io_parallel; warn for both io_parallel and io_stream

Co-authored-by: Sioni Summers <[email protected]>

* Reorder loops in im2col_2d_cl given resource strategy issue. Reenable relevant test. Use 5000 MNIST samples rather than full dataset for faster testing

* Support applying Softmax over multidimensional tensors (#384)

* Support softmax over multidimensional tensors

* Style cleanup

* Added axis part in keras_to_hls.py

* Added some extensions to test_softmax.py, but multidimensional softmax still gets poor performance (i.e. below the threshold set in the assertion)

* Clean up the softmax test

* Make sure io_parallel softmax is not used on multi-dim input

Co-authored-by: nicologhielmetti <[email protected]>

* Disable some unsupported layers

* Set appropriate data type for quantized_relu activations

* Display unsigned types properly in profiling

* Add a QONNX test, update the image

* Don't convert Quant to BatchNorm. Convert weight-Quant to Constant, and activation-quant to Activation

Co-authored-by: Javier M. Duarte <[email protected]>
Co-authored-by: Thea Aarrestad <[email protected]>
Co-authored-by: David Siorpaes <[email protected]>
Co-authored-by: Nicolò Ghielmetti <[email protected]>
Co-authored-by: vloncar <[email protected]>
@jmduarte jmduarte deleted the reshape_issues branch December 10, 2021 04:47
calad0i pushed a commit to calad0i/hls4ml that referenced this pull request Jul 1, 2023
Labels: None yet
Projects: None yet

Development

Successfully merging this pull request may close these issues:

  • Compilation error with concatenate layer
  • Flatten layer creating a lot of logic
3 participants