About the hardware implementation of BN and activation functions in the FINN architecture #884
-
@maltanar Are BatchNorm and activation functions collapsed into threshold operations when FINN maps a network to hardware? If so, how should I understand the transformation process, and if not, how is it handled? I checked the FINN and FINN-R papers and the readthedocs documentation, but didn't find a detailed answer, so I would appreciate any advice or guidance. In addition, I do not fully understand the introduction of PE in the FINN-R paper, and I would appreciate some guidance on that as well. Thanks.
-
Hi,
exactly, BatchNorm, non-linear activation functions, and quantizer scale values will all be collapsed into threshold operations by FINN's streamlining transformations. The threshold can either be implemented as part of an MVAU (Matrix Vector Activation Unit) or as MVU (Matrix Vector Unit) + standalone MultiThreshold. Here is a related thread on how the MultiThreshold works: #914
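As a rough numerical illustration of that collapse, here is a small NumPy sketch (all parameter values are made up, and the real FINN transformations operate on ONNX graphs rather than raw arrays): a BatchNorm followed by a uniformly quantized activation is rewritten so that the BN scale/shift is absorbed into the threshold values, leaving only comparisons that produce small integers.

```python
import numpy as np

# Made-up per-channel BatchNorm scale/shift (mean/variance already folded in)
gamma, beta = 0.5, -1.0
# Made-up decision points of a 2-bit quantized activation (outputs 0..3)
levels = np.array([0.5, 1.5, 2.5])

# Streamlining absorbs the BN into the thresholds by solving
#   gamma * x + beta >= level  <=>  x >= (level - beta) / gamma
# (for gamma > 0; a negative gamma flips the comparison direction)
thresholds = (levels - beta) / gamma

def multithreshold(x, thr):
    # MultiThreshold semantics: output = number of thresholds met or exceeded
    return np.sum(x[..., None] >= thr, axis=-1)

x = np.array([-1.0, 2.0, 4.0, 8.0])
# Reference: BatchNorm followed by the quantized activation, in float
reference = np.sum((gamma * x + beta)[..., None] >= levels, axis=-1)
assert np.array_equal(multithreshold(x, thresholds), reference)

print(thresholds)                     # [3. 5. 7.]  BN folded into the thresholds
print(multithreshold(x, thresholds))  # [0 0 1 3]   integer activation outputs
```

After this rewrite the float BN and scale factors are gone from the datapath and only integer comparisons remain, which is what makes the threshold either cheap to fuse into an MVAU or to place as a standalone MultiThreshold unit.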
In general, you would apply the default sequence of FINN transformation steps (from the "build_dataflow") to your model and see where streamlining (and later the conversion to HLS-backend layers) breaks. Then adjust the transformation steps until you arrive at a correct model where all float operations have been removed.
In practice, you can think of PE as the parallelization across the output channel dimension, while SIMD is the parallelization across the input channel + kernel dimensions.
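To make the PE/SIMD point concrete, here is a tiny folding-arithmetic sketch with made-up layer dimensions (the variable names are illustrative, not FINN's actual node attributes). It shows how PE and SIMD divide the work of one matrix-vector product and why they must divide the corresponding dimensions evenly:

```python
# Made-up convolution layer lowered to a matrix-vector product
in_channels, kernel_elems = 64, 3 * 3   # SIMD parallelizes over in_channels * kernel
out_channels = 128                      # PE parallelizes over output channels

PE, SIMD = 8, 16                        # chosen parallelism for this layer
assert out_channels % PE == 0
assert (in_channels * kernel_elems) % SIMD == 0

# Folding factors: how many sequential steps remain after parallelization
neuron_fold = out_channels // PE                      # 16
synapse_fold = (in_channels * kernel_elems) // SIMD   # 36
cycles_per_output_pixel = neuron_fold * synapse_fold  # 576

print(neuron_fold, synapse_fold, cycles_per_output_pixel)
```

Raising PE or SIMD lowers the folding (and thus the cycle count per output) at the cost of more parallel compute, which is the throughput-vs-resource trade-off discussed in the FINN-R paper.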