Fancy reports #51
Conversation
Good catch! Regarding the tests, my main gripe with this approach is that they don't really test report parsing in general, only this one specific report, which may be incomplete or outdated. But I'm fine with it, since I don't have a better alternative in the meantime. One change I would suggest is to replace the …
Right, I see your point. I would think the reports are basically fixed based on the version used. I was thinking this could be a start because this just tests:
but this could be expanded to also test
I think ultimately we should enable the reporting features on the synthesis tests, once we have those. In the meantime we could think of a way to generalize the tests (like checking whether the correct values are extracted, matching templates, etc.).
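For concreteness, a generalized test along those lines might look like the sketch below. Everything in it is an assumption for illustration: `parse_vivado_report` is taken to return a nested dict with a `CSynthesisReport` section, and the report directories and resource keys are placeholders, not the project's actual layout.

```python
# Hypothetical generalization of test_report.py: rather than comparing
# against one full pregenerated report, check that parsing any of the
# pregenerated reports extracts the expected fields with sane values.
import pytest

from hls4ml.report import parse_vivado_report  # assumed import path

# Placeholder report directories and resource keys, for illustration only.
REPORT_DIRS = ['pregenerated/2019.2', 'pregenerated/2020.1']
EXPECTED_KEYS = {'LUT', 'FF', 'BRAM_18K', 'DSP48E'}


@pytest.mark.parametrize('report_dir', REPORT_DIRS)
def test_expected_fields_extracted(report_dir):
    report = parse_vivado_report(report_dir)
    util = report['CSynthesisReport']  # assumed section name
    # Every expected field should be extracted, whichever Vivado
    # version produced the report.
    missing = EXPECTED_KEYS - set(util)
    assert not missing, f'fields not extracted: {missing}'
    # Extracted values should parse as non-negative integers.
    for key in EXPECTED_KEYS:
        assert int(util[key]) >= 0
```

A test like this would run against every pregenerated report rather than pinning the output to one snapshot, which addresses the concern about the single report going stale.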
* Add quantized sigmoid, fix quantized tanh for QKeras (fastmachinelearning#569)
    * snapshot of beginnings
    * make version that works for Vivado, error for Quartus
    * Change order of precision from quantizer
    * add hard sigmoid and tanh
    * fix setting of slope and shift type
    * revert config parsing--seems a little strange but works
    * fix hard_sigmoid and hard_tanh for streaming
    * update pytest for quantized tanh and sigmoid
    * remove inadvertently included matplotlib
    * add special case when W == min_width
    * fix merge of main
    * Go back to having AP_TRN and AP_WRP as defaults
    * handle case when use_real_tanh is not defined
    * make the activations use AP_RND_CONV (and AP_SAT) by default
    * remove use of use_real_tanh in test since not always supported
    * fix incorrect default types for Keras (not QKeras) hard_sigmoid
    * Mostly fix up things for Quartus
    * get rid of intermediate cast
    * fix an i++ compilation issue
    * Quartus seems to not like ac_fixed<1,0,false>, so make it 2 bits
    * fix activation quantizer
    * make sat, round default activation parameters, don't need to set again
    * Make the slope and shift not be configurable for HardActivation
    * some pre-commit fixes
    * pre-commit //hls to // hls fixes
    * update CI version
    * fixes for parsing errors from pre-commits
    * remove qactivation from list of activation_layers
* print_vivado_report function for nicer reports (fastmachinelearning#730)
    * print_vivado_report function for fancier reports
    * Fancy reports (#51)
    * fix uram divide by 0
    * add test
    * fix parsing of vsynth in 2020.1; add test
    * Update test_report.py
    * exclude pregenerated reports
    * Co-authored-by: Javier Duarte <[email protected]>

Co-authored-by: Jovan Mitrevski <[email protected]>
Co-authored-by: Vladimir <[email protected]>
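Since the commits center on `print_vivado_report`, a usage sketch may help; note that the import path and signatures below are inferred from the function name in the commit list, not confirmed against the merged code.

```python
# Hypothetical usage of the fancier report printing added in
# fastmachinelearning#730 (assumed API, inferred from commit messages).
from hls4ml.report import parse_vivado_report, print_vivado_report

# 'my_project_dir' is a placeholder for an hls4ml output directory that
# already contains Vivado HLS / vsynth reports from a completed build.
report = parse_vivado_report('my_project_dir')
print_vivado_report(report)
```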
fix parsing of the vsynth report (CLB vs. Slice naming) in 2020.1
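The CLB vs. Slice distinction refers to the Vivado utilization report's row names: depending on version and part family, the table reads "Slice LUTs"/"Slice Registers" or "CLB LUTs"/"CLB Registers", which breaks a parser that matches only one form. A version-tolerant parser might look like the sketch below; the regex and dict keys are illustrative, not hls4ml's actual implementation.

```python
import re

# Sketch of version-tolerant parsing of a Vivado utilization (vsynth)
# report. Rows look like "| Slice LUTs* | 1234 | ..." in older reports
# and "| CLB LUTs* | 1234 | ..." in e.g. 2020.1 UltraScale+ reports;
# accepting either prefix covers both. Illustrative only.
_UTIL_RE = re.compile(r'\|\s*(?:Slice|CLB)\s+(LUTs|Registers)\*?\s*\|\s*(\d+)')


def parse_utilization(report_text):
    """Extract LUT/FF counts from a vsynth report, either naming scheme."""
    util = {}
    for match in _UTIL_RE.finditer(report_text):
        resource, used = match.groups()
        util['LUT' if resource == 'LUTs' else 'FF'] = int(used)
    return util
```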