QONNX ingestion for Vivado and Quartus #591
Conversation
* allow propagating types when width != integer
* add special cases for power of 2
* move inferring of precision to separate optimizer
* update batchnorm and batchnorm/Dense/Conv fusion
* propagate const information in merge with const (e.g. add node for bias)
* restrict propagation of types for accumulator only when set by quant nodes
…t sometimes layer name
I initially made the optimizers propagate all precisions, but this proved to be bad if you left the default ap_fixed<16,6>, since you wound up with huge accumulators. I wanted to restrict the propagation to cases where you explicitly specify the precision of the inputs. The way QKeras is currently parsed, I don't think you can tell when the input is specially configured. I used the special attributes to control when to apply the optimizer. There may be a better way to indicate when to activate this optimizer.
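(For illustration, a minimal sketch of how such an attribute-gated pass could look against hls4ml's optimizer API; the class, the `quant_precision` attribute, and the layer names are hypothetical, not the code added in this PR.)

```python
from hls4ml.model.optimizer import OptimizerPass

class PropagateAccumPrecision(OptimizerPass):
    # Hypothetical pass: only act when a marker attribute (set by quant
    # nodes during parsing) says the input precision was given explicitly.
    def match(self, node):
        return node.class_name in ('Dense', 'Conv2D') and \
            node.get_attr('quant_precision') is not None

    def transform(self, model, node):
        in_precision = node.get_attr('quant_precision')
        # Derive and set the accumulator type from in_precision and the
        # weight precision here; the real derivation lives in the optimizer
        # added by this PR.
        return False  # no change to the graph structure
```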
Yeah, I agree that it's bad for 'PTQ' models, so it should be protected somehow. I had been thinking about this independently of QONNX (since it is independent of it), and in that sense it could be another standalone flow (call it type propagation or something like that). That way, controlling whether to run these passes can be left to the user, since they can already control which flows to run from the config. And if the flow is run at all, I think it's an okay policy to override any types in the config - so we wouldn't necessarily need to know whether they were configured or extracted from the model.
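(As a rough sketch of that idea, assuming hls4ml's flow registration API; the flow name and the optimizer name here are placeholders, not passes defined in this PR.)

```python
from hls4ml.model.flow import register_flow

# Group the (hypothetical) type-propagation passes into their own flow so
# users can decide whether to run it; names are illustrative only.
type_propagation_flow = register_flow(
    'type_propagation',
    ['propagate_accum_precision'],  # names of registered optimizer passes
)
```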
This pull request was presented at an hls4ml meeting: https://indico.cern.ch/event/1184299/contributions/4975803/attachments/2484362/4265432/QONNX%20Ingestion.pdf
Concerning the propagation of precisions down the graph, my plan was that once the QKeras parsing was improved, you would enable it there, too. This was meant to be an optimization that worked in all cases of QAT. But maybe it is fine to have it as a standalone flow that is enabled by default in QKeras and QONNX parsing, and enabled elsewhere if you prefer. If you control it that way, you have the advantage of being able to enable or disable it at will, even in cases that are not QKeras or QONNX. The disadvantage is that you have to enable or disable it for the whole graph, while the current method (with the planned QKeras extension) enables it per node provided the quantizations are set by QAT, but doesn't allow enabling it when doing PTQ. I could be convinced either way.
This is replaced by #832.
Description
This is the main development to ingest QONNX into hls4ml. It's a fairly large PR, and will probably take a while to review and update as needed. It has been tested on the QONNX model zoo inputs, which are included in the pytests.
It includes #562, #583, and #525 because they were largely developed here, and are generally needed for ONNX parsing. Ideally, those would go in first. (I will go through those PRs to make sure they have everything needed to be merged.)
For more information, see https://indico.cern.ch/event/1184299/contributions/4975803/attachments/2484362/4265432/QONNX%20Ingestion.pdf
Type of change
Tests
The test_qonnx.py file is the main source of tests, mainly running over the QONNX model zoo inputs; a rough sketch of such a test is shown below.
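(A minimal sketch of what one of these tests might look like; the model path, the config helper, and the backend choice are placeholders, not necessarily what test_qonnx.py does.)

```python
import onnx
import hls4ml

def test_qonnx_model(tmp_path):
    # Placeholder path: in practice the QONNX model zoo file would be
    # downloaded and cleaned up with the qonnx utilities first.
    onnx_model = onnx.load('model_zoo/tfc_w1a1.onnx')

    # Assumes a config helper for ONNX models analogous to
    # config_from_keras_model.
    config = hls4ml.utils.config_from_onnx_model(onnx_model)

    hls_model = hls4ml.converters.convert_from_onnx_model(
        onnx_model,
        output_dir=str(tmp_path),
        hls_config=config,
        backend='Vivado',
    )
    hls_model.compile()
    # A numerical comparison against QONNX execution of the same model
    # would follow here.
```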
Checklist