QONNX ingestion for Vivado and Quartus #591

Closed

jmitrevs wants to merge 62 commits
Conversation

@jmitrevs (Contributor) commented Jul 6, 2022

Description

This is the main development to ingest QONNX in hls4ml. It's a fairly large PR and will probably take a while to review and update as needed. It has been tested on the QONNX model zoo models, which are included in the pytests.

It includes #562, #583, and #525 because they were largely developed here, and are generally needed for ONNX parsing. Ideally, those would go in first. (I will go through those PRs to make sure they have everything needed to be merged.)

For more information, see https://indico.cern.ch/event/1184299/contributions/4975803/attachments/2484362/4265432/QONNX%20Ingestion.pdf

Type of change

  • Bug fix (non-breaking change that fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • A new research paper code implementation

Tests

test_qonnx.py is the main source of tests, mainly running over the QONNX model zoo inputs (see the sketch below).

  • Add tests that do synthesis
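For orientation, a hypothetical sketch of the shape these tests take. The model path is a placeholder standing in for a QONNX model zoo download, and config_from_onnx_model and the exact converter keyword arguments are assumptions, not the PR's actual test code:

```python
import hls4ml
import qonnx.util.cleanup
from qonnx.core.modelwrapper import ModelWrapper

def test_qonnx_model(tmp_path):
    # 'model.onnx' stands in for a downloaded QONNX model zoo file
    clean = str(tmp_path / 'model_clean.onnx')
    qonnx.util.cleanup.cleanup('model.onnx', out_file=clean)
    model = ModelWrapper(clean)

    config = hls4ml.utils.config_from_onnx_model(model)
    hls_model = hls4ml.converters.convert_from_onnx_model(
        model, hls_config=config, output_dir=str(tmp_path / 'hls4mlprj')
    )
    hls_model.compile()  # C-simulation compile; synthesis tests remain a TODO
```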

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas (but could do better)
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have added tests that prove my fix is effective or that my feature works.

* allow propagating types when width != integer

* add special cases for power of 2 (see the sketch after this list)

* move inferring of precision to separate optimizer

* update batchnorm and batchnorm/Dense/Conv fusion

* propagate const information in merge with const (e.g. add node for bias)

* restrict propagation of types for accumulator only when set by quant nodes
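For context on the power-of-2 special case: a QONNX Quant node whose scale is an exact power of two can be absorbed into a fixed-point type by moving the binary point, rather than emitting a multiplier. A minimal illustration of the check (a hypothetical helper, not the PR's code):

```python
import math

def po2_exponent(scale):
    """Return n if scale == 2**n exactly, else None. A power-of-2 scale can be
    folded into an ap_fixed type by shifting the binary point, avoiding a
    hardware multiply."""
    mantissa, exp = math.frexp(scale)  # scale = mantissa * 2**exp, mantissa in [0.5, 1)
    return exp - 1 if mantissa == 0.5 else None

print(po2_exponent(0.25))  # -2: fold into the fixed-point type
print(po2_exponent(0.3))   # None: needs an explicit multiply
```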
@jmitrevs (Contributor, Author)

For example, the propagate_dense_precision and propagate_conv_precision passes are obviously really nice - and could be expanded to other layers - but they look for the attributes quant_precision, weight_precision, and bias_precision, which would only be there for QONNX models, not for QKeras models (which instead add weight_quantizer and bias_quantizer). So the part of the IR that relates to quantization needs to be made consistent between the two (which may imply changes to both the QONNX and QKeras conversion).

I initially made the optimizers propagate all precisions, but this proved to be bad if you left the default ap_fixed<16,6>, since you wound up with huge accumulators. I wanted to restrict the propagation to cases where you explicitly specify the precision of the inputs. The way QKeras is currently parsed, I don't think you can tell when the input is specially configured, so I used the special attributes to control when to apply the optimizer. There may be a better way to indicate when to activate this optimizer.
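To see why propagating the default is harmful: a dense layer's accumulator must hold a sum of n_in full-precision products, so its width grows with both operand widths. A minimal sketch of that arithmetic (a hypothetical helper, not the actual pass):

```python
import math

def dense_accum_precision(n_in, in_width, in_int, w_width, w_int):
    """Conservative accumulator type for a dense layer: full-precision
    products plus ceil(log2(n_in)) guard bits so the sum cannot overflow."""
    frac = (in_width - in_int) + (w_width - w_int)        # fractional bits of each product
    integ = in_int + w_int + math.ceil(math.log2(n_in))   # integer bits incl. carry growth
    return f"ap_fixed<{frac + integ},{integ}>"

# QAT case: 8-bit inputs times 4-bit weights stay manageable...
print(dense_accum_precision(64, 8, 1, 4, 1))    # ap_fixed<18,8>
# ...but propagating the default ap_fixed<16,6> on both operands does not.
print(dense_accum_precision(64, 16, 6, 16, 6))  # ap_fixed<38,18>
```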

@thesps (Contributor) commented Jul 27, 2022

> I initially made the optimizers propagate all precisions, but this proved to be bad if you left the default ap_fixed<16,6>, since you wound up with huge accumulators. I wanted to restrict the propagation to cases where you explicitly specify the precision of the inputs. The way QKeras is currently parsed, I don't think you can tell when the input is specially configured, so I used the special attributes to control when to apply the optimizer. There may be a better way to indicate when to activate this optimizer.

Yeah, I agree that it's bad for 'PTQ' models, so it should somehow be protected. I had been thinking about this independently of QONNX (since it is independent of it), and in that sense it could be another standalone flow (call it type propagation or something like that). That way, controlling whether to run these passes can be left to the user, since they can already control which flows to run from the config. And if the flow is run at all, I think it's an okay policy to override any types in the config - so we wouldn't necessarily need to know whether they were configured or extracted from the model.
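A minimal sketch of what registering such a standalone flow could look like, assuming hls4ml's register_flow mechanism and the two pass names mentioned above (the flow name and grouping here are assumptions, not a settled design):

```python
from hls4ml.model.flow import register_flow

# Hypothetical standalone flow bundling the precision-propagation passes.
# Users could then include or exclude 'type_propagation' when choosing
# which flows to run, independent of the QONNX/QKeras frontends.
type_propagation_flow = register_flow(
    'type_propagation',
    ['propagate_dense_precision', 'propagate_conv_precision'],
)
```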

@jmitrevs (Contributor, Author)

Concerning the downward propagation of precisions, my plan was that once the QKeras parsing was improved, you would enable it there, too; this was meant to be an optimization that works in all cases of QAT. But maybe it is fine to have it as a standalone flow that is enabled by default in QKeras and QONNX parsing, and in other cases if you prefer. Controlling it that way has the advantage that you can enable or disable it at will, even in cases that are not QKeras or QONNX, but the disadvantage that you have to enable or disable it for the whole graph. The current method (with the planned QKeras extension) enables it per node, provided the quantizations are set by QAT, but doesn't allow enabling it when doing PTQ. I could be convinced either way.
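To make the trade-off concrete, a sketch of the two gating policies (the attribute name follows the ones discussed above; the flow check is hypothetical):

```python
# Per-node gating (current method): the pass fires only where QAT
# explicitly set the precision, so PTQ models are left untouched.
def should_propagate_per_node(node):
    return node.get_attr('quant_precision') is not None

# Whole-graph gating (standalone flow): one user-controlled switch,
# which also lets PTQ models opt in, but applies to every node.
def should_propagate_per_graph(enabled_flows):
    return 'type_propagation' in enabled_flows
```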

@jmitrevs marked this pull request as draft on September 1, 2022.
@jmduarte linked an issue on Dec 5, 2022 that may be closed by this pull request.
@jmduarte added this to the v0.8.0 milestone on Jun 15, 2023.
@jmitrevs mentioned this pull request on Jul 13, 2023.
@jmitrevs (Contributor, Author)

This is replaced by #832

@jmitrevs closed this on Jul 13, 2023.
Successfully merging this pull request may close these issues: error while converting onnx model to hls.