WIP Add custom KL loss layer HLS implementation #606
Conversation
I think maybe all of this should go into contrib, since it's not used directly but uses the extensions API. Potentially make a kl_loss directory in there with these two files plus a readme explaining how to use them. I think that would be useful.
@jmitrevs readme updated!
remove trailing whitespace
When I run …

Does it succeed for you?
@jmitrevs You're probably using TF 2.8 or newer, where the information about the custom layer is not embedded in the model when saving it to disk; instead, its computation graph is embedded. So when loading the model back you get these lambda ops. You can try saving and then loading the model back and printing its …
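As background for this exchange, here is a dependency-free sketch of the serialization contract Keras expects from a custom layer. The class and names below are stand-ins for illustration, not the PR's actual code: implementing `get_config()`/`from_config()` (and supplying `custom_objects` at load time) is what lets a custom layer round-trip through save/load instead of coming back as opaque lambda ops.

```python
# Minimal stand-in mimicking the Keras custom-layer serialization contract.
# In real Keras code, KLLossLayer would subclass tf.keras.layers.Layer and
# custom_objects would be passed to tf.keras.models.load_model.

class KLLossLayer:
    def __init__(self, name='kl_loss'):
        self.name = name

    def get_config(self):
        # Everything needed to rebuild the layer goes in this dict.
        return {'name': self.name}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

# Mapping that load_model would need to resolve the custom class by name.
custom_objects = {'KLLossLayer': KLLossLayer}

layer = KLLossLayer(name='kl')
restored = custom_objects['KLLossLayer'].from_config(layer.get_config())
print(restored.name)
```

With this contract in place, the loaded model contains a real layer object whose type the hls4ml converter can dispatch on, rather than a traced computation graph.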
The model that the test seems to parse is:

…

So I think it is as @vloncar says. Let's see what options we have to proceed.
Maybe add …

Is there anything we can get from the …

My quick attempts with …
Upon digging a bit more, it turns out this implementation is problematic, not the TF version itself. Apparently we shouldn't use …
contrib/kl_layer/nnet_distance.h
    // Internal info
    static const unsigned table_size = 1024;
    static constexpr float exp_range = 1024;
Does this really need to be float? It would likely have bad QoR if it is not a power-of-two integer.
What should it be instead of float?
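To illustrate the reviewer's point, here is a Python sketch of a LUT-based `exp` of the kind HLS implementations typically use. The names and the range value are illustrative, not the PR's actual parameters. When both the table size and the input range are powers of two, the index computation reduces to shifts and bit-slicing in hardware; a non-power-of-two float range forces a real division, which is why it tends to give poor QoR:

```python
import math

TABLE_SIZE = 1024   # number of LUT entries (power of two)
EXP_RANGE = 8       # inputs saturate to [-EXP_RANGE, EXP_RANGE]; illustrative value

# Precompute the lookup table once, as HLS would at elaboration time.
exp_table = [math.exp(-EXP_RANGE + (2.0 * EXP_RANGE) * i / TABLE_SIZE)
             for i in range(TABLE_SIZE)]

def exp_lut(x: float) -> float:
    """Approximate exp(x) by table lookup with saturation at the range ends."""
    idx = int((x + EXP_RANGE) * TABLE_SIZE / (2.0 * EXP_RANGE))
    idx = max(0, min(TABLE_SIZE - 1, idx))  # clamp out-of-range inputs
    return exp_table[idx]
```

With power-of-two constants, `(x + EXP_RANGE) * TABLE_SIZE / (2 * EXP_RANGE)` is just a fixed-point add followed by taking a bit slice, with no divider instantiated.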
I think you need these changes:

    def format(self, node):
        params = self._default_config_params(node)
        params['n_in'] = node.get_input_variable(node.inputs[0]).shape[0]
        params['n_out'] = 1
        return self.template.format(**params)

around line 90 of … However, I still get a …
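For readers unfamiliar with these `format` methods: the backend fills a C++ config template with per-layer parameters via `str.format`. A minimal, self-contained sketch of that mechanism (the template text and parameter values below are hypothetical, not the real hls4ml template):

```python
# Hypothetical config template in the style of hls4ml backend templates.
# Doubled braces {{ }} produce literal braces in str.format output.
distance_config_template = (
    "struct config{index} : nnet::distance_config {{\n"
    "    static const unsigned n_in = {n_in};\n"
    "    static const unsigned n_out = {n_out};\n"
    "}};"
)

# In hls4ml these values would come from node.get_input_variable(...);
# here they are hard-coded for illustration.
params = {'index': 2, 'n_in': 8, 'n_out': 1}
config_cpp = distance_config_template.format(**params)
print(config_cpp)
```

This is why `params['n_in']` and `params['n_out']` must be set before calling `self.template.format(**params)`: every placeholder in the template has to have a matching key.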
Fixed, thanks!
Ah, I see. That's most likely because I removed the Distance class as Vladimir suggested, but it seems that required more changes than just removing the class. Also, I cannot test it locally since I get a different error when running: …
I fixed the outstanding issues. Unfortunately, … Anyway, it is ready now.
Looks good, thanks Katya!
* add kl layer
* separate hls part; clean up and add docs
* create KL layer folder in contrib and move the files there
* pass pre-commit check
* README and fix pre-commit issue
* update readme
* fix formatting
* add readme
* Update README.md: @jmitrevs readme updated!
* Update README.md: remove trailing whitespace
* Update kl_layer.py
* Rename nnet_distance.h to kl_layer.h
* Update README.md
* Update kl_layer.py
* Update kl_layer.h
* fix pre-commit
* Fix KLLoss layer example

Co-authored-by: Jovan Mitrevski <[email protected]>
Co-authored-by: Vladimir Loncar <[email protected]>
Adds an implementation of the KL loss layer used for CMS anomaly detection at L1.

An example of usage of the KL layer is in contrib/kl_layer.py, and the HLS part is in hls4ml/templates/vivado/nnet_utils/nnet_distance.h. The original implementation of the KL layer is available on the AE_L1_paper branch; this PR updates the implementation for the new layer API.

Type of change

Tests

The test creates a dummy Keras model which includes the KL loss layer, converts it to an hls4ml model, and synthesises it.
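As background, the KL term typically used in VAE-based anomaly detection is the closed-form divergence between the encoder's Gaussian N(mu, sigma^2) and the unit-Gaussian prior N(0, 1). This standard form is an assumption here; the exact expression this layer implements is in contrib/kl_layer.py.

```python
import math

def kl_divergence(mean, log_var):
    """Closed-form KL(N(mu, sigma^2) || N(0, 1)), summed over latent
    dimensions: -0.5 * sum(1 + log_var - mu^2 - exp(log_var)).
    Standard VAE form, assumed here for illustration."""
    return -0.5 * sum(
        1.0 + lv - m * m - math.exp(lv) for m, lv in zip(mean, log_var)
    )

# A unit Gaussian (mu = 0, log_var = 0) matches the prior, so the
# divergence vanishes; any other encoding gives a positive value.
print(kl_divergence([0.0, 0.0], [0.0, 0.0]))
```

The exp of the log-variance in this expression is what the LUT-based `exp` in the HLS header has to approximate.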
Test Configuration:

To run the test, run:

    python contrib/kl_layer.py

Checklist