Add missed loaders for the ReLU and ELU activation layers #78
Comments
I can take it if there are no volunteers for this issue. I was adding the ELU layer, as I didn't see other work for me for now, and I'd like to try exporting a trained model from KotlinDL to see how it works 😉
I suppose you are mature enough for second-level tasks; let's leave this one for newcomers. It's easy to add, but it requires understanding some important things.
- missing activation layer loaders added
- two distinct examples with save/load added to the examples folder (trying to reach 0.7 accuracy)
Never mind my commits, we happened to work on it at the same time :D
@kokorins @dosier Just a tip: it's much better to first leave a comment on the issue page mentioning that you are working on a PR for an existing issue (or, if the issue does not exist yet, create it first and let the maintainers know that you are interested in working on it). That way you prevent duplicated effort and don't end up working on an issue that someone else is tackling at the same time. Further, for some issues you may need to confirm with the maintainers before starting; this prevents working on something that is not really an issue or is not a priority, at least in the near future. Currently, the maintainer of this project is @zaleslaw.
@kokorins, could you please write a comment here so that the ticket can be assigned to you?
I've just read the code a bit and decided to contribute. As there was no activity until the pull request, I wasn't expecting any parallel work. Can someone share the code style for the project (an IntelliJ XML or something along those lines)?
It's OK @kokorins, there is no code style/contribution guideline in the project yet (we are working on it). I committed some days ago.
* Added missing saving functions for ReLU and ELU activation layers (JetBrains#78)
* Reverted changes to the imports
* Added "Model name: $name" line (if `name` is not null) to summary method #120
* Added model name validation in SequentialCompilationTest.summary #120
* Added "type" part to summary line for Sequential model #120
* Wrote simple test for Functional model summary (#120): based on toyresnet; also offers a base for other Functional model tests
* Add loaders for ELU, ReLU activation layers (#78): missing activation layer loaders added; two distinct examples with save/load added to the examples folder (trying to reach 0.7 accuracy)
* Add trainability parameter to ELU, ReLU savers
* Added missing saving functions for ReLU and ELU activation layers (JetBrains#78)
* Reverted changes to the imports
* Added RepeatVector layer #123
* Added serialisation support for RepeatVector layer #123
* Wrote test for RepeatVector #123
* Made changes requested by avan (see desc.): added missing require check in init block of RepeatVector; updated docs; reformatted code; housekeeping
* Removed redundant Obs.repeat ext fun
* Made changes requested by avan (see desc.): changed require message in computeOutputShape; used inputShape.size(...) for creating shape; removed author tag
* Used `=` instead of `return` block, added TODO
* Implemented changes requested by zaleslaw: save trainability status; renamed tests
* Added test for negative `n` #123
* Added missing newline
Each layer should have an implementation for exporting/importing its configuration to/from JSON (a Keras-compatible format); see ModelLoader.kt and ModelSaver.kt.
The saving functions for the ReLU and ELU activation layers are missing.
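For illustration, here is a minimal, self-contained sketch of that save/load round trip. Everything in it (the stand-in layer classes, `LayerConfig`, `saveLayer`, `loadLayer`, and the JSON field names) is hypothetical and only mirrors the Keras-style config layout; it is not the actual ModelSaver.kt / ModelLoader.kt code.

```kotlin
// Stand-ins for the real activation layer classes (hypothetical).
data class ReLULayer(val maxValue: Float?, val negativeSlope: Float, val threshold: Float)
data class ELULayer(val alpha: Float)

// Keras-style layer record: a class name plus its config map.
data class LayerConfig(val className: String, val config: Map<String, Any?>)

// Saver: turns a layer into its Keras-compatible config.
fun saveLayer(layer: Any): LayerConfig = when (layer) {
    is ReLULayer -> LayerConfig(
        "ReLU",
        mapOf(
            "max_value" to layer.maxValue,
            "negative_slope" to layer.negativeSlope,
            "threshold" to layer.threshold
        )
    )
    is ELULayer -> LayerConfig("ELU", mapOf("alpha" to layer.alpha))
    else -> error("No saver registered for ${layer::class.simpleName}")
}

// Loader: reconstructs a layer from the config (the direction this issue adds).
fun loadLayer(saved: LayerConfig): Any = when (saved.className) {
    "ReLU" -> ReLULayer(
        maxValue = (saved.config["max_value"] as? Number)?.toFloat(),
        negativeSlope = (saved.config["negative_slope"] as? Number)?.toFloat() ?: 0f,
        threshold = (saved.config["threshold"] as? Number)?.toFloat() ?: 0f
    )
    "ELU" -> ELULayer(alpha = (saved.config["alpha"] as? Number)?.toFloat() ?: 1f)
    else -> error("Unknown layer class: ${saved.className}")
}

fun main() {
    // Round trip: save an ELU layer, then load it back from its config.
    val restored = loadLayer(saveLayer(ELULayer(alpha = 1.0f)))
    println(restored) // ELULayer(alpha=1.0)
}
```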
As an integration example, add a convolutional neural network to the examples package (in a CNN package, for example). It could be based on an improved LeNet model; train it on MNIST or Fashion-MNIST and add it as an integration test to the "examples" module tests.
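A rough sketch of what that example might look like, assuming a recent KotlinDL API (package paths, parameter names, and signatures vary between versions, so treat this as an illustration of the described flow rather than code from the repository):

```kotlin
import java.io.File
import org.jetbrains.kotlinx.dl.api.core.Sequential
import org.jetbrains.kotlinx.dl.api.core.WritingMode
import org.jetbrains.kotlinx.dl.api.core.activation.Activations
import org.jetbrains.kotlinx.dl.api.core.layer.convolutional.Conv2D
import org.jetbrains.kotlinx.dl.api.core.layer.core.Dense
import org.jetbrains.kotlinx.dl.api.core.layer.core.Input
import org.jetbrains.kotlinx.dl.api.core.layer.pooling.MaxPool2D
import org.jetbrains.kotlinx.dl.api.core.layer.reshaping.Flatten
import org.jetbrains.kotlinx.dl.api.core.loss.Losses
import org.jetbrains.kotlinx.dl.api.core.metric.Metrics
import org.jetbrains.kotlinx.dl.api.core.optimizer.Adam
import org.jetbrains.kotlinx.dl.dataset.mnist

fun main() {
    // LeNet-style model for 28x28 grayscale MNIST images.
    val model = Sequential.of(
        Input(28, 28, 1),
        Conv2D(filters = 32, kernelSize = longArrayOf(5, 5), activation = Activations.Relu),
        MaxPool2D(),
        Conv2D(filters = 64, kernelSize = longArrayOf(5, 5), activation = Activations.Relu),
        MaxPool2D(),
        Flatten(),
        Dense(outputSize = 120, activation = Activations.Relu),
        Dense(outputSize = 10, activation = Activations.Linear)
    )

    val (train, test) = mnist() // downloads MNIST on first use

    model.use {
        it.compile(
            optimizer = Adam(),
            loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
            metric = Metrics.ACCURACY
        )
        it.fit(dataset = train, epochs = 3, batchSize = 500)

        val accuracy = it.evaluate(dataset = test, batchSize = 500).metrics[Metrics.ACCURACY]
        println("Test accuracy: $accuracy") // the issue aims at roughly 0.7+

        // Persist the Keras-compatible JSON config (and weights) so the
        // new ReLU/ELU loaders can restore the model later.
        it.save(File("savedmodels/lenet"), writingMode = WritingMode.OVERRIDE)
    }
}
```

Switching to Fashion-MNIST should only require swapping the dataset loader call; the rest of the build/compile/fit/save flow stays the same.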