Adding a simple HSC Autoencoder as an example #136
Conversation
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

    @@            Coverage Diff            @@
    ##             main     #136      +/-  ##
    =========================================
    - Coverage   39.72%   39.64%   -0.09%
    =========================================
      Files          22       23       +1
      Lines        1815     1839      +24
    =========================================
    + Hits          721      729       +8
    - Misses       1094     1110      +16

☔ View full report in Codecov by Sentry.
src/fibad/fibad_default_config.toml (outdated)

    # Default PyTorch optimizer parameters. The keys match the names of the parameters
    lr = 0.01
    #momentum = 0.9
I am using optim.Adam as my optimizer, which doesn't take momentum as an argument. So the only way to make it work in the current implementation was to set dev_mode = true in my local config file and to comment out momentum (above) in the default configuration file. I tried setting momentum = false in my local config, but that didn't take care of it. What do you think is a better solution here, given that different optimizers may or may not accept the same arguments?

This also raises a question about the default behavior of our configuration system: it assumes defaults via the default config file and then builds a combination of the default and user-specified configs. For me, personally, this has sometimes been confusing. Couldn't we ask users to copy the default config file and modify it as needed, and then read the configuration only from that file?
These are both good questions, and they can drive a good conversation during the sync meeting today.

For the first question, regarding different arguments for different optimizers and loss functions, one thing we could potentially do is introspect the function that is about to be called (in this case optim.Adam), get the list of parameters that it can accept, and then only pass the config values that match one of those. That might be the most transparent approach for the user in general.
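A minimal sketch of that introspection idea, assuming a hypothetical helper (filter_kwargs_for and optimizer_config are illustrative, not existing fibad code):

```python
import inspect

import torch
import torch.optim as optim


def filter_kwargs_for(target, config: dict) -> dict:
    """Keep only the config entries whose keys match a parameter name of `target`."""
    accepted = inspect.signature(target).parameters
    return {key: value for key, value in config.items() if key in accepted}


# Hypothetical merged config values; `momentum` is not an Adam argument, so it is dropped.
optimizer_config = {"lr": 0.01, "momentum": 0.9}

model = torch.nn.Linear(4, 2)  # stand-in for the real model
optimizer_cls = optim.Adam
optimizer = optimizer_cls(model.parameters(), **filter_kwargs_for(optimizer_cls, optimizer_config))
```

One caveat of silently dropping keys is that typos in the config would also be ignored, so logging any discarded keys would probably be worthwhile.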
Another approach would be to create a subtable in the config file for the optimizer of choice, but this feels unnecessarily burdensome. Something like:
    [optimizer]
    name = 'torch.optim.Adam'
    ...

    [optimizer.Adam]
    lr = 0.01
    < other Adam params >

    [optimizer.Other]
    < params that are only for "Other" >
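If we did go the subtable route, consuming it could look roughly like the sketch below (the inline config text and the build_optimizer helper are hypothetical, not part of this PR):

```python
import tomllib  # Python 3.11+; on older versions use the third-party `tomli` package

import torch
import torch.optim

# Inline stand-in for the hypothetical config layout sketched above.
CONFIG_TEXT = """
[optimizer]
name = 'torch.optim.Adam'

[optimizer.Adam]
lr = 0.01
"""


def build_optimizer(model, config: dict):
    """Resolve the optimizer class from `name` and pass only its own subtable as kwargs."""
    class_name = config["optimizer"]["name"].rsplit(".", 1)[-1]  # e.g. "Adam"
    optimizer_cls = getattr(torch.optim, class_name)
    kwargs = config["optimizer"].get(class_name, {})  # contents of [optimizer.Adam]
    return optimizer_cls(model.parameters(), **kwargs)


config = tomllib.loads(CONFIG_TEXT)
optimizer = build_optimizer(torch.nn.Linear(4, 2), config)
```

This keeps each optimizer's options isolated, at the cost of a slightly noisier config file.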
Regarding the second question, about just making a copy of the config and using that: in the short term we could implement that, but we've built the config system to be able to support external model/dataset libraries as well as those that are built in.

For a user working with an external library, we would want the defaults of that library to override the fibad defaults, and then the user-specified values to override anything else. I think that for a new user, the first step of merging fibad and external defaults becomes a bit challenging (or at least annoying).
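To make that layering concrete, here is a rough sketch of the intended precedence (the dictionaries and the merge_configs helper are illustrative, not the actual fibad implementation):

```python
def merge_configs(*layers: dict) -> dict:
    """Recursively merge config dicts; later layers override earlier ones."""
    merged: dict = {}
    for layer in layers:
        for key, value in layer.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                merged[key] = merge_configs(merged[key], value)
            else:
                merged[key] = value
    return merged


# Illustrative values only, not fibad's real defaults.
fibad_defaults = {"train": {"epochs": 10}, "optimizer": {"lr": 0.01, "momentum": 0.9}}
external_defaults = {"optimizer": {"lr": 0.001}}
user_config = {"train": {"epochs": 50}}

config = merge_configs(fibad_defaults, external_defaults, user_config)
# -> {"train": {"epochs": 50}, "optimizer": {"lr": 0.001, "momentum": 0.9}}
```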
For the suggested workaround of momentum = false, there is a change that should make it work: if you git fetch, you should be able to git cherry-pick 47c42213470be3e14c9b7e569d41bde94b9337a8 in your PR branch to pull in the change.
For this PR I'd advocate cherry-picking this change, and putting the rest of the config refactor discussed above and in the meeting into an issue to be completed separately from this PR.
"%tensorboard --logdir ./results" | ||
"%tensorboard --logdir ./results\n", | ||
"\n", | ||
"# if running on a remote server, and tunnelling a connection,\n", |
It seems like there is a tensorboard integration feature buried in this comment :)
We should probably flesh it out a bit.
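As a rough illustration of what a fleshed-out version could cover (the cell contents, port number, and host name below are placeholders, not part of this PR):

```python
# Sketch of a notebook cell, not the actual cell from this PR.
%load_ext tensorboard
%tensorboard --logdir ./results --port 6006

# If the notebook runs on a remote server, tunnel the port from a local machine first,
# e.g. `ssh -N -L 6006:localhost:6006 user@remote-host`,
# then open http://localhost:6006 in a local browser.
```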
src/fibad/models/model_registry.py (outdated)

    del arguments["name"]
    return optimizer_cls(self.parameters(), **arguments)
You may want to rebase this on main before merging; this looks like a reappearance of one of Drew's earlier commits from PR #132. Git should sort this out in the rebase and give you a clean diff here.
Some minor comments but nothing big. This should get merged even if there is disagreement on my commentary.
During installation, we only send error messages to stdout. Specifying this now in the messages that are printed during installation.
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4 to 5.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](codecov/codecov-action@v4...v5)

updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

Bumps [dawidd6/action-download-artifact](https://github.com/dawidd6/action-download-artifact) from 6 to 7.
- [Release notes](https://github.com/dawidd6/action-download-artifact/releases)
- [Commits](dawidd6/action-download-artifact@v6...v7)

updated-dependencies:
- dependency-name: dawidd6/action-download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…nloaded files (#130) - Should fix issue #127. - Moved removal of incomplete downloads from the prune stage to the f/s read stage - Added a better error to the case where HSCDataSet arrives at an absurdly small size of image to crop to. Co-authored-by: Drew Oldag <[email protected]>
I cherry picked the change.
Change Description

This PR does the following:
- Adds a simple HSC Autoencoder as an example
- Shows how to use tensorboard when running it remotely on a server

This is currently marked as a draft because of an issue in the change I implemented in fibad_default_config.toml (see below).