Lesionwise metrics for Competitions #866
Conversation
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
Hi @rachitsaluja - I am going to mark this PR as a draft. Please fix the failing tests, sign the CLA, and then mark this PR as ready for review.
Hi @rachitsaluja, you have indicated in your PR that the new additions have corresponding unit tests that cover all added code. However, the changed files do not indicate this: are you sure the unit tests have been updated to ensure that the new additions are invoked? Perhaps a commit push is missing somewhere? The coverage update should come up like this [ref]. I am marking this PR as a draft until the tests have been added and everything is passing.
@sarthakpati, I have added my tests now, but I am not sure why codecov is not being invoked. I have not worked with codecov before, so I am unaware of how it works.
@VukW Thanks for helping me out today. The new tests in the workflow passed as well. Still not sure why codecov is not invoked. |
This is extremely weird. I apologize for the confusion, @rachitsaluja. To track this problem and have a potential solution, I have opened a discussion post with them here: codecov/feedback#368 Additionally, I just pushed a dummy commit on another PR (8b92852) to test the code coverage.
@sarthakpati Thanks for the help! I appreciate it. |
hey @rachitsaluja, I still haven't heard from the codecov team about this. The test commit I did on the other PR worked as expected. Can I perhaps ask you to base your PR off of the new API branch instead of the current master? An additional reason behind this is that the current master API is due to be deprecated soon, and one of the major changes will be the way CLI apps are being invoked. I have a couple of additional suggestions to improve your PR:
@sarthakpati Thanks for your comments and your effort to work with codecov. Please find my replies to your questions below.
I can base my PR off of the new API branch. Could you send me the documentation for it? I don't know where all the CLI Python code went compared to the
I could probably do this, but I have concerns: the present
For this, I don't think integrating it within the same script is helpful to users. These lesion-wise metrics are carefully curated, with particular hyper-parameters for specific sub-challenges and future challenges, and they are not intended for normal use. The sheer difference in metrics across challenges will become apparent again this year, so I think this should remain a separate entity. Treating the competition code as a separate entity keeps things clear for users: it can be confusing for a few folks otherwise, and this way participants don't need to learn the GaNDLF workflow just to compute simple metrics. I will be happy to discuss more.
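For readers unfamiliar with the term, here is a toy sketch of what "lesion-wise" scoring means: each connected lesion in the ground truth is scored individually, and missed lesions are penalized. This is a minimal 2D, pure-Python illustration; the function names (`label_components`, `lesionwise_dice`), the 4-connectivity, and the matching rule are all illustrative assumptions, not the PR's actual implementation, which uses challenge-specific hyper-parameters.

```python
from collections import deque

def label_components(mask):
    """Label 4-connected components in a 2D binary grid (pure-Python BFS)."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and labels[r][c] == 0:
                count += 1
                labels[r][c] = count
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count

def lesionwise_dice(gt, pred):
    """Mean Dice over ground-truth lesions; an unmatched lesion scores 0."""
    gt_lab, n_gt = label_components(gt)
    pred_lab, n_pred = label_components(pred)
    if n_gt == 0:
        return 1.0 if n_pred == 0 else 0.0
    rows, cols = len(gt), len(gt[0])
    scores = []
    for lesion in range(1, n_gt + 1):
        gt_vox = {(r, c) for r in range(rows) for c in range(cols)
                  if gt_lab[r][c] == lesion}
        # Predicted components that touch this ground-truth lesion.
        touching = {pred_lab[r][c] for (r, c) in gt_vox if pred_lab[r][c]}
        pred_vox = {(r, c) for r in range(rows) for c in range(cols)
                    if pred_lab[r][c] in touching}
        denom = len(gt_vox) + len(pred_vox)
        scores.append(2 * len(gt_vox & pred_vox) / denom if denom else 0.0)
    return sum(scores) / len(scores)
```

With two ground-truth lesions, one segmented perfectly and one missed entirely, the mean is 0.5, whereas a plain global Dice would barely notice the missed small lesion; that asymmetry is the motivation for lesion-wise metrics in challenges.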
Replies are inline.
Great! There is no explicit documentation about the new API branch, yet. The branch itself is in https://github.com/mlcommons/GaNDLF/tree/new-apis_v0.1.0-dev. As I mentioned earlier, the major changes here are related to the CLI, and if you are extending the metrics submodule as suggested, you won't need to add a new CLI app.
I would suggest you keep in line with the interface that is already provided so that there is a uniform user experience. Otherwise, this will simply add to the maintenance responsibilities of the framework.
I understand that these metrics are for challenges, but even so, integrating them into a larger framework means that their maintenance and continued code support falls on the shoulders of the framework maintainers. For this reason, I would still recommend that you integrate it with the current generate metrics functionality.
GaNDLF's metrics module is fairly independent of the training/inference aspects of the framework [ref], and a user need not learn those to use the metrics. As a "compromise", might I suggest that if the config for contains
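The "compromise" above (truncated in the thread) appears to be a config-driven switch inside the existing generate-metrics entry point, rather than a separate CLI app. A minimal sketch of that shape follows; the key names (`metrics_mode`, `lesionwise_params`) and the placeholder metric functions are entirely hypothetical, not GaNDLF's actual schema.

```python
def compute_standard(pred, gt):
    # Placeholder: a single global overlap ratio stands in for the
    # framework's usual metric suite.
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    total = sum(pred) + sum(gt)
    return {"dice": 2 * inter / total if total else 1.0}

def compute_lesionwise(pred, gt, **params):
    # Placeholder for the competition routine; `params` would carry the
    # challenge-specific hyper-parameters mentioned in the thread.
    return {"lesionwise_dice": None, "params": params}

def generate_metrics(config, pred, gt):
    """Route to the lesion-wise code path only when the config asks for it."""
    if config.get("metrics_mode") == "lesionwise":  # hypothetical key
        return compute_lesionwise(pred, gt, **config.get("lesionwise_params", {}))
    return compute_standard(pred, gt)
```

The point of this shape is that challenge participants get the curated metrics by flipping one config key, while everyone else keeps the existing interface and the maintainers keep a single entry point.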
Pinging @rachitsaluja for any update.
Stale pull request message |
N/A
Proposed Changes

- Simple run can be performed -
Checklist

- CONTRIBUTING guide has been followed.
- typing is used to provide type hints, including and not limited to using Optional (if a variable has a pre-defined value).
- If a new dependency is introduced (i.e., a new pip install step is needed for PR to be functional), please ensure it is reflected in all the files that control the CI, namely: python-test.yml, and all docker files [1,2,3].