Enhance XAI (CLI, ExplainParameters, test) #1941
Conversation
tests/e2e/test_api_xai_sanity.py (outdated)
"SSD": (13, 13), | ||
"YOLOX": (13, 13), | ||
} | ||
|
||
@e2e_pytest_api | ||
def test_inference_xai(self): | ||
with tempfile.TemporaryDirectory() as temp_dir: | ||
hyper_parameters, model_template = self.setup_configurable_parameters(DEFAULT_DET_TEMPLATE_DIR, num_iters=2) | ||
hyper_parameters, model_template = self.setup_configurable_parameters( | ||
DEFAULT_DET_TEMPLATE_DIR, num_iters=100 |
Do we need 100 iters for the test code? I think it's quite long, even with e2e.
Agree with @JihwanEom's opinion. Could we reduce this parameter?
Sorry for not paying the required attention to it. I reduced it to 15. I tested it, and this is the minimal number of iters to get a somewhat trained model. Is that OK for e2e?
@@ -35,6 +39,12 @@
    "train_params": ["params", "--learning_parameters.num_iters", "1", "--learning_parameters.batch_size", "4"],
}

num_iters_per_model = {
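For context, a hedged sketch of how a per-model iteration mapping like the one added above might feed the CLI training parameters. Only the num_iters_per_model name comes from the diff; the model keys, values, and helper function below are illustrative assumptions, not the actual contents of this PR.

# Illustrative only: keys, values, and the helper are assumptions.
num_iters_per_model = {
    "MobileNetV2-ATSS": 15,
    "SSD": 15,
    "YOLOX": 15,
}

def build_train_params(model_name):
    """Build the learning-parameter CLI args with a per-model iteration budget."""
    num_iters = num_iters_per_model.get(model_name, 1)  # fall back to 1 iter
    return [
        "params",
        "--learning_parameters.num_iters", str(num_iters),
        "--learning_parameters.batch_size", "4",
    ]

print(build_train_params("SSD"))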
Should we have the long iterations below in the integration tests?
On the same point, we need to reduce the elapsed time for integration tests. IMO, we just need to check functionality in the integration tests executed at every PR. So, how about reducing num_iters?
Actually, there is a functionality in XAI that is supposed to generate saliency maps only for predictions, and I need a trained model to test it. If the model is not trained, there are no confident predictions (nothing passes the threshold).
As a solution, I moved this check into the e2e tests, keeping 1 iter for tests/integration/cli/detection/test_detection.py.
Does that work for you?
Moving it into e2e looks good to me.
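To illustrate the point above, here is a small self-contained sketch (assumed logic, not the actual OTX implementation) of why an untrained model defeats the "saliency maps only for predictions" check: nothing passes the confidence threshold, so there are no maps to validate.

from dataclasses import dataclass, field

@dataclass
class Prediction:
    label: str
    score: float
    saliency_map: list = field(default_factory=list)  # e.g. a 2D array

def maps_for_confident_predictions(predictions, threshold=0.35):
    """Keep saliency maps only for predictions that pass the score threshold (threshold value is assumed)."""
    return [p.saliency_map for p in predictions if p.score >= threshold]

# An untrained model produces only low-confidence predictions, so the result is
# empty and the per-prediction saliency-map behaviour cannot be exercised:
untrained = [Prediction("car", 0.05, [[0, 1], [1, 0]])]
trained = [Prediction("car", 0.92, [[0, 1], [1, 0]])]
print(len(maps_for_confident_predictions(untrained)))  # 0
print(len(maps_for_confident_predictions(trained)))    # 1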
Just a question: is it difficult to use InferenceParameters as before? Adding a new entity to otx.api affects the further Geti interface, so I prefer using the pre-existing one.
Thanks for your work! Mostly LGTM. But I want to ask you to check whether the current test iteration count can be lowered, as Jihwan said.
As you can see, currently the task.infer API is still loaded with the explain functionality, so it works for that case. On the other side, there are many things that can potentially be parametrized in XAI, therefore I want to keep ExplainParameters separated from InferenceParameters.
Thanks for the kind explanation.
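For readers following along, a hedged sketch of what such an ExplainParameters entity could look like. The two fields mirror the new CLI flags named in this PR, but the actual otx.api definition, field names, and defaults may differ.

from dataclasses import dataclass

@dataclass
class ExplainParameters:
    """Parameters of the explain task, kept separate from InferenceParameters (sketch)."""
    # Post-process raw saliency maps: resize to input resolution and apply a colormap.
    process_saliency_maps: bool = False
    # Generate saliency maps for all classes instead of only the predicted ones.
    explain_all_classes: bool = False

params = ExplainParameters(process_saliency_maps=True)
print(params)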
tests/unit/algorithms/classification/test_xai_classification_validity.py (resolved)
Thank you for your work! Added my 5 cents from my side.
otx/cli/tools/explain.py (outdated)
# "-w", | ||
"--process-saliency-maps", | ||
action="store_true", | ||
help="Processing of saliency map includes (1) resize to input image resolution and (2) apply a colormap." |
help="Processing of saliency map includes (1) resize to input image resolution and (2) apply a colormap." | |
help="Processing of saliency map includes (1) resizing to input image resolution and (2) applying a colormap." |
@@ -65,18 +68,61 @@ def get_args():
    "For Openvino task, default method will be selected.",
)
parser.add_argument(
    # "-w",
    "--process-saliency-maps",
Can you please add these parameters to the documentation (CLI commands: Explain) to keep it up to date?
Also, could you consider updating the Changelog with your CLI update as well?
LGTM, but please resolve conflicts and update the CHANGELOG as Galina mentioned. I believe the documentation update could be done in another PR.
TODO: rebase and update changelog
@JihwanEom @wonjuleee @sungmanc @GalyaZalesskaya Please review. I already rebased this PR three times due to conflicts; I would like to avoid further rebasing if possible. Many thanks!
@negvet Could you create another PR to resolve the pre-commit issue? https://github.com/openvinotoolkit/training_extensions/actions/runs/4619805865/jobs/8169044264
Summary
Update the XAI CLI: add --process-saliency-maps and --explain-all-classes.
Add an ExplainParameters entity used for the explain task.
Recover the XAI tests and make them runnable in CI.
Minor refactoring (e.g. rename DetSaliencyMap to DetClassProbabilityMap, which is more specific and precise).
Sorry for putting it all in the same PR.
How to test
Corresponding tests were added to integration/cli and e2e.
Checklist
- Based on the develop branch
License
Feel free to contact the maintainers if that's a concern.