Enhance XAI (CLI, ExplainParameters, test) #1941

Merged: 8 commits merged into openvinotoolkit:develop on Apr 5, 2023

Conversation

negvet
Collaborator

@negvet negvet commented Mar 24, 2023

Summary

Update the XAI CLI: add --process-saliency-maps and --explain-all-classes.
Add the ExplainParameters entity used for the explain task (a sketch of the new entity is shown below).
Recover the XAI tests and make them runnable in CI.
Minor refactoring (e.g., rename DetSaliencyMap to DetClassProbabilityMap, which is more specific and precise).
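
For readers unfamiliar with the new entity, here is a minimal sketch of what ExplainParameters could look like, assuming a plain dataclass whose fields mirror the two new CLI flags; the actual definition lives in otx.api and may differ in names and defaults.

from dataclasses import dataclass


@dataclass
class ExplainParameters:
    """Illustrative sketch only: parameters controlling the explain task.

    Field names mirror the new CLI flags; see otx.api for the real entity.
    """

    # If True, post-process raw saliency maps: resize them to the input image
    # resolution and apply a colormap for visualization.
    process_saliency_maps: bool = False

    # If True, generate saliency maps for all classes instead of only the
    # predicted (confident) ones.
    explain_all_classes: bool = False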

Sorry for putting it all in the same PR.

How to test

Corresponding tests were added to integration/cli and e2e.

Checklist

License

  • I submit my code changes under the same MIT License that covers the project.
    Feel free to contact the maintainers if that's a concern.
  • I have updated the license header for each file (see an example below)
# Copyright (C) 2023 Intel Corporation
#
# SPDX-License-Identifier: MIT

@negvet negvet requested a review from a team as a code owner March 24, 2023 14:25
@github-actions bot added the ALGO (Any changes in OTX Algo Tasks implementation), API (Any changes in OTX API), CLI (Any changes in OTE CLI), and TEST (Any changes in tests) labels Mar 24, 2023
"SSD": (13, 13),
"YOLOX": (13, 13),
}

@e2e_pytest_api
def test_inference_xai(self):
with tempfile.TemporaryDirectory() as temp_dir:
hyper_parameters, model_template = self.setup_configurable_parameters(DEFAULT_DET_TEMPLATE_DIR, num_iters=2)
hyper_parameters, model_template = self.setup_configurable_parameters(
DEFAULT_DET_TEMPLATE_DIR, num_iters=100
Contributor

Do we need 100 iters for test code? I think it's quite long, even with e2e.

Contributor

I agree with @JihwanEom's opinion. Could we reduce this parameter?

Collaborator Author

Sorry for not paying enough attention to this. I reduced it to 15. I tested it, and this is the minimal number of iters to get a somewhat trained model. Is that OK for e2e?

@@ -35,6 +39,12 @@
    "train_params": ["params", "--learning_parameters.num_iters", "1", "--learning_parameters.batch_size", "4"],
}

num_iters_per_model = {
Contributor

Should we have the long iterations below in the integration tests?

Contributor

On the same point, we need to reduce the elapsed time of the integration tests.

IMO, we just need to check functionality in the integration tests executed at every PR.
So, how about reducing num_iters?

Collaborator Author

Actually, there is functionality in XAI that is supposed to generate saliency maps only for predictions. I need a trained model to test it: if the model is not trained, there are no confident predictions (nothing passes the threshold).

As a solution, I moved this check into the e2e tests, keeping 1 iter for tests/integration/cli/detection/test_detection.py. Does that work for you?
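
For illustration only, here is a rough sketch of the kind of confidence-based filtering described above; the function name, threshold value, and array shapes are assumptions, not the actual OTX implementation.

import numpy as np


def keep_saliency_maps_for_confident_classes(
    saliency_maps: np.ndarray,  # assumed shape: (num_classes, height, width)
    class_scores: np.ndarray,   # assumed shape: (num_classes,), best score per class
    confidence_threshold: float = 0.35,  # hypothetical threshold
) -> dict:
    """Return saliency maps only for classes with a confident prediction."""
    confident_class_ids = np.where(class_scores >= confidence_threshold)[0]
    # With an untrained model no class passes the threshold, so the result is
    # empty, which is why this check needs at least a minimally trained model.
    return {int(class_id): saliency_maps[class_id] for class_id in confident_class_ids}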

Contributor

Moving it into e2e looks good to me

Resolved review threads (outdated):
tests/test_suite/run_test_command.py (5 threads)
otx/cli/tools/explain.py (2 threads)
Contributor

@sungmanc sungmanc left a comment

Just a question: is it difficult to use InferenceParameters as before?

Adding a new entity to otx.api affects the future Geti interface, so I prefer using the pre-existing one.

otx/api/entities/inference_parameters.py (review thread resolved)
"SSD": (13, 13),
"YOLOX": (13, 13),
}

@e2e_pytest_api
def test_inference_xai(self):
with tempfile.TemporaryDirectory() as temp_dir:
hyper_parameters, model_template = self.setup_configurable_parameters(DEFAULT_DET_TEMPLATE_DIR, num_iters=2)
hyper_parameters, model_template = self.setup_configurable_parameters(
DEFAULT_DET_TEMPLATE_DIR, num_iters=100
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Agree with @JihwanEom 's opinion. could we reduce this parameter?

tests/e2e/test_api_xai_sanity.py (review thread resolved, outdated)

@sungmanc sungmanc added this to the 1.2.0 milestone Mar 27, 2023
Contributor

@eunwoosh eunwoosh left a comment

Thanks for your work! Mostly LGTM, but I would like to ask you to check whether the current test iteration count can be lowered, as Jihwan said.

@negvet
Collaborator Author

negvet commented Mar 27, 2023

Just a question: is it difficult to use InferenceParameters as before?
Adding a new entity to otx.api affects the future Geti interface, so I prefer using the pre-existing one.

InferenceParameters is still used, but for task.infer().
ExplainParameters is used only for task.explain(), the new API. I believe the new interface allows us to introduce new objects of our choice to parametrize it (Geti is not using the explain API yet). Do you agree?

As you can see, the task.infer API is currently still loaded with explain functionality, so for it InferenceParameters includes both inference and explain parameters (I do not use the new ExplainParameters for task.infer).

On the other hand, ExplainParameters will be used specifically for XAI purposes by task.explain. When task.explain is used by Geti (1.4-1.5?), the explain load will be removed from task.infer (together with the explain parameters in InferenceParameters) to make it faster.

Plus, there are many things that can potentially be parametrized in XAI, therefore I want to keep it separate from InferenceParameters.
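
A rough usage sketch of the split described above follows; the exact signatures of task.infer and task.explain, as well as the constructor arguments, are assumptions and may differ from the merged code.

from otx.api.entities.inference_parameters import InferenceParameters


def run_infer_and_explain(task, dataset, explain_parameters):
    """Contrast the two entry points (illustrative sketch, not the real API)."""
    # Plain inference keeps using the pre-existing InferenceParameters entity,
    # so the Geti-facing interface is unchanged.
    predictions = task.infer(dataset, InferenceParameters())

    # The explain path is parametrized by its own XAI-specific entity, so new
    # explain options can be added without touching InferenceParameters.
    explained_dataset = task.explain(dataset, explain_parameters)
    return predictions, explained_dataset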

@sungmanc
Contributor

Thanks for the kind explanation.

eunwoosh
eunwoosh previously approved these changes Mar 28, 2023
Contributor

@GalyaZalesskaya GalyaZalesskaya left a comment

Thank you for your work! I've added a few comments from my side.

# "-w",
"--process-saliency-maps",
action="store_true",
help="Processing of saliency map includes (1) resize to input image resolution and (2) apply a colormap."
Contributor

Suggested change:
- help="Processing of saliency map includes (1) resize to input image resolution and (2) apply a colormap."
+ help="Processing of saliency map includes (1) resizing to input image resolution and (2) applying a colormap."

@@ -65,18 +68,61 @@ def get_args():
        "For Openvino task, default method will be selected.",
    )
    parser.add_argument(
        # "-w",
        "--process-saliency-maps",
Contributor

Can you please add these parameters to the documentation (CLI commands: Explain) to keep it up to date?

Also, could you consider updating the Changelog with your CLI update as well?
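
For reference, a sketch of how the two flags might be registered in otx/cli/tools/explain.py; the first help string is taken from the snippet above, while the function name and the second help string are illustrative assumptions.

import argparse


def add_explain_cli_args(parser: argparse.ArgumentParser) -> None:
    """Register the new explain-related CLI flags (illustrative sketch)."""
    parser.add_argument(
        "--process-saliency-maps",
        action="store_true",
        help="Processing of saliency map includes (1) resizing to input image "
        "resolution and (2) applying a colormap.",
    )
    parser.add_argument(
        "--explain-all-classes",
        action="store_true",
        # Hypothetical help text; the merged wording may differ.
        help="Generate saliency maps for all classes, not only the predicted ones.",
    )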

wonjuleee
wonjuleee previously approved these changes Mar 28, 2023
Contributor

@wonjuleee wonjuleee left a comment

LGTM, but please resolve the conflicts and update the CHANGELOG as Galina mentioned. I believe the documentation update could be done in a separate PR.

@negvet negvet dismissed stale reviews from wonjuleee and eunwoosh via be685cb March 30, 2023 11:15
@github-actions bot added the DOC (Improvements or additions to documentation) label Mar 30, 2023
@negvet
Collaborator Author

negvet commented Mar 30, 2023

TODO: rebase and update changelog

@negvet
Collaborator Author

negvet commented Apr 5, 2023

@JihwanEom @wonjuleee @sungmanc @GalyaZalesskaya Please review. I have already rebased this PR three times due to conflicts, and I would like to avoid further rebasing if possible. Many thanks!

@negvet negvet requested a review from JihwanEom April 5, 2023 10:45
@sovrasov sovrasov merged commit 40094c2 into openvinotoolkit:develop Apr 5, 2023
@JihwanEom
Contributor

@negvet Could you create another PR to resolve the pre-commit issue? https://github.com/openvinotoolkit/training_extensions/actions/runs/4619805865/jobs/8169044264

@negvet negvet deleted the et/xai_enhance branch July 18, 2023 14:57