Detection metric release. #159

Merged: 116 commits into argoverse:master on Oct 8, 2020
Conversation

benjaminrwilson (Contributor):

Provides a set of metrics to be used for detection evaluation.

Review thread on tests/test_eval_detection.py:
assert dist_fn(df1, df2, DistFnType.TRANSLATION) == 75 ** (1 / 2)


def test_scale_distance() -> None:
Contributor:
1-line docstring would be helpful to explain the scenario and goal of the test
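
For illustration, a minimal sketch of the kind of one-line docstring being asked for; the body below simply recomputes the scale-error definition used later in this PR (1 - intersection/union) for two identically sized boxes, and is not the project's actual test code:

def test_scale_distance() -> None:
    """Ensure the scale error (1 - intersection/union of box dimensions) is 0 for identically sized boxes."""
    inter, union = 8.0, 8.0  # e.g. two axis-aligned 2 x 2 x 2 boxes of identical dimensions
    assert 1 - (inter / union) == 0.0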

Contributor:
Done. We should add documentation for the metrics somewhere. Where do you think that should go?

Contributor:
What sort of documentation were you thinking of? I think comments inline in the code are great, and then in the EvalAI Evaluation tab we should add the equivalent of https://evalai.cloudcv.org/web/challenges/challenge-page/453/evaluation.

benjaminrwilson (Author):
Let's add a README in the evaluation folder.

assert metrics.AOE.Means == 0


def _rotvec_to_quat(rotvec: R) -> R:
Contributor:
Can we go ahead and make this a function in argoverse.utils.transform, alongside its counterpart quat2rotmat, and add a unit test for it?

Contributor:
Or maybe I misunderstood -- what is a rotvec? axis-angle parameterization?

benjaminrwilson (Author):
Happy to change this if it fits in with the other code better. Yeah: https://en.wikipedia.org/wiki/Axis%E2%80%93angle_representation#Rotation_vector. Let me know what you think.

seanremy (Contributor), Sep 10, 2020:
This doesn't need to be its own function in argoverse.utils.transform since it uses https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.transform.Rotation.html. The thing that should be done is the quaternion re-ordering, which is the [3, 0, 1, 2] part. But we can resolve that in the other comment. Leaving this open so we remember to change it.
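
For reference, a minimal sketch of what the conversion under discussion amounts to when delegated to scipy; the helper name and the (w, x, y, z) target ordering are assumptions for illustration, not the repository's actual API:

import numpy as np
from scipy.spatial.transform import Rotation as R


def rotvec_to_quat_wxyz(rotvec: np.ndarray) -> np.ndarray:
    """Convert an axis-angle rotation vector to a quaternion in (w, x, y, z) order."""
    # scipy returns quaternions in (x, y, z, w) order; re-indexing with [3, 0, 1, 2]
    # yields (w, x, y, z) -- this is the re-ordering step mentioned above.
    quat_xyzw = R.from_rotvec(rotvec).as_quat()
    return quat_xyzw[[3, 0, 1, 2]]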

scale_errors = 1 - (inter / union)
return scale_errors
elif metric == DistFnType.ORIENTATION:
dt_yaws = R.from_quat(np.vstack(dt_df["quaternion"].array)[:, [3, 0, 1, 2]]).as_euler("xyz")[:, 2]
Contributor:
Instead of repeating the [3, 0, 1, 2] here, I would prefer to make the quat2yaw (or a better name) an explicit function in argoverse.utils.transform. I like how you are using scipy for all of this. You could use this hacky function as a unit test for it:

benjaminrwilson (Author):
I agree. Let's settle the rotvec portion and this simultaneously.
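
Along those lines, a hedged sketch of what an explicit yaw helper in argoverse.utils.transform could look like; the function name, and the assumption that inputs are already in scipy's (x, y, z, w) order, are illustrative only:

import numpy as np
from scipy.spatial.transform import Rotation as R


def quat_xyzw_to_yaw(quats_xyzw: np.ndarray) -> np.ndarray:
    """Extract yaw (rotation about the z-axis) from an (N, 4) array of (x, y, z, w) quaternions."""
    return R.from_quat(quats_xyzw).as_euler("xyz")[:, 2]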

johnwlambert (Contributor), Sep 5, 2020:
I believe the Travis build is failing: https://travis-ci.org/github/argoai/argoverse-api

tqdm, pandas, and click are not included in our current dependencies, if I remember right.

janesjanes (Contributor):
Thanks for the push! And thanks, John, for going over it. I haven't looked at it in detail yet, but the build failure looks like a timeout at the tox step. If it passes locally on your machine, then maybe we can try increasing the timeout from 10 minutes (30 minutes works for me).

benjaminrwilson (Author):
> I believe the Travis build is failing: https://travis-ci.org/github/argoai/argoverse-api
>
> tqdm, pandas, and click are not included in our current dependencies, if I remember right.

I've removed tqdm for now --- the multiprocessing implementation was causing an issue when running locally. Whether to keep pandas and click is probably worth further discussion.
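
If the team does decide to keep pandas and click, declaring them would be a small setup.py change; the fragment below is a hypothetical sketch (the package roles are taken from this thread, everything else is assumed rather than copied from the repository's setup.py):

from setuptools import find_packages, setup

setup(
    name="argoverse",
    packages=find_packages(),
    install_requires=[
        # ... existing runtime dependencies ...
        "pandas",  # used by the detection evaluation code
        "click",   # command-line entry point for the evaluator (assumed usage)
    ],
)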

seanremy (Contributor):
I made a bunch of changes; I believe I've addressed all of the comments except the one about pandas.

I know in our discussion we decided to limit pandas to the post-processing stage, but after looking back over the code, I think it might be simpler to leave it as is. What do you guys think?

metrics = assign(dts, gts, cfg)
# if these assign correctly, we should get an ATE of 0.1 for the first two
expected_result: float = 0.1
assert np.isclose(metrics[0, 4], expected_result)
johnwlambert (Contributor), Oct 5, 2020:
Do you mind explaining what [0, 4] means in the metrics dict? Maybe something like:

ATE_COL_IDX = 4
assert np.isclose(metrics[0, ATE_COL_IDX], expected_result) # instance 0
assert np.isclose(metrics[1, ATE_COL_IDX], expected_result) # instance 1

How is the 0, 1, 2 instance order determined?

# ensure the detections match at all thresholds, have 0 TP errors, and have AP = 1
assert (
cls_to_accum["VEHICLE"]
== np.array([[1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0], [1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0]])
Contributor:
Small nitpick -- it might be a bit easier to parse if we name the values here expected_ATE, expected_ASE, etc. instead of raw 0s and 1s, which I have a bit of a hard time following.
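
A hedged sketch of what that naming could look like; the column layout (a match flag per affinity threshold, then the three true-positive errors, then AP) is inferred from the comment in the snippet above and is an assumption, not the actual schema:

import numpy as np

expected_matches = [1.0, 1.0, 1.0, 1.0]                    # matched at every affinity threshold
expected_ATE, expected_ASE, expected_AOE = 0.0, 0.0, 0.0   # zero true-positive errors
expected_AP = 1.0
expected_row = expected_matches + [expected_ATE, expected_ASE, expected_AOE, expected_AP]
expected = np.array([expected_row, expected_row])
# assert (cls_to_accum["VEHICLE"] == expected).all()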

johnwlambert merged commit 1dfc03e into argoverse:master on Oct 8, 2020.