Detection metric release. #159
Conversation
assert dist_fn(df1, df2, DistFnType.TRANSLATION) == 75 ** (1 / 2)

def test_scale_distance() -> None:
1-line docstring would be helpful to explain the scenario and goal of the test
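For instance, a one-line docstring along these lines (the wording is a guess at the test's intent, based on the `1 - (inter / union)` computation elsewhere in this diff):

```python
def test_scale_distance() -> None:
    """Check that the scale distance between two matched detections equals 1 - IoU."""
    ...
```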
Done. We should add documentation for the metrics somewhere. Where do you think that should go?
What sort of documentation were you thinking of? I think comments inline in the code are great, and then in the EvalAI Evaluation tab we should add the equivalent of https://evalai.cloudcv.org/web/challenges/challenge-page/453/evaluation.
Let's add a README in the evaluation folder.
tests/test_eval_detection.py
Outdated
assert metrics.AOE.Means == 0

def _rotvec_to_quat(rotvec: R) -> R:
Can we go ahead and make this a function in argoverse.utils.transform, along with its counterpart quat2rotmat, and add a unit test for it?
Or maybe I misunderstood -- what is a rotvec? axis-angle parameterization?
Happy to change this if it fits in with the other code better. Yeah: https://en.wikipedia.org/wiki/Axis%E2%80%93angle_representation#Rotation_vector. Let me know what you think.
This doesn't need to be its own function in argoverse.utils.transform, since it uses https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.transform.Rotation.html. The thing that should be done is the quaternion re-ordering, which is the [3, 0, 1, 2] part. But we can resolve that in the other comment. Leaving this open so we remember to change it.
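A minimal sketch of that conversion, assuming (as the [3, 0, 1, 2] swap suggests) that quaternions are stored scalar-first, i.e. (w, x, y, z); the function name is illustrative:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotvec_to_quat_wxyz(rotvec: np.ndarray) -> np.ndarray:
    """Convert an axis-angle rotation vector to a scalar-first quaternion.

    scipy's `as_quat` returns scalar-last (x, y, z, w) components; the
    [3, 0, 1, 2] index swap moves the scalar term to the front.
    """
    return R.from_rotvec(rotvec).as_quat()[..., [3, 0, 1, 2]]

# The identity rotation should map to the quaternion (1, 0, 0, 0).
identity_quat = rotvec_to_quat_wxyz(np.zeros(3))
```

Because the swap is applied along the last axis, the same helper works on a single rotation vector or an (N, 3) batch.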
        scale_errors = 1 - (inter / union)
        return scale_errors
    elif metric == DistFnType.ORIENTATION:
        dt_yaws = R.from_quat(np.vstack(dt_df["quaternion"].array)[:, [3, 0, 1, 2]]).as_euler("xyz")[:, 2]
Instead of repeating the [3, 0, 1, 2] here, I would prefer to make quat2yaw (or a better name) an explicit function in argoverse.utils.transform. I like how you are using scipy for all of this. You could use this hacky function as a unit test on it:
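A hedged sketch of what such a quat2yaw-style helper could look like, again assuming scalar-first (w, x, y, z) quaternion storage (name and signature are illustrative):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def quat_to_yaw(quats_wxyz: np.ndarray) -> np.ndarray:
    """Extract yaw (rotation about +z) from an (N, 4) array of
    scalar-first quaternions.

    The [1, 2, 3, 0] swap converts (w, x, y, z) into the scalar-last
    (x, y, z, w) order scipy expects; with the "xyz" Euler sequence,
    yaw is the final angle.
    """
    return R.from_quat(quats_wxyz[:, [1, 2, 3, 0]]).as_euler("xyz")[:, 2]
```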
I agree. Let's settle the rotvec portion and this simultaneously.
I believe the Travis build is failing: https://travis-ci.org/github/argoai/argoverse-api
Thanks for the push! And thanks, John, for going over it. I haven't looked at it in detail yet, but as for the failing build, it looks like a timeout.
I've removed
I made a bunch of changes; I believe I've addressed all of the comments except the one about Pandas. I know in our discussion we decided to limit Pandas to the post-processing stage, but after looking back over the code I think it might be simpler to leave it as is. What do you guys think?
tests/test_eval_detection.py
Outdated
metrics = assign(dts, gts, cfg)
# if these assign correctly, we should get an ATE of 0.1 for the first two
expected_result: float = 0.1
assert np.isclose(metrics[0, 4], expected_result)
Do you mind explaining what [0, 4] means in the metrics dict? Maybe something like:

ATE_COL_IDX = 4
assert np.isclose(metrics[0, ATE_COL_IDX], expected_result)  # instance 0
assert np.isclose(metrics[1, ATE_COL_IDX], expected_result)  # instance 1

How is the 0, 1, 2 instance order determined?
tests/test_eval_detection.py
Outdated
# ensure the detections match at all thresholds, have 0 TP errors, and have AP = 1
assert (
    cls_to_accum["VEHICLE"]
    == np.array([[1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0], [1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0]])
Small nitpick -- it might be a bit easier to parse if we make the values here expected_ATE, expected_ASE, etc. instead of raw 0s and 1s, which I have a bit of a hard time following.
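That suggestion could look something like the sketch below; the column meanings and expected values are illustrative guesses at the accumulator layout, not its actual definition:

```python
import numpy as np

# Hypothetical named values for one accumulator row: four match flags
# (one per affinity threshold), three true-positive errors, then AP.
expected_ATE = 0.0
expected_ASE = 0.0
expected_AOE = 0.0
expected_AP = 1.0

row = [1.0, 1.0, 1.0, 1.0, expected_ATE, expected_ASE, expected_AOE, expected_AP]
expected = np.array([row, row])
```

Comparing `cls_to_accum["VEHICLE"]` against `expected` would then document which column carries which quantity.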
Provides a set of metrics to be used for detection evaluation.