In Table 8 of Uni3D, a generalization study is conducted by evaluating zero-shot detection accuracy on KITTI. Since two separate detection heads are trained and dual-BN is leveraged for nuScenes and Waymo during pre-training, what are the implementation details for conducting zero-shot detection on KITTI, and which detection head is used?
Note that Table 8 in Uni3D reports the AP_{BEV} and AP_{3D} of the car category at IoU = 0.7. The experimental setting follows that of ST3D, where the baseline model is trained on a source domain (e.g., Waymo) and tested on a target domain (e.g., KITTI). For example, the first and fourth rows report the results of a PV-RCNN model pre-trained on Waymo and evaluated on KITTI (car category only). Similarly, we can use Uni3D to perform joint training on Waymo and nuScenes and then directly test on the KITTI dataset, as shown in the third row of Table 8.
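To make the dual-BN question concrete, here is a minimal sketch of what dataset-specific batch normalization looks like in principle: one set of running statistics per pre-training dataset, with one branch's statistics reused when running zero-shot inference on an unseen domain such as KITTI. The class name, the branch names, and the choice of which branch to use at test time are all assumptions for illustration, not the actual Uni3D implementation.

```python
import numpy as np

class DualBN:
    """Illustrative dataset-specific ("dual") batch norm: one set of
    running statistics per pre-training dataset. Not the Uni3D code."""

    def __init__(self, num_features, datasets=("waymo", "nuscenes"), eps=1e-5):
        self.eps = eps
        # Separate (mean, var) statistics per dataset branch.
        self.stats = {
            d: {"mean": np.zeros(num_features), "var": np.ones(num_features)}
            for d in datasets
        }

    def __call__(self, x, dataset):
        # Normalize features using the statistics of the chosen branch.
        s = self.stats[dataset]
        return (x - s["mean"]) / np.sqrt(s["var"] + self.eps)

# Zero-shot inference on an unseen domain (e.g., KITTI) would have to
# pick one of the existing branches' statistics; which branch Uni3D
# actually uses is exactly the question asked above.
bn = DualBN(4)
features = np.random.randn(2, 4)
out = bn(features, "waymo")
```

The same ambiguity applies to the two detection heads: at zero-shot test time, one of the dataset-specific heads (or some combination of their outputs) must be selected, which is the detail the question asks the authors to clarify.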