Hello @AutoVision-cloud, thanks for releasing this nice work.
I have a question regarding the speed of PointPillars. Since you report that PointPillars-FSA (and -DSA) run with nearly half the GFLOPs of the baseline, I assume this means your model should also have faster inference than the baseline. If my assumption is correct, do you know how much faster it can run?
I tested PointPillars-FSA on custom data and compared its speed with the base PointPillars, but I didn't see any difference in speed (my timing setup is sketched below).
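In case my measurement is the problem, this is roughly how I timed both models. It's a minimal sketch assuming a PyTorch model on GPU; `model` and `batch` stand in for the detector and a preprocessed input sample, not for actual names in this repo.

```python
import time
import torch

@torch.no_grad()
def measure_latency(model, batch, warmup=20, iters=100):
    """Average inference time per forward pass, in milliseconds."""
    model.eval()
    for _ in range(warmup):       # warm-up runs to stabilize cuDNN autotuning and GPU clocks
        model(batch)
    torch.cuda.synchronize()      # flush queued kernels before starting the timer
    start = time.perf_counter()
    for _ in range(iters):
        model(batch)
    torch.cuda.synchronize()      # wait for all asynchronous GPU work to finish
    return (time.perf_counter() - start) / iters * 1000.0
```

Without the `synchronize()` calls the timer mostly measures kernel launch overhead, so two models can look identical even when their GPU workloads differ.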
It's also not clear to me how PointPillars-FSA could perform fewer GFLOPs when you are adding extra layers to the baseline; can you please explain? Or, for faster speed, should I pass only the context features to the BEVEncoder and ignore the pillar features, thus avoiding the concatenation of pillar and context features? The sketch below shows the two options I mean.
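To make that last question concrete (a sketch only, not the repo's actual code; the tensor names and channel sizes are hypothetical):

```python
import torch

# Hypothetical shapes: 12000 pillars, 64-dim pillar features, 64-dim context features.
pillar_feats = torch.randn(12000, 64)    # per-pillar features from the pillar feature net
context_feats = torch.randn(12000, 64)   # per-pillar features from the self-attention (FSA) module

# Option A -- what I understand the current model does: concatenate both streams,
# so the BEVEncoder receives 128 input channels.
bev_input_a = torch.cat([pillar_feats, context_feats], dim=1)   # shape (12000, 128)

# Option B -- what I'm asking about: feed only the context features,
# keeping the BEVEncoder input at 64 channels and skipping the concat.
bev_input_b = context_feats                                     # shape (12000, 64)
```

If Option B is safe, halving the BEVEncoder's input channels should cut its FLOPs, but I'm not sure whether accuracy would suffer.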