
Virtual Detection Area in SensorData #733

Closed
jruebsam opened this issue Jun 27, 2023 · 9 comments
Labels
FeatureRequest Proposals which enhance the interface or add additional features.

Comments

@jruebsam
Contributor

Virtual Detection Area

Currently it is not possible to determine from a SensorData message what the current area of observation of a sensor model is.

I would like to have some kind of message, let's say a Virtual Detection Area, which describes the FOV in which the current sensor model operates. This could be a simple generic message containing a repeated list of points, e.g.:

Solution

message VirtualDetectionArea
{
    // List of points on the boundary of the detection area, sorted
    // counter-clockwise relative to the sensor mounting position and
    // projected onto the ground surface.
    repeated Vector2d boundary_point = 1;
}

Since this message is related to the output of a sensor model, I would like to add it to the SensorData message. This would also be important for our current use case, which is focused on the visualization of SensorData messages.
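
For illustration, a minimal sketch of how this could be embedded (the field name and number are placeholders, not an agreed part of the interface):

message SensorData
{
    // ... existing fields ...

    // Virtual detection area of the sensor model at the current
    // time step, as proposed above (placeholder field number).
    optional VirtualDetectionArea virtual_detection_area = 100;
}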

@jruebsam jruebsam added the FeatureRequest Proposals which enhance the interface or add additional features. label Aug 16, 2023
@jruebsam
Contributor Author

jruebsam commented Sep 6, 2023

@pmai @ThomasNaderBMW @PhRosenberger Any suggestions on how to proceed with this, or whether it should be discussed in a subgroup?

@PhRosenberger
Contributor

@jruebsam If I understand you correctly, you would like to know what the simulated sensor could have seen (a.k.a. the nominal FoV) in comparison to the actual output data? I wonder why it is restricted to a ground projection and not 3D, or even 4D (incl. velocity or intensity). Also, do you know our concept of a "Unified relevance region from sensor model knowledge" from this paper: https://tuprints.ulb.tu-darmstadt.de/18950/? This seems related to your thoughts, right?

Are you attending the GSVF next week? It would be great to discuss this in person there. :-)

@jruebsam
Contributor Author

jruebsam commented Sep 8, 2023

@PhRosenberger Thanks for the reply; yes, it would be something like a nominal FOV. In our use case we want to use this in a visualizer, which is why it is only 2D, since 3D FOVs can become quite confusing. Unfortunately I will not be able to join the GSVF, but thanks for the paper.

@PhRosenberger
Contributor

@jruebsam Oh, what a pity that we cannot meet this week!

However, for your use case, I see the existing GenericSensorView as an option: https://opensimulationinterface.github.io/osi-antora-generator/asamosi/latest/gen/structosi3_1_1GenericSensorView.html

What do you think?

@jruebsam
Contributor Author

jruebsam commented Oct 3, 2023

@PhRosenberger I thought about that; however, GenericSensorView only contains a FOV and a mounting position, but not a range. It could be an option to add some kind of extension at this location.

@PhRosenberger
Contributor

Yes, this could be a good way to solve this issue. Would you mind starting a PR for this with the extension on GenericSensorViewConfiguration?
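
For illustration, a minimal sketch of what such an extension might look like (the field number is a placeholder, and the exact semantics of the range would need to be agreed upon):

message GenericSensorViewConfiguration
{
    // ... existing FoV and mounting position fields ...

    // Maximum detection range of the generic sensor.
    //
    // Unit: m
    //
    optional double range = 100;
}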

@thomassedlmayer
Contributor

@PhRosenberger I'm not against adding a range field to GenericSensorViewConfiguration in general, as it would be a consistent addition to the FoV fields; I don't know why it wasn't added initially, though. But if it existed, I think it should only be used for the configuration of the input, which is quite different from an intended sensor model output range: requiring or providing only a specific excerpt of the ground truth to a sensor model is not the same as describing the model's intended output range.

Adding something like a VirtualDetectionArea to SensorData would IMO correctly indicate that it actually refers to the sensor model output.

But I also agree that we should at least offer 3D boundary points.
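
For illustration, a 3D variant of the proposed message might use the existing osi3 Vector3d type (a sketch, not an agreed design):

message VirtualDetectionArea
{
    // List of points on the boundary of the detection area in 3D,
    // sorted counter-clockwise relative to the sensor mounting
    // position.
    repeated Vector3d boundary_point = 1;
}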

@PhRosenberger
Contributor

PhRosenberger commented Jan 23, 2024

OK, I think we then have two different cases:

  1. The maximum (and theoretical) FoV of the sensor overall, which I see in the SensorViewConfig. Here we should add the range, I guess, to cover it.
  2. The actual FoV at each time step, in which the sensor is able to detect something, which could be addressed by the VirtualDetectionArea.

So we need two different PRs to solve this issue, right?

@pmai
Contributor

pmai commented Apr 5, 2024

Addressed by #781

@pmai pmai closed this as completed Apr 5, 2024