
ADD Devernay-Faugeras FOV camera image distortion model #373

Open

ghost opened this issue Aug 12, 2023 · 1 comment

Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

ghost commented Aug 12, 2023

Hi! I would like to request the addition of the Devernay-Faugeras FOV camera image distortion model to Gazebo Garden.

Desired behavior

Currently, only the Brown-Conrady camera image distortion model is available in Gazebo Garden. However, many robotics applications rely on the Devernay-Faugeras FOV camera image distortion model because it represents fish-eye lenses more accurately and is supported by calibration tools such as Kalibr.

As a starting point, I would suggest the following alternative implementation of the OgreDistortionPass::Distort() function in gz/rendering/ogre/OgreDistortionPass:

//////////////////////////////////////////////////
gz::math::Vector2d OgreDistortionPass::Distort(
    const gz::math::Vector2d &_in,
    const gz::math::Vector2d &_center, double _k1, double _k2, double _k3,
    double _p1, double _p2, unsigned int _width, double _f)
{
  // apply the Devernay-Faugeras FOV camera image distortion model, see
  // "Devernay, F., Faugeras, O. Straight lines have to be straight.
  // Machine Vision and Applications 13, 14–24 (2001)."
  // DOI: 10.1007/PL00013269

  gz::math::Vector2d normalized2d = (_in - _center) * (_width / _f);
  gz::math::Vector3d normalized(normalized2d.X(), normalized2d.Y(), 0);
  double rSq = normalized.X() * normalized.X() +
               normalized.Y() * normalized.Y();

  // Calculate undistorted radius r_u
  double r_u = std::sqrt(rSq);

  // Ratio between distorted and undistorted radius. Defaults to 1
  // (identity) to avoid division by zero at the distortion center
  // and when the FOV parameter is zero.
  double ratio = 1.0;
  if (r_u > 1e-9 && std::abs(_k1) > 1e-9)
  {
    // Calculate distorted radius r_d
    double r_d = std::atan(2 * r_u * std::tan(_k1 / 2)) / _k1;
    ratio = r_d / r_u;
  }

  // Calculate distorted coordinates
  double X_d = ratio * normalized.X();
  double Y_d = ratio * normalized.Y();

  return ((_center * _width) +
    gz::math::Vector2d(X_d, Y_d) * _f) / _width;
}

where the k1 parameter from the Brown-Conrady distortion model acts as the w parameter (the field of view of the ideal fish-eye lens) of the Devernay-Faugeras FOV distortion model, the center parameter is made to match the cx and cy intrinsics, and the remaining parameters k2, k3, p1 and p2 are ignored.

The equations for the model are:

$$r_d = \frac{1}{w} \arctan\!\left(2\, r_u \tan\frac{w}{2}\right), \qquad r_u = \frac{\tan(r_d\, w)}{2 \tan\frac{w}{2}}$$

where r_u and r_d are the undistorted and distorted radial distances from the distortion center, and w is the FOV parameter.
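For illustration, an existing distortion SDF block could then carry w in the k1 slot, with the other parameters zeroed out (hypothetical values; the element names below are the ones already defined in camera.sdf):

<distortion>
  <!-- k1 reinterpreted as the FOV parameter w, in radians -->
  <k1>1.047</k1>
  <!-- remaining Brown-Conrady parameters are ignored by the FOV model -->
  <k2>0</k2>
  <k3>0</k3>
  <p1>0</p1>
  <p2>0</p2>
  <center>0.5 0.5</center>
</distortion>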

ghost added the enhancement label on Aug 12, 2023

iche033 commented Aug 14, 2023

I think that would work. There are also a few more things that'll need to be done in order for users to specify or change the type of distortion model to use in Gazebo. That will likely have to happen through SDF, e.g.

<distortion type="brown">
   ...
</distortion>

In sdformat, we'll need to add a type attribute to the distortion element:
https://github.com/gazebosim/sdformat/blob/7f63d022a48d3a224a3b7a777316c3822525c7b2/sdf/1.10/camera.sdf#L120
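A minimal sketch of what that could look like in camera.sdf, assuming the attribute defaults to brown for backward compatibility (attribute name, values, and default here are suggestions, not an agreed design):

<element name="distortion" required="0">
  <description>The properties of the distortion model.</description>
  <!-- proposed new attribute; "fov" would select Devernay-Faugeras -->
  <attribute name="type" type="string" default="brown" required="0">
    <description>The distortion model type: "brown" (Brown-Conrady,
    the current behavior) or "fov" (Devernay-Faugeras).</description>
  </attribute>
  <!-- existing k1, k2, k3, p1, p2 and center elements stay unchanged -->
</element>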

gz-sensors will need to parse this new type attribute, possibly when creating a new image distortion model. Currently there is the ImageBrownDistortionModel class that uses the rendering API to create the distortion effect and apply it to the camera. I think we would need to add a new API to set the distortion type.
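As a sketch of the parsing side, a hypothetical helper in gz-sensors could read the attribute and fall back to the current behavior (the function name and placement are assumptions):

#include <sdf/Element.hh>
#include <string>

// Hypothetical helper: read the proposed type attribute from a
// <distortion> element. Falls back to "brown" so existing SDF
// files keep their current behavior.
std::string DistortionType(const sdf::ElementPtr &_distortionElem)
{
  if (_distortionElem)
  {
    sdf::ParamPtr typeAttr = _distortionElem->GetAttribute("type");
    if (typeAttr)
      return typeAttr->GetAsString();
  }
  return "brown";
}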

azeey added the help wanted label on Aug 21, 2023
azeey removed this from Core development on Aug 21, 2023