[Question] Benchmarking FPS for camera rendering #100

Closed

sashwat-mahalingam opened this issue Jul 21, 2023 · 1 comment
Labels: documentation (Improvements or additions to documentation), question (Further information is requested)

Comments

@sashwat-mahalingam commented:
I recently set up a simple reaching environment in ORBIT's reach_env.py class, adding one pinhole camera to the environment and setting --num_envs 10 so that each environment instance has its own pinhole camera. When I profile this setup, I average 0.18 seconds per simulation step, which works out to 10 / 0.18 ≈ 55.5 FPS. I have experimented with values of --num_envs between 1 and 20 (the maximum before I exhaust my GPU memory), and I remain stuck at this value, 0.18 seconds, for rendering 10 cameras per step.
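For reference, this is roughly how I compute that number (a minimal sketch; `env` and `actions` stand in for my actual environment and action tensors, not a specific ORBIT API):

```python
import time

# Average the wall-clock time of a simulation step over many steps,
# then convert it to an effective camera FPS.
num_steps, num_cams = 100, 10

start = time.perf_counter()
for _ in range(num_steps):
    env.step(actions)  # physics step + camera buffer updates (step 5 below)
elapsed = time.perf_counter() - start

sec_per_step = elapsed / num_steps  # ~0.18 s in my runs
print(f"{sec_per_step:.3f} s/step -> {num_cams / sec_per_step:.1f} camera FPS")
```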

My workflow, all within reach_env.py, is as follows (a small profiling helper I use to sanity-check the per-camera cost is sketched after the list):

  1. Set up one pinhole camera under the `env_0` namespace in `__init__`:

```python
self.camera_cfg = PinholeCameraCfg(
    sensor_tick=0,
    height=480,
    width=640,
    data_types=["rgb", "distance_to_image_plane"],
    usd_params=PinholeCameraCfg.UsdCameraCfg(
        focal_length=24.0,
        focus_distance=400.0,
        horizontal_aperture=20.955,
        clipping_range=(0.1, 1.0e5),
    ),
)
self.camera = Camera(cfg=self.camera_cfg, device="cpu")
```
  2. Spawn the camera in `design_scene()` via `self.camera.spawn(self.template_env_ns + "/CameraSensor")`, right after the robot is spawned.

  3. In `initialize_views()`, initialize each grid-cloned environment instance's camera within that environment's namespace, and fill all camera buffers via `update()`:

```python
# self.camera is the env_0 camera created in step 1
self.cams = [self.camera] + [
    Camera(cfg=self.camera_cfg, device="cpu") for _ in range(self.num_envs - 1)
]

for i in range(self.num_envs):
    self.cams[i].initialize(self.env_ns + f"/env_{i}/CameraSensor/Camera")
    env_pos = self.envs_positions[i].cpu().numpy()
    self.cams[i].set_world_pose_from_view(
        eye=np.array([2, 2, 2]) + env_pos,
        target=np.array([0, 0, 0]) + env_pos,
    )
    self.cams[i].update(self.dt)
```
  4. After `sim.step()` is called in `__init__`, call `update()` again; otherwise I run into `NoneType` errors when trying to read the buffers later on:

```python
for i in range(self.num_envs):
    self.cams[i].update(self.dt)
```
  5. During `_step_impl()`, update each camera's buffer as described in step 3.

  6. To access camera observations, the observation manager invokes:

```python
def images(self, env):
    cam_dats = []
    for i in range(env.num_envs):
        # env.cams is the camera list built in step 3
        cam_dat = convert_dict_to_backend(env.cams[i].data.output, backend="torch")
        cam_dats.append(cam_dat["rgb"])
    return torch.stack(cam_dats, dim=0)
```
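To sanity-check whether the cost scales with the number of cameras or is a fixed overhead, I time each camera's `update()` individually with a small helper of my own (hypothetical name; it relies only on the `update(dt)` call used in the steps above):

```python
import time

def profile_camera_updates(cams, dt):
    """Time each camera's update() call separately and print a summary."""
    per_cam = []
    for cam in cams:
        t0 = time.perf_counter()
        cam.update(dt)  # render + buffer readout for this camera
        per_cam.append(time.perf_counter() - t0)
    print(f"total: {sum(per_cam):.3f} s, "
          f"per camera: {', '.join(f'{t:.3f}' for t in per_cam)}")
```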

Is there an efficiency technique I am missing in my implementation? Also, how was the benchmarking experiment implemented that reported 270 FPS with ten cameras recording observations? Thanks!

@Mayankm96 (Contributor) commented:

Hi @sashwat-mahalingam ,

We checked the scripts and realized that the numbers reported earlier on the website correspond to the rendering frequency alone, which does not account for the overhead of reading the rendered buffers into the cameras' data buffers. The FPS you are getting is in line with what we expect as well. We are waiting for multi-camera support in Isaac Sim before providing better-quality numbers on camera rendering and data readout.
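Roughly speaking, the two measurements differ as in the sketch below (illustrative only, not the actual benchmark script; it assumes a `SimulationContext` named `sim` plus the `cams` list and `dt` from your snippets):

```python
import time

# What the old website number measured: rendering frequency only.
t0 = time.perf_counter()
sim.render()
t_render = time.perf_counter() - t0

# End-to-end camera FPS: rendering plus reading out each camera's buffers.
t0 = time.perf_counter()
for cam in cams:
    cam.update(dt)
t_full = time.perf_counter() - t0

print(f"render-only: {1.0 / t_render:.1f} FPS, "
      f"render + readout: {len(cams) / t_full:.1f} FPS")
```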

Sorry for the confusion on this!

Mayankm96 added the documentation and question labels on Jul 26, 2023
Mayankm96 added a commit that referenced this issue Aug 8, 2023
# Description

Earlier, the `TerrainGeneratorCfg` and `TerrainImporterCfg` were in the same file. However, this led to circular-dependency issues when referring to the `TerrainImporter` as an attribute of the `TerrainImporterCfg` (i.e., providing the class name as a member of the config object).

The MR fixes the above circular dependency. It also moves all the terrain parameters into the configuration object, making terrain initialization consistent with the other asset constructors.
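A minimal sketch of the kind of cycle being fixed (module and attribute names here are illustrative, not the actual orbit files):

```python
# terrain_importer_cfg.py -- the cfg module no longer imports the importer.
from dataclasses import dataclass

@dataclass
class TerrainImporterCfg:
    # Referencing the importer class directly (class_type = TerrainImporter)
    # would force this module to import terrain_importer, while
    # terrain_importer already imports this cfg -- a circular dependency.
    # Keeping the reference lazy (here, by name) breaks the cycle.
    class_type: str = "TerrainImporter"
    size: tuple = (8.0, 8.0)  # example of a terrain parameter moved onto the cfg

# terrain_importer.py then imports in one direction only:
#   from terrain_importer_cfg import TerrainImporterCfg
```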

## Type of change

- Breaking change (fix or feature that would cause existing
functionality to not work as expected)

## Checklist

- [x] I have run the [`pre-commit` checks](https://pre-commit.com/) with
`./orbit.sh --format`
- [ ] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my
feature works
- [x] I have updated the changelog and the corresponding version in the
extension's `config/extension.toml` file
Mayankm96 added a commit that referenced this issue Dec 22, 2023 (same description as the Aug 8 commit above)