Odom frame does not exist when running Tutorial with realsense, VSLAM and nvblox #34
By the way, I'm using a Jetson AGX Xavier with JetPack 5.0.2 and a RealSense D435i.
Hi there. Thank you for the message and for your interest in nvblox. I think your diagnosis is correct: the issue is that vslam hasn't started. The (annoying) messages on the console appear because nvblox checks at 100 Hz whether the sensor pose is available, and it can't find the transform from 'odom' to the camera. Could you just check a few things?
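For reference, checks along these lines can be scripted as a sketch like the following. It assumes a sourced ROS 2 Humble environment inside the Isaac ROS container, and `camera_link` is an assumed frame name; substitute the camera frame from your own TF tree.

```shell
# Diagnostic sketch: is vslam up, and is it publishing the odom frame?
# Falls back to a message when ros2 is not on the PATH (e.g. outside the container).
if command -v ros2 >/dev/null 2>&1; then
  ros2 node list                                # both vslam and nvblox nodes should appear
  ros2 topic list | grep -E 'infra|odom'        # inputs/outputs of interest
  ros2 run tf2_ros tf2_echo odom camera_link    # streams transforms; Ctrl-C to stop
else
  echo "ros2 not found: run this inside the Isaac ROS container"
fi
```

If `tf2_echo` keeps printing lookup errors instead of transforms, the odom frame really is missing and vslam is not tracking.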
Hi @alexmillane, thanks for your reply. I ran it again with nvblox_vslam_realsense.launch.py. My camera is outputting /camera/color/image_raw, but nothing is published on /camera/realsense_splitter_node/output/infra_1, which is the topic vslam subscribes to. /camera/depth/image_rect_raw has no output either, yet in realsense-viewer the camera works properly. rqt_graph shows that vslam_node and nvblox_node are up and the topic subscriptions are correct.

Here is my ros2 node list. The two nodes are running, but presumably they have no image input this time, so they don't work. Regarding your question about whether vslam built successfully: I tried running only the realsense node from nvblox_vslam_realsense.launch.py by keeping just the realsense-related lines, but there was no output in either topic echo or rviz2.

In conclusion, I think the splitter node doesn't work in my case, since none of the topics coming out of the splitter (including depth) produce output. I'll try removing the splitter first and see if that works. Thanks for your reply and help; if you have any ideas about this problem, please feel free to tell me.

------------------------ updates ---------------------------------------

So maybe there is an issue in the splitter node, or I didn't configure it correctly. Also, the run doesn't seem to output an ESDF map. May I ask how to output the ESDF map? I want to use it for path planning and navigating my robot. Any advice would be much appreciated. I also want to refer to the Isaac simulation setup; is that a valid approach? Anyway, thanks for your reply and assistance.
Great that you got it working. To me, it appears that something is wrong with the projector toggling. One potential issue with removing the splitter is that, if the projector is on, the projector pattern will cause VSLAM tracking performance to degrade. Could you have a look at the IR images and see whether the projector pattern is visible? It appears as a field of spots. One thing you could also try is adding
to the VSLAM parameters in the launchfile. This should have been in there; it's a bug that it's not. Thank you for finding it, we will correct this shortly. Regarding the ESDF map: the 2D ESDF slice should already be published. Try to visualize it in RViz and see if you can. We're using the 2D slice here at NVIDIA to navigate ground robots.
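A sketch of how to find the published slice without guessing its exact name (the topic name differs between nvblox versions, so it is grepped for rather than hard-coded; assumes a sourced ROS 2 environment):

```shell
# Sketch: locate the published 2D ESDF slice topic.
if command -v ros2 >/dev/null 2>&1; then
  ros2 topic list | grep -i -E 'esdf|slice'   # find the slice topic
  # then check its rate, e.g.: ros2 topic hz <slice_topic>
  # and add it as a display in RViz to visualize it
else
  echo "ros2 not found: run inside the Isaac ROS container"
fi
```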
Hi @alexmillane, in order to get the splitter to work, I tried launching just the realsense node and the splitter in a container and looking at the output:

admin@ubuntu:/workspaces/isaac_ros-dev$ ros2 launch nvblox_examples_bringup realsense_2.launch.py

I see some errors there. Looking into the splitter code, it seems the splitter tries to tell the realsense node to turn something off (presumably the emitter) but reports that it cannot, so there is no output from the splitter? Is there a way to fix this, or another way to stop emitting the IR pattern? Again, thanks for your reply.
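One way to see whether the camera node accepts emitter requests at all is to query and toggle the emitter from the command line. This is a sketch: the node name `/camera/camera` and the parameter `depth_module.emitter_enabled` (0 = off, 1 = on) are assumptions based on realsense-ros (ros2-beta) defaults and may differ in your launch configuration.

```shell
# Sketch: query and toggle the IR emitter directly on the realsense node.
# Node and parameter names are assumptions -- check with `ros2 param list`.
if command -v ros2 >/dev/null 2>&1; then
  ros2 param get /camera/camera depth_module.emitter_enabled
  ros2 param set /camera/camera depth_module.emitter_enabled 0
else
  echo "ros2 not found: run inside the Isaac ROS container"
fi
```

If the `param set` itself fails, the problem is between the driver and the device, not in the splitter.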
Just to double-check: the infrared topics before the splitter are there, correct? I'm having the same issue as you on one of my computers and have made some progress in solving it. Could you post the results of the following three commands?
@alexmillane Yes, the infrared topics before the splitter are there; the upper one is the camera's infrared topic and the one below is the splitter's. As for the commands (I assume you mean running them inside the container): dkms first printed "bash: dkms: command not found", so I apt-installed it; after that it produced no output, and neither did the third command. For your convenience, I also ran them outside the container; dkms was likewise not installed and gave no output after apt-get install, but the third command did produce output there. Hope this information helps, thank you.
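The exact three commands were not preserved above, but the kind of check being discussed (whether a patched kernel module for the RealSense is present) can be sketched like this; it is safe to run anywhere and prints fallback messages when the tools are missing:

```shell
# Sketch: report the running kernel and whether a librealsense dkms module
# or uvcvideo kernel module information is available.
uname -r -m
if command -v dkms >/dev/null 2>&1; then
  dkms status | grep -i realsense || echo "no realsense dkms module registered"
else
  echo "dkms not installed"
fi
modinfo uvcvideo 2>/dev/null | grep -i '^version' || echo "uvcvideo module info unavailable"
```

An empty `dkms status`, as reported here, means no librealsense kernel module is registered at all.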
Hi @chivas1000, I think the issue is the missing kernel patch outside the container that is needed to run the RealSense. Another thing to try is to build the realsense drivers with
For newer kernels we found we needed a different workaround, detailed here: IntelRealSense/librealsense#10439
Thank you @helenol, but it seems there are no available realsense binary releases for JetPack 5.0.2, i.e. kernel 5.10.104-tegra, or for Ubuntu 20.04 / ROS Humble. I'm trying to build it from source to see if that solves the issue.
Hi @helenol, I managed to build and install realsense outside the container with the following steps:
But after installing it, opening the container, and running colcon build for nvblox again, the issue remains. I suppose your idea that a kernel patch needs to be installed is right, and the USB backend doesn't work, since perhaps only the kernel can control the depth camera's advanced features. But the kernel patches you pointed to are for kernels 5.13/5.15 on amd64, and JetPack 5.0.2 is kernel 5.10.104 on arm64, so I don't know whether they are supported. Should I build the kernel patch myself or just wait for a release? Any advice would be greatly welcome. Thanks for your help.
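For what it's worth, the choice between the two routes can be sketched by architecture. The `FORCE_RSUSB_BACKEND` CMake flag name is taken from the librealsense build documentation (it builds the driver against libusb so that no kernel patch is needed); whether that backend exposes all the depth camera's advanced features is exactly the open question in this thread.

```shell
# Sketch: pick a librealsense install route based on CPU architecture.
# x86_64 can use the dkms/kernel-patch route; Jetson (aarch64) typically
# builds from source with the RSUSB (libusb) backend instead.
arch="$(uname -m)"
if [ "$arch" = "x86_64" ]; then
  echo "x86_64: the dkms/kernel-patch route may apply"
else
  echo "$arch: build librealsense from source with cmake .. -DFORCE_RSUSB_BACKEND=true"
fi
```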
Hi @chivas1000. Somehow I missed your message about being on the jetson XD. I was giving advice assuming x86. Could you send me an email at [email protected]? |
Hi @alexmillane, thank you for the troubleshooting; here is the update. Since right now there is no installable dkms for arm64 for JetPack 5.0.2 and its kernel, I built realsense-ros from source following https://github.com/IntelRealSense/realsense-ros/tree/ros2-beta, noting that the difference is at STEP 2, using the apt binary install:
After building, I was able to run the splitter (the upper two images and the lower-right one are splitter output; the lower-left is the original IR camera), so here are some tips for anyone who runs into the same mistakes as me. Thank you again for the support.
Hmmm, yeah, good point; the dkms-based solution suggested by my troubleshooting is x86-only. Obviously, for us at NVIDIA, switching from a Jetson to a NUC is not a great solution... I'm running on the Jetson quite frequently, so I'm trying to understand what's different about your system. I'm going to record a video of things running that might help you... I'll get back to you later this week.
Just chiming in, as I have almost exactly the same setup, just with a Xavier NX instead, and am experiencing the same issues. @chivas1000, can you give a slightly more detailed description of how you fixed the issue? Also, did you get rqt_graph working inside the container? Thanks :D
Hi @ripdk12 , |
I did this already, and it fixed the odom frame problem, but I'm still experiencing issues, and switching to a NUC is not an option in my case. I suppose I'll open an issue myself with more details, but thanks for the snappy response!
Hi
I'm implementing nvblox and VSLAM on my robot for a navigation application.
The installation went fine and I can run the nvblox bag example and the VSLAM realsense example,
but I get errors when running the combined nvblox + VSLAM realsense tutorial:
https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/main/docs/tutorial-nvblox-vslam-realsense.md
Here are the error logs:
[nvblox_node-3] Warning: Invalid frame ID "odom" passed to canTransform argument target_frame - frame does not exist
[nvblox_node-3] at line 93 in /opt/ros/humble/src/geometry2/tf2/src/buffer_core.cpp
[rviz2-5] [INFO] [1664879434.204059913] [rclcpp]: signal_handler(signum=2)
[rviz2-5] Warning: Invalid frame ID "odom" passed to canTransform argument target_frame - frame does not exist
[rviz2-5] at line 93 in /opt/ros/humble/src/geometry2/tf2/src/buffer_core.cpp
[component_container-1] [INFO] [1664879436.427147138] [rclcpp]: signal_handler(signum=2)
[component_container-1] [INFO] [1664879436.666174595] [rclcpp]: signal_handler(signum=2)
[component_container-1] [INFO] [1664879436.827491359] [rclcpp]: signal_handler(signum=2)
[component_container-1] [INFO] [1664879437.023495947] [rclcpp]: signal_handler(signum=2)
[component_container-1] terminate called after throwing an instance of 'rclcpp::exceptions::InvalidParameterTypeException'
[component_container-1] what(): parameter 'accel_fps' has invalid type: cannot undeclare an statically typed parameter
I can confirm the camera is running, since its topics output images.
Does anyone have ideas on how this could be solved?
BTW, I'm new to Docker; why doesn't rqt_graph work in the container even though I installed it?
Much appreciated if anyone could help.