dora-rerun-py : Python Rerun Node for Dora #965
base: main
Conversation
@haixuanTao please review. I have lots of new updates for Rerun for visualization and debugging.
@phil-opp please review.
Hi! The node implementation seems correct, but I have one doubt: having two versions of the same node, one in Rust and one in Python, may be hard to debug at some point. Do you have any suggestions? Why prefer Python over Rust?
Rerun has lots of examples in Python; we can use them to enhance debugging with Rerun.
It seems fine to me! Just one comment about the build and path procedure.
    - bbox
- id: plot
  path: ./node-hub/dora-rerun-py/dora_rerun_py/main.py
It's better to make it work with pip and symlinked scripts, like:
build: pip install opencv-video-capture
path: opencv-video-capture
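Applied to this PR's plot node, the suggestion would make the dataflow entry look roughly like the sketch below. This is an assumption about the final form, not the merged config; the input names and the `pip install -e` local-install step are placeholders for illustration:

```yaml
# hypothetical dataflow entry following the reviewer's pip + console-script pattern
- id: plot
  # install the local package so its console script lands on PATH
  build: pip install -e ./node-hub/dora-rerun-py
  # reference the installed script rather than a source file path
  path: dora-rerun-py
  inputs:
    image: camera/image   # placeholder upstream outputs
    bbox: detector/bbox
```

The advantage over `path: ./node-hub/dora-rerun-py/dora_rerun_py/main.py` is that the dataflow no longer depends on the repository layout, only on the package being installed.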
Same here.
Why do you have to use a `__main__.py` file?
Thanks for the explanation! It seems acceptable to me!
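For context on the question above: a `__main__.py` makes a package runnable via `python -m dora_rerun_py`, while the console-script registration the PR description mentions lets the same code run as a plain executable. A minimal sketch of the latter, assuming a `main()` entry function in `dora_rerun_py/main.py` (the exact module layout is an assumption):

```toml
# pyproject.toml (fragment)
[project]
name = "dora-rerun-py"

[project.scripts]
# after `pip install .`, a `dora-rerun-py` executable appears on PATH,
# so a dataflow can use `path: dora-rerun-py` instead of a source file path
dora-rerun-py = "dora_rerun_py.main:main"
```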
As mentioned on Discord, I think it's going to be difficult for us to maintain both dora-rerun and dora-rerun-py. I also fear that it might be confusing for some. I think we can link to your repo in the README. What do you think?
You're right: maintaining two parallel codebases is going to get unwieldy, and could definitely confuse users. Linking out to the Python port from the main README sounds like a solid compromise. I'd suggest adding a "Related Projects" (or "Python Implementation") section near the top of the README that says something like: "For a native-Python version of Dora Rerun, check out the dora-rerun-py project." That way anyone looking for Python support will find it immediately, and we keep the core docs focused. Let me know if you'd like me to draft that section or submit a PR!
That sounds great! |
This PR adds a brand-new Python implementation of the Rerun node, packaged as dora-rerun-py, to the Dora ecosystem. With this change you can now write Dora dataflow nodes in pure Python and visualize data in Rerun.
What's included
- dora-rerun-py package: a standalone Python package under node-hub/dora-rerun-py/ that subscribes to Dora events and logs them via the Rerun Python SDK.
- Supports images (with the proper color_model), depth-to-3D point clouds, 2D bounding boxes, text logs, numeric series, and joint states.
- A pyproject.toml with correct project metadata, dependencies (numpy, opencv-python, pyarrow, rerun-sdk, dora), and console-script registration.
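To make the image path above concrete: Dora typically delivers a frame as a flat byte buffer plus metadata, and the node must reshape it before handing it to the Rerun SDK (roughly `rr.log("camera/image", rr.Image(frame))`). The sketch below shows only the reshaping step, dependency-free; the real package uses numpy and pyarrow, and the metadata key names here are assumptions:

```python
def reshape_frame(buffer: bytes, width: int, height: int, channels: int = 3):
    """Turn a flat BGR/RGB byte buffer into a height x width grid of pixel tuples.

    A stand-in for the numpy reshape the real node would do before logging
    the frame to Rerun.
    """
    expected = width * height * channels
    if len(buffer) != expected:
        raise ValueError(f"buffer has {len(buffer)} bytes, expected {expected}")
    # split the flat buffer into per-pixel tuples...
    pixels = [tuple(buffer[i:i + channels]) for i in range(0, expected, channels)]
    # ...then group the pixels into rows
    return [pixels[r * width:(r + 1) * width] for r in range(height)]

# 2x2 test frame, 3 bytes per pixel: pixels are (0,1,2), (3,4,5), (6,7,8), (9,10,11)
frame = reshape_frame(bytes(range(12)), width=2, height=2)
# frame[0][1] is the second pixel of the first row: (3, 4, 5)
```

With numpy this whole function collapses to `np.frombuffer(buffer, np.uint8).reshape(height, width, channels)`, which is presumably what the package does in practice.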
(Demo video attachment: example.video.mp4)
Upcoming Updates
🌐 3D Point Cloud and Trajectory Viewer: Enables visualization of robot sensor positions and movement paths in 3D space, helping users see robot sensor data clearly.
🤖 Robot Arm Pose Viewer: Allows viewing of robot joint positions in 3D using line strips, which is useful for understanding robot arm movements.
🛰️ Sensor Visualization Extensions:
IMU (Inertial Measurement Unit): This tool visualizes real-time gyroscope and accelerometer data as line charts, showing orientation and movement. It can also represent orientation with arrows or a cube in a 3D view.
GPS: Displays the robot's GPS coordinates as dynamic points or lines in 3D, showing paths and updating positions over time as the robot moves.
Ultrasonic Sensor: Shows distances as expanding circles or vertical bars originating from the sensor, helping detect obstacles and map short ranges.
Infrared (IR) Sensor: Displays simple visual indicators for detecting proximity or following lines, seen as signal bars or colored dots.
Time-of-Flight (ToF) Sensor: Provides precise depth information using a colored depth map or grid, showing nearby distances accurately.
Radar: Visualizes fast-moving objects or reflections as vector cones or points on a map, helping simulate how robots perceive their environment.
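The depth-to-point-cloud conversion behind the planned 3D viewers reduces to pinhole back-projection: each depth pixel (u, v) with depth z maps to X = (u - cx) * z / fx, Y = (v - cy) * z / fy, Z = z. A dependency-free sketch; the intrinsics are toy placeholders, and a real node would read them from camera metadata and vectorize with numpy:

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (list of rows, metres) into 3D points
    using the pinhole camera model."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip invalid / missing depth readings
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# 2x2 depth map with one invalid pixel; toy intrinsics fx=fy=1, principal
# point at the image centre (0.5, 0.5)
pts = depth_to_points([[1.0, 2.0], [0.0, 4.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5)
# pts -> [(-0.5, -0.5, 1.0), (1.0, -1.0, 2.0), (2.0, 2.0, 4.0)]
# the node could then log the cloud with rr.log("cloud", rr.Points3D(pts))
```

Note how the zero-depth pixel is dropped rather than back-projected, which keeps the logged cloud free of points collapsed onto the camera origin.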