Rerun
@rerun.io
Rerun is building the multimodal data stack
🕸️ Website https://rerun.io/
⭐ GitHub http://github.com/rerun-io/rerun
👾 Discord http://discord.gg/ZqaWgHZ2p7
Reposted by Rerun
Small but important quality of life update by @rerun.io: Customizable frustum colors. Finally we can distinguish estimates and ground truth in the same 3D view ;)
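For reference, roughly what that looks like from the Python SDK — a minimal sketch with made-up entity paths and intrinsics, and assuming the newly announced color field on the Pinhole archetype:

```python
import rerun as rr

rr.init("frustum_colors", spawn=True)

# Ground truth in green, estimate in red, so both frustums are
# distinguishable in the same 3D view. The `color` argument is
# assumed from the announcement; check your SDK version.
rr.log("world/cam_gt", rr.Transform3D(translation=[0.0, 0.0, 0.0]))
rr.log(
    "world/cam_gt/image",
    rr.Pinhole(focal_length=500.0, width=640, height=480, color=[0, 255, 0]),
)

rr.log("world/cam_est", rr.Transform3D(translation=[0.02, 0.01, 0.0]))
rr.log(
    "world/cam_est/image",
    rr.Pinhole(focal_length=500.0, width=640, height=480, color=[255, 0, 0]),
)
```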
September 29, 2025 at 9:00 AM
Reposted by Rerun
Have a sim vehicle in jolt-rust working with stacked hinges for wheel rotation and steering on two of four wheels, no suspension, visualized in @rerun.io:
September 10, 2025 at 3:57 PM
Reposted by Rerun
✨ Massive Pipeline Refactor → One Framework for Ego + Exo Datasets, Visualized with @rerun.io 🚀
After refactoring, my entire egocentric/exocentric pipeline is now modular. One codebase handles different sensor layouts and outputs a unified, multimodal timeseries file that you can open in Rerun.
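As a sketch of what "one file you can open in Rerun" means in practice — stand-in arrays and hypothetical entity paths, but real SDK calls:

```python
import numpy as np
import rerun as rr

rr.init("ego_exo_pipeline")
rr.save("session.rrd")  # every log call below is recorded into this one file

for frame_idx in range(100):
    # A shared timeline keeps the ego and exo streams aligned.
    rr.set_time_sequence("frame", frame_idx)
    rr.log("ego/cam", rr.Image(np.zeros((48, 64, 3), dtype=np.uint8)))
    rr.log("exo/cam0", rr.Image(np.zeros((48, 64, 3), dtype=np.uint8)))

# Afterwards, `rerun session.rrd` opens the whole recording in the viewer.
```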
June 26, 2025 at 1:40 PM
Reposted by Rerun
Trying to wrap my head around fwd/bwd kinematics for imitation learning, so I built a fully-differentiable kinematic hand skeleton in JAX and visualized it with @rerun.io's new callback system in a Jupyter Notebook. This shows each joint angle and how it impacts the kinematic skeleton.
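The core idea, reduced to a toy planar chain rather than the actual hand model (link lengths and angles made up): forward kinematics is a composition of rotations, so jax.jacobian gives each joint's effect on the fingertip for free.

```python
import jax
import jax.numpy as jnp

def fk(angles, link_lengths):
    """Planar forward kinematics: joint angles -> fingertip position."""
    pos, heading = jnp.zeros(2), 0.0
    for theta, length in zip(angles, link_lengths):
        heading = heading + theta  # each joint rotates relative to its parent
        pos = pos + length * jnp.array([jnp.cos(heading), jnp.sin(heading)])
    return pos

angles = jnp.array([0.3, 0.2, 0.1])      # three joints of one finger
lengths = jnp.array([0.05, 0.03, 0.02])  # link lengths in meters

tip = fk(angles, lengths)
# d(tip)/d(angles): how each joint angle moves the fingertip -- the "bwd" part.
jacobian = jax.jacobian(fk)(angles, lengths)
```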
May 2, 2025 at 8:59 PM
Reposted by Rerun
Exciting news for egui: there is a draft branch for switching the text handling to Parley, which will bring support for color emojis, right-to-left text, access to system fonts, and much more! github.com/emilk/egui/p...
[WIP] Render text with Parley by valadaptive · Pull Request #5784 · emilk/egui
April 24, 2025 at 12:39 PM
Reposted by Rerun
@rerun.io v0.23 is finally out! 🎉 I’ve extended my @gradio-hf.bsky.social annotation pipeline to support multiview videos using the callback system introduced in 0.23.
April 24, 2025 at 2:20 PM
Reposted by Rerun
I've integrated video-based depth estimation into my robot-training pipeline, visualized with @rerun.io, to make data collection as accessible as possible without requiring specialized hardware.
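On the Rerun side, logging the model output as a depth image is enough for the viewer to back-project it — a minimal sketch with stand-in data and made-up intrinsics:

```python
import numpy as np
import rerun as rr

rr.init("video_depth", spawn=True)

# Stand-in for one frame of the depth model's output, in meters.
depth_m = np.full((480, 640), 2.0, dtype=np.float32)

# The Pinhole on the parent entity lets the viewer back-project
# the depth image into a 3D point cloud.
rr.log("world/camera", rr.Pinhole(focal_length=500.0, width=640, height=480))
rr.log("world/camera/depth", rr.DepthImage(depth_m, meter=1.0))
```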
April 17, 2025 at 8:08 PM
Reposted by Rerun
I extended my previous @rerun.io and @gradio-hf.bsky.social annotation pipeline to multiple views. You can see how powerful this is when using Meta's Segment Anything and multi-view geometry: by annotating only 2 views, I can triangulate into the other 6 views and get masks extremely quickly!
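The geometry behind that trick, sketched with OpenCV (projection matrices and click coordinates are made up): two annotated views pin down the 3D point, and reprojecting it into the remaining views gives prompt points for SAM.

```python
import cv2
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# Projection matrices of the two annotated views: K @ [R | t].
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

pts1 = np.array([[330.0], [250.0]])  # clicked pixel in view 1 (2xN)
pts2 = np.array([[305.0], [250.0]])  # same point clicked in view 2

points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous
point_3d = (points_h[:3] / points_h[3]).T             # Nx3 world point
# Projecting point_3d through the other six cameras' matrices yields
# per-view prompt points for Segment Anything.
```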
April 10, 2025 at 5:00 PM
Reposted by Rerun
Fun use of Rerun: www.youtube.com/watch?v=pxRG...
Re: Can You Fool A Self Driving Car? Testing variations reveal what went wrong (YouTube video by Parallel Domain)
April 8, 2025 at 3:59 PM
Reposted by Rerun
Here’s a sneak peek using @rerun.io and @gradio-hf.bsky.social for data annotation. It uses Video Depth Anything and Segment Anything 2 under the hood to generate segmentation masks and depth maps/point clouds. More to share next week.
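The depth-map-to-point-cloud step is plain pinhole geometry; a self-contained sketch with stand-in depth and made-up intrinsics, logged to Rerun:

```python
import numpy as np
import rerun as rr

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into camera-space 3D points."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

rr.init("annotation_preview", spawn=True)
depth = np.full((480, 640), 2.0, dtype=np.float32)  # stand-in model output
points = depth_to_points(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
rr.log("world/points", rr.Points3D(points))
```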
April 1, 2025 at 7:13 PM
Reposted by Rerun
Using @rerun.io, I established a baseline from the HO-Cap dataset and conducted a qualitative comparison among the ground-truth calibrated cameras, Dust3r, and VGGT, all within Rerun. The improvements are evident in both the camera parameters and the multi-view depth map/point cloud.
March 25, 2025 at 5:57 PM
Reposted by Rerun
Rerun keeps growing! I’m so happy we started this thing, and that we’re building in the open, in Rust 🤘
1/ We just raised $17M to build the multimodal data stack for Physical AI! 🚀
Lead: pointnine.com
With: costanoa.vc, Sunflower Capital, @seedcamp.com
Angels including: @rauchg.blue, Eric Jang, Oliver Cameron, @wesmckinney.com, Nicolas Dessaigne, Arnav Bimbhet
Thesis: rerun.io/blog/physica...
March 20, 2025 at 6:21 PM
Reposted by Rerun
More progress towards building a straightforward method to collect first-person (ego) and third-person (exo) data for robotic training in @rerun.io. I've been using the HO-Cap dataset to establish a baseline, and here are some updates I've made (code at the end).
March 18, 2025 at 3:32 PM
Reposted by Rerun
Finally finished porting MASt3R-SLAM to @rerun.io and adding a @gradio-hf.bsky.social interface. Really cool to see it running on any video I throw at it. I've included the code at the end.
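The Gradio wrapper is only a few lines; a sketch with a placeholder run_slam function standing in for the actual MASt3R-SLAM call:

```python
import gradio as gr

def run_slam(video_path: str) -> str:
    # Placeholder: the real function runs MASt3R-SLAM on the video
    # while streaming poses and points to the Rerun viewer.
    return video_path

demo = gr.Interface(
    fn=run_slam,
    inputs=gr.Video(label="Input video"),
    outputs=gr.Video(label="Tracked result"),
    title="MASt3R-SLAM on any video",
)
demo.launch()
```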
March 7, 2025 at 9:52 PM
Reposted by Rerun
I'm working towards an easy method to collect a combined third-person and first-person pose dataset, starting from Meta's Assembly101, with near real-time performance via @rerun.io visualization. The end goal is robot imitation learning with Hugging Face LeRobot.
February 24, 2025 at 4:02 PM
Reposted by Rerun
Following up on my Prompt Depth Anything post, I'm starting a bit of a miniseries where I go through the LeRobot tutorials to better understand how I can get a real robot to work on my custom dataset. Using @rerun.io to visualize.
code: github.com/rerun-io/pi0...
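A sketch of the visualization loop — the dataset name and keys follow the PushT tutorial, and the import path has moved between LeRobot releases, so treat it as indicative:

```python
import rerun as rr
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset  # path varies by release

rr.init("lerobot_episode", spawn=True)
dataset = LeRobotDataset("lerobot/pusht")  # one of the tutorial datasets

for i in range(100):
    frame = dataset[i]
    rr.set_time_sequence("frame", i)
    # CHW float tensor -> HWC array for image logging.
    rr.log("camera", rr.Image(frame["observation.image"].permute(1, 2, 0).numpy()))
    for j, value in enumerate(frame["action"]):
        rr.log(f"action/{j}", rr.Scalar(float(value)))
```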
February 11, 2025 at 5:09 PM
Rerun 0.22 is out! 🔎🟡🔜🔵
The release brings long-requested entity filtering for finding data faster in the Viewer, significantly simplified APIs for partial & columnar updates, and many other enhancements.
vimeo.com/1054160833
Rerun 0.22: entity filtering (vimeo.com)
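On the API side, a sketch of the simplified partial updates — per the release notes, archetypes gained a from_fields constructor so you can update single components without re-sending everything (the exact surface may differ slightly between releases):

```python
import numpy as np
import rerun as rr

rr.init("partial_updates", spawn=True)

positions = np.random.randn(100, 3).astype(np.float32)
rr.log("points", rr.Points3D(positions, radii=0.02))

# Later: recolor the same points without re-sending the positions.
colors = np.tile(np.array([255, 0, 0], dtype=np.uint8), (100, 1))
rr.log("points", rr.Points3D.from_fields(colors=colors))
```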
February 6, 2025 at 7:34 PM
egui's improved UI rendering & more is of course also going to show up in the next Rerun release! :)
Introducing egui 0.30!
This adds `egui::Scene`: a pannable, zoomable container for other UI elements.
This release also makes frames and corner radius more in line with how CSS and Figma work.
We’ve also improved the crispness of the rendering, and a lot more!
February 4, 2025 at 4:49 PM
Reposted by Rerun
Recently, I've been playing with my iPhone's ToF sensor, but the problem has always been the abysmal resolution (256x192). The team behind Depth Anything released Prompt Depth Anything, which fixes this. Using @rerun.io to visualize. Links at the end of the thread.
February 3, 2025 at 1:18 PM
Entity filtering coming up in the UI 👀
January 21, 2025 at 5:20 PM
Reposted by Rerun
This year, while writing my master's thesis, I found this great blog post from @rerun.io that shows the structure of #rosbags. I recommend it to anyone who wonders how rosbags work.
It helps understand why not all rosbags can be easily recovered when your robot's battery dies 🪫.
rerun.io/blog/rosbag
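For anyone who wants to poke at a bag alongside the post, the ROS 1 Python API makes the structure tangible — note that reading relies on the index written when the bag closes cleanly, which is exactly what a dead battery interrupts (file name and topic here are made up):

```python
import rosbag

# Reading depends on the bag's index, which is only finalized on a clean
# close -- a bag cut off mid-write is what `rosbag reindex` tries to repair.
with rosbag.Bag("robot_run.bag") as bag:
    print(bag.get_type_and_topic_info())
    for topic, msg, t in bag.read_messages(topics=["/imu"]):
        print(t.to_sec(), topic, type(msg).__name__)
```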
From the Evolution of Rosbag to the Future of AI Tooling
Thirteen years ago, Willow Garage released ROS (the Robot Operating System) and established one of the standard productivity tools for the entire robotics industry.
December 6, 2024 at 6:40 PM