Rerun
@rerun.io
140 followers 13 following 33 posts
Rerun is building the multimodal data stack 🕸️ Website https://rerun.io/ ⭐ GitHub http://github.com/rerun-io/rerun 👾 Discord http://discord.gg/ZqaWgHZ2p7
Reposted by Rerun
ericbrachmann.bsky.social
Small but important quality of life update by @rerun.io: Customizable frustum colors. Finally we can distinguish estimates and ground truth in the same 3D view ;)
Reposted by Rerun
lucasw0.bsky.social
Have a sim vehicle in jolt-rust working with stacked hinges for wheel rotation and steering on two of four wheels, no suspension, visualized in @rerun.io:
Reposted by Rerun
pablovelagomez.bsky.social
✨ Massive Pipeline Refactor → One Framework for Ego + Exo Datasets, Visualized with @rerun.io 🚀

After a refactor, my entire egocentric/exocentric pipeline is now modular. One codebase handles different sensor layouts and outputs a unified, multimodal timeseries file that you can open in Rerun.
Reposted by Rerun
pablovelagomez.bsky.social
MVP of Multiview Video → Camera parameters + 3D keypoints. Visualized with @rerun.io
Reposted by Rerun
pablovelagomez.bsky.social
Trying to wrap my head around fwd/bwd kinematics for imitation learning, so I built a fully-differentiable kinematic hand skeleton in JAX and visualized it with @rerun.io's new callback system in a Jupyter Notebook. This shows each joint angle and how it impacts the kinematic skeleton.
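The core idea — a kinematic chain whose fingertip position is differentiable with respect to joint angles — can be sketched in a few lines of JAX. This is a minimal, hypothetical 2-joint planar chain, not the post's actual hand skeleton; the link lengths and function names are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

LINK_LENGTHS = jnp.array([1.0, 0.8])  # assumed link lengths, not from the post

def forward_kinematics(angles):
    """Return the 2D fingertip position for the given joint angles."""
    total = jnp.cumsum(angles)                    # absolute angle of each link
    x = jnp.sum(LINK_LENGTHS * jnp.cos(total))
    y = jnp.sum(LINK_LENGTHS * jnp.sin(total))
    return jnp.array([x, y])

angles = jnp.array([0.0, 0.0])
tip = forward_kinematics(angles)                  # straight chain along +x
jac = jax.jacobian(forward_kinematics)(angles)    # how each joint moves the tip
```

Because the whole chain is one pure JAX function, `jax.jacobian` (or `jax.grad` on a scalar loss) gives exact joint sensitivities — the same property that makes the skeleton usable inside an imitation-learning objective.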
Reposted by Rerun
pablovelagomez.bsky.social
@rerun.io v0.23 is finally out! 🎉 I’ve extended my @gradio-hf.bsky.social annotation pipeline to support multiview videos using the callback system introduced in 0.23.
Reposted by Rerun
pablovelagomez.bsky.social
I’ve integrated video-based depth estimation into my robot-training pipeline, visualized with @rerun.io, to make data collection as accessible as possible without requiring specialized hardware.
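Turning an estimated depth map into the point cloud you see in such a visualization is a standard pinhole back-projection. A minimal sketch, assuming toy intrinsics (`fx`, `fy`, `cx`, `cy` are made-up values, and the real pipeline's depth model is not shown):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map into an (N, 3) point cloud (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # pixel column -> metric X at depth z
    y = (v - cy) * z / fy   # pixel row    -> metric Y at depth z
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.ones((4, 4))     # toy 4x4 depth map, 1 m everywhere
pts = depth_to_points(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
```

The resulting `(N, 3)` array is exactly the shape a viewer like Rerun expects for a point-cloud primitive.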
Reposted by Rerun
pablovelagomez.bsky.social
I extended my previous @rerun.io and @gradio-hf.bsky.social annotation pipeline for multiple views. You can see how powerful this is when using Meta's Segment Anything and multi-view geometry. Only annotating 2 views, I can triangulate the other 6 views and get masks extremely quickly!
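The trick behind propagating annotations from 2 views to the other 6 is classic multi-view geometry: triangulate the annotated point, then reproject it into the unlabeled cameras. A minimal DLT sketch with assumed toy camera matrices (the actual pipeline's calibration and SAM integration are not shown):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """DLT triangulation of one point from two views with projections P1, P2."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null-space vector = homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy setup: three cameras looking down +z, offset along x and y.
K = np.array([[100.0, 0, 50], [0, 100.0, 50], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
P3 = K @ np.hstack([np.eye(3), np.array([[0], [-1.0], [0]])])

X_true = np.array([0.2, -0.1, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
uv3 = project(P3, X_hat)              # predicted annotation in the third view
```

With noise-free observations the DLT recovers the point exactly; reprojected points like `uv3` can then seed masks (e.g. as SAM prompts) in the views that were never annotated by hand.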
Reposted by Rerun
pablovelagomez.bsky.social
Here’s a sneak peek using @rerun.io and @gradio-hf.bsky.social for data annotation. It uses Video Depth Anything and Segment Anything 2 under the hood to generate segmentation masks and depth maps/point clouds. More to share next week.
Reposted by Rerun
pablovelagomez.bsky.social
Using @rerun.io, I established a baseline from the HO-Cap dataset and conducted a qualitative comparison among the ground-truth calibrated cameras, DUSt3R, and VGGT—all within Rerun. The improvements are evident in both the camera parameters and the multi-view depth map/point cloud.
Reposted by Rerun
ernerfeldt.bsky.social
Rerun keeps growing! I’m so happy we started this thing, and that we’re building in the open, in Rust 🤘
rerun.io
Rerun @rerun.io · Mar 20
1/ We just raised $17M to build the multimodal data stack for Physical AI! 🚀

Lead: pointnine.com
With: costanoa.vc, Sunflower Capital, @seedcamp.com
Angels including: @rauchg.blue, Eric Jang, Oliver Cameron, @wesmckinney.com, Nicolas Dessaigne, Arnav Bimbhet

Thesis: rerun.io/blog/physica...
rerun.io
Rerun @rerun.io · Mar 20
10/ It comes with visualization built-in for fast data observability over both online and offline systems. The query engine enables you to combine vector search and full dataframe queries, over both raw logs and structured datasets, to support robotics-aware data science and dataset curation.
rerun.io
Rerun @rerun.io · Mar 20
9/ We are now building a new database and cloud data platform for Physical AI. The database is built around the same data model as the open-source project.
rerun.io
Rerun @rerun.io · Mar 20
8/ This data model is core to our popular open-source framework for logging and visualizing multimodal data, which companies like Meta, Google, @hf.co's LeRobot, and Unitree Robotics have adopted in their own open-source projects.
rerun.io
Rerun @rerun.io · Mar 20
7/ We spent the first 2 years of the company iterating on a data model for Physical AI that works for both messy online logs and efficient columnar storage of offline pipeline data.
rerun.io
Rerun @rerun.io · Mar 20
6/ Rerun is building a unified, multimodal data stack—one data model and platform supporting both online and offline workflows seamlessly, retaining semantic richness throughout.
rerun.io
Rerun @rerun.io · Mar 20
5/ Until now, no single tool has spanned the entire stack from online data capture to offline dataset management.
rerun.io
Rerun @rerun.io · Mar 20
4/ Hand-written online code is being replaced with ML, requiring advanced offline pipelines to collect, manage, label, and curate massive datasets.
rerun.io
Rerun @rerun.io · Mar 20
3/ Physical AI systems rely on two key components:

Online systems: Run live on robots, processing data and interacting with the real world in real-time;
Offline systems: Run in data centers, analyzing data and improving online systems through training and simulations.
rerun.io
Rerun @rerun.io · Mar 20
2/ Physical AI—robotics, drones, autonomous vehicles—is rapidly evolving, powered by advances in machine learning. But today's data stacks aren't built for this new era.
Reposted by Rerun
pablovelagomez.bsky.social
More progress towards building a straightforward method to collect first-person (ego) and third-person (exo) data for robotic training in @rerun.io. I’ve been using the HO-Cap dataset to establish a baseline, and here are some updates I’ve made (code at the end)