Evan Ackerman
@evanackerman.bsky.social
1.3K followers
34 following
96 posts
Senior editor at IEEE Spectrum. I hug robots.
spectrum.ieee.org
Posts
Evan Ackerman
@evanackerman.bsky.social
· Sep 11
Reality Is Ruining the Humanoid Robot Hype
Over the next several years, humanoid robots will change the nature of work. Or at least, that’s what humanoid robotics companies have been consistently promising, enabling them to raise hundreds of millions of dollars at valuations that run into the billions.
Delivering on these promises will require a lot of robots. Agility Robotics expects to ship “hundreds” of its Digit robots in 2025 and has a factory in Oregon capable of building over 10,000 robots per year. Tesla is planning to produce 5,000 of its Optimus robots in 2025, and at least 50,000 in 2026. Figure believes “there is a path to 100,000 robots” by 2029. And these are just three of the largest companies in an increasingly crowded space.
Amplifying this message are many financial analysts: Bank of America Global Research, for example, predicts that global humanoid robot shipments will reach 18,000 units in 2025. And Morgan Stanley Research estimates that by 2050 there could be over 1 billion humanoid robots, part of a US $5 trillion market.
But as of now, the market for humanoid robots is almost entirely hypothetical. Even the most successful companies in this space have deployed only a small handful of robots in carefully controlled pilot projects. And future projections seem to be based on an extraordinarily broad interpretation of jobs that a capable, efficient, and safe humanoid robot—which does not currently exist—might conceivably be able to do. Can the current reality connect with the promised scale?
What Will It Take to Scale Humanoid Robots?
Physically building tens of thousands, or even hundreds of thousands, of humanoid robots is certainly possible in the near term. In 2023, on the order of 500,000 industrial robots were installed worldwide. Under the basic assumption that a humanoid robot is approximately equivalent to four industrial arms in terms of components, existing supply chains should be able to support even the most optimistic near-term projections for humanoid manufacturing.
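For a rough sense of that supply-chain argument, here is a back-of-the-envelope sketch (the four-arms-per-humanoid equivalence and the production figures come from this article; the comparison itself is only illustrative):

```python
# Back-of-the-envelope check on humanoid production capacity.
# Assumption from the article: one humanoid ~ four industrial arms' worth of components.
industrial_robots_installed_2023 = 500_000
arms_per_humanoid = 4

humanoid_equivalent_capacity = industrial_robots_installed_2023 // arms_per_humanoid
print(f"Component supply could notionally cover ~{humanoid_equivalent_capacity:,} humanoids per year")

# Near-term figures cited in the article, for comparison.
cited_figures = {
    "Agility factory capacity (per year)": 10_000,
    "Tesla Optimus target (2026)": 50_000,
    "Figure 'path to' (by 2029)": 100_000,
}
for label, count in cited_figures.items():
    print(f"{label:38s} {count:>8,}")
```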
But simply building the robots is arguably the easiest part of scaling humanoids, says Melonee Wise, who served as chief product officer at Agility Robotics until this month. “The bigger problem is demand—I don’t think anyone has found an application for humanoids that would require several thousand robots per facility.” Large deployments, Wise explains, are the most realistic way for a robotics company to scale its business, since onboarding any new client can take weeks or months. An alternative approach to deploying several thousand robots to do a single job is to deploy several hundred robots that can each do 10 jobs, which seems to be what most of the humanoid industry is betting on in the medium to long term.
While there’s a belief across much of the humanoid robotics industry that rapid progress in AI must somehow translate into rapid progress toward multipurpose robots, it’s not clear how, when, or if that will happen. “I think what a lot of people are hoping for is they’re going to AI their way out of this,” says Wise. “But the reality of the situation is that currently AI is not robust enough to meet the requirements of the market.”
Bringing Humanoid Robots to Market
Market requirements for humanoid robots include a slew of extremely dull, extremely critical things like battery life, reliability, and safety. Of these, battery life is the most straightforward—for a robot to usefully do a job, it can’t spend most of its time charging. The next version of Agility’s Digit robot, which can handle payloads of up to 16 kilograms, includes a bulky “backpack” containing a battery with a charging ratio of 10 to 1: The robot can run for 90 minutes, and fully recharge in 9 minutes. Slimmer humanoid robots from other companies must necessarily be making compromises to maintain their svelte form factors.
In operation, Digit will probably spend a few minutes charging after running for 30 minutes. That’s because 60 minutes of Digit’s runtime is essentially a reserve in case something happens in its workspace that requires it to temporarily pause, a not-infrequent occurrence in the logistics and manufacturing environments that Agility is targeting. Without a 60-minute reserve, the robot would be much more likely to run out of power mid-task and need to be manually recharged. Consider what that might look like with even a modest deployment of several hundred robots weighing over a hundred kilograms each. “No one wants to deal with that,” comments Wise.
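To make the charging arithmetic concrete, here is a minimal sketch of the duty cycle described above (the 90-minute runtime, 10:1 charging ratio, 60-minute reserve, and 30-minute work interval are from the article; treating charge time as proportional to runtime consumed is a simplifying assumption):

```python
# Digit battery duty cycle, per the figures quoted in the article.
runtime_min = 90          # full battery runtime, minutes
charge_ratio = 10         # 10:1 runtime-to-charge ratio (90 min run, 9 min charge)
reserve_min = 60          # runtime held back as a contingency reserve
work_interval_min = 30    # runtime actually spent working between top-ups

# Assumption: topping up 30 minutes of consumed runtime takes 30 / 10 = 3 minutes.
top_up_min = work_interval_min / charge_ratio
utilization = work_interval_min / (work_interval_min + top_up_min)
print(f"Top-up after each 30-minute stint: {top_up_min:.0f} min")
print(f"Share of the duty cycle spent working: {utilization:.1%}")   # ~91%
```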
Potential customers for humanoid robots are very concerned with downtime. Over the course of a month, a factory operating at 99 percent reliability will see approximately 5 hours of downtime. Wise says that any downtime that stops something like a production line can cost tens of thousands of dollars per minute, which is why many industrial customers expect a couple more 9s of reliability: 99.99 percent. Wise says that Agility has demonstrated this level of reliability in some specific applications, but not in the context of multipurpose or general-purpose functionality.
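A quick calculation shows why those extra nines matter so much. The monthly operating-hours figure below is an assumption (roughly a two-shift schedule), chosen to match the article’s ~5-hour example:

```python
# Downtime implied by a given reliability level over one month of operation.
operating_hours_per_month = 500   # assumption: roughly a two-shift schedule

def downtime_minutes(reliability: float) -> float:
    """Minutes of downtime per month at the given uptime fraction."""
    return operating_hours_per_month * (1.0 - reliability) * 60

for r in (0.99, 0.999, 0.9999):
    print(f"{r:.2%} reliable -> {downtime_minutes(r):6.1f} minutes of downtime per month")
# 99.00% -> ~300 minutes (~5 hours); 99.99% -> ~3 minutes
```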
Humanoid Robot Safety
A humanoid robot in an industrial environment must meet general safety requirements for industrial machines. In the past, robotic systems like autonomous vehicles and drones have benefited from immature regulatory environments to scale quickly. But Wise says that approach can’t work for humanoids, because the industry is already heavily regulated—the robot is simply considered another piece of machinery.
There are also more specific safety standards currently under development for humanoid robots, explains Matt Powers, associate director of autonomy R&D at Boston Dynamics. He notes that his company is helping develop an International Organization for Standardization (ISO) safety standard for dynamically balancing legged robots. “We’re very happy that the top players in the field, like Agility and Figure, are joining us in developing a way to explain why we believe that the systems that we’re deploying are safe,” Powers says.
These standards are necessary because the traditional safety approach of cutting power may not be a good option for a dynamically balancing system. Doing so will cause a humanoid robot to fall over, potentially making the situation even worse. There is no simple solution to this problem, and the initial approach that Boston Dynamics expects to take with its Atlas robot is to keep the robot out of situations where simply powering it off might not be the best option. “We’re going to start with relatively low-risk deployments, and then expand as we build confidence in our safety systems,” Powers says. “I think a methodical approach is really going to be the winner here.”
In practice, low risk means keeping humanoid robots away from people. But humanoids that are restricted in what jobs they can safely do and where they can safely move are going to have more trouble finding tasks that provide value.
Are Humanoids the Answer?
The issues of demand, battery life, reliability, and safety all need to be solved before humanoid robots can scale. But a more fundamental question to ask is whether a bipedal robot is actually worth the trouble.
Dynamic balancing with legs would theoretically enable these robots to navigate complex environments like a human. Yet demo videos mostly show these humanoid robots either standing nearly still or repetitively moving short distances over flat floors. The promise is that what we’re seeing now is just the first step toward humanlike mobility. But in the short to medium term, there are much more reliable, efficient, and cost-effective platforms that can take over in these situations: robots with arms, but with wheels instead of legs.
Safe and reliable humanoid robots have the potential to revolutionize the labor market at some point in the future. But potential is just that, and despite the humanoid enthusiasm, we have to be realistic about what it will take to turn potential into reality.
This article appears in the October 2025 print issue as “Why Humanoid Robots Aren’t Scaling.”
Evan Ackerman
@evanackerman.bsky.social
· Aug 29
Video Friday: Spot’s Got Talent
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
CLAWAR 2025: 5–7 September 2025, SHENZHEN, CHINA
ACTUATE 2025: 23–24 September 2025, SAN FRANCISCO
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA
Enjoy today’s videos!
Boston Dynamics is back and their dancing robot dogs are bigger, better, and bolder than ever! Watch as they bring a “dead” robot to life and unleash a never-before-seen synchronized dance routine to “Good Vibrations.”
And much more interestingly, here’s a discussion of how they made it work:
[ Boston Dynamics ]
I don’t especially care whether a robot falls over. I care whether it gets itself back up again.
[ LimX Dynamics ]
The robot autonomously connects multiple wires to the environment using small flying anchors—drones equipped with anchoring mechanisms at the wire tips. Guided by an onboard RGB-D camera for control and environmental recognition, the system enables wire attachment in unprepared environments and supports simultaneous multi-wire connections, expanding the operational range of wire-driven robots.
[ JSK Robotics Laboratory ] at [ University of Tokyo ]
Thanks, Shintaro!
For a robot that barely has a face, this is some pretty good emoting.
[ Pollen ]
Learning skills from human motions offers a promising path toward generalizable policies for whole-body humanoid control, yet two key cornerstones are missing: (1) a scalable, high-quality motion tracking framework that faithfully transforms kinematic references into robust, extremely dynamic motions on real hardware, and (2) a distillation approach that can effectively learn these motion primitives and compose them to solve downstream tasks. We address these gaps with BeyondMimic, a real-world framework to learn from human motions for versatile and naturalistic humanoid control via guided diffusion.
[ Hybrid Robotics ]
Introducing our open-source metal-made bipedal robot MEVITA. All components can be procured through e-commerce, and the robot is built with a minimal number of parts. All hardware, software, and learning environments are released as open source.
[ MEVITA ]
Thanks, Kento!
I’ve always thought that being able to rent robots (or exoskeletons) to help you move furniture or otherwise carry stuff would be very useful.
[ DEEP Robotics ]
A new study explains how tiny water bugs use fan-like propellers to zip across streams at speeds up to 120 body lengths per second. The researchers then created a similar fan structure and used it to propel and maneuver an insect-sized robot. The discovery offers new possibilities for designing small machines that could operate during floods or other challenging situations.
[ Georgia Tech ]
Dynamic locomotion of legged robots is a critical yet challenging topic in expanding the operational range of mobile robots. To achieve generalized legged locomotion on diverse terrains while preserving the robustness of learning-based controllers, this paper proposes to learn an attention-based map encoding conditioned on robot proprioception, which is trained as part of the end-to-end controller using reinforcement learning. We show that the network learns to focus on steppable areas for future footholds when the robot dynamically navigates diverse and challenging terrains.
[ Paper ] from [ ETH Zurich ]
In the fifth installment of our Moonshot Podcast Deep Dive video interview series, X’s Captain of Moonshots Astro Teller sits down with Google DeepMind’s Chief Scientist Jeff Dean for a conversation about the origin of Jeff’s pioneering work scaling neural networks. They discuss the first time AI captured Jeff’s imagination, the earliest Google Brain framework, the team’s stratospheric advancements in image recognition and speech-to-text, how AI is evolving, and more.
[ Moonshot Podcast ]
Evan Ackerman
@evanackerman.bsky.social
· Aug 22
Video Friday: Inaugural World Humanoid Robot Games Held
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN, CHINA
ACTUATE 2025: 23–24 September 2025, SAN FRANCISCO
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA
Enjoy today’s videos!
The First World Humanoid Robot Games Conclude Successfully! Unitree Strikes Four Golds (1500m, 400m, 100m Obstacle, 4×100m Relay).
[ Unitree ]
Steady! PNDbotics Adam has become the only full-size humanoid robot athlete to successfully finish the 100m Obstacle Race at the World Humanoid Robot Games!
[ PNDbotics ]
Introducing Field Foundation Models (FFMs) from FieldAI - a new class of “physics-first” foundation models built specifically for embodied intelligence. Unlike conventional vision or language models retrofitted for robotics, FFMs are designed from the ground up to grapple with uncertainty, risk, and the physical constraints of the real world. This enables safe and reliable robot behaviors when managing scenarios that they have not been trained on, navigating dynamic, unstructured environments without prior maps, GPS, or predefined paths.
[ Field AI ]
Multiply Labs, leveraging Universal Robots’ collaborative robots, has developed a groundbreaking robotic cluster that is fundamentally transforming the manufacturing of life-saving cell and gene therapies. The Multiply Labs solution drives a staggering 74% cost reduction and enables up to 100x more patient doses per square foot of cleanroom.
[ Universal Robots ]
In this video, we put Vulcan V3, the world’s first ambidextrous humanoid robotic hand capable of performing the full American Sign Language (ASL) alphabet, to the ultimate test—side by side with a real human!
[ Hackaday ]
Thanks, Kelvin!
More robots need to have this form factor.
[ Texas A&M University ]
Robotic vacuums are so pervasive now that it’s easy to forget how much of an icon the iRobot Roomba has been.
[ iRobot ]
This is quite possibly the largest robotic hand I’ve ever seen.
[ CAFE Project ] via [ BUILT ]
Modular robots built by Dartmouth researchers are finding their feet outdoors. Engineered to assemble into structures that best suit the task at hand, the robots are pieced together from cube-shaped robotic blocks that combine rigid rods and soft, stretchy strings whose tension can be adjusted to deform the blocks and control their shape.
[ Dartmouth ]
Our quadruped robot X30 has completed extreme-environment missions in Hoh Xil—supporting patrol teams, carrying vital supplies, and protecting fragile ecosystems.
[ DEEP Robotics ]
We propose a base-shaped robot named “koboshi” that moves everyday objects. This koboshi has a spherical surface in contact with the floor, and by moving a weight inside using built-in motors, it can rock up and down, and side to side. By placing everyday items on this koboshi, users can impart new movement to otherwise static objects. The koboshi is equipped with sensors to measure its posture, enabling interaction with users. Additionally, it has communication capabilities, allowing multiple units to communicate with each other.
[ Paper ]
Bi-LAT is the world’s first Vision-Language-Action (VLA) model that integrates bilateral control into imitation learning, enabling robots to adjust force levels based on natural language instructions.
[ Bi-LAT ] to be presented at [ IEEE RO-MAN 2025 ]
Thanks, Masato!
Look at this jaunty little guy!
Although they very obviously cut the video right before it smashes face-first into furniture, more than once.
[ Paper ] to be presented at [ 2025 IEEE-RAS International Conference on Humanoid Robotics ]
This research has been conducted at the Human Centered Robotics Lab at UT Austin. The video shows our latest experimental bipedal robot, dubbed Mercury, which has passive feet. This means that there are no actuated ankles, unlike humans, forcing Mercury to gain balance by dynamically stepping.
[ University of Texas at Austin Human Centered Robotics Lab ]
We put two RIVR delivery robots to work with an autonomous vehicle — showing how Physical AI can handle the full last mile, from warehouse to consumers’ doorsteps.
[ Rivr ]
The KR TITAN ultra is a high-performance industrial robot weighing 4.6 tonnes and capable of handling payloads up to 1.5 tonnes.
[ Kuka ]
CMU MechE’s Ding Zhao and Ph.D. student Yaru Niu describe LocoMan, a robotic assistant they have been developing.
[ Carnegie Mellon University ]
Twenty-two years ago, Silicon Valley executive Henry Evans had a massive stroke that left him mute and paralyzed from the neck down. But that didn’t prevent him from becoming a leading advocate of adaptive robotic tech to help disabled people – or from writing country songs, one letter at a time. Correspondent John Blackstone talks with Evans about his upbeat attitude and unlikely pursuits.
[ CBS News ]
Evan Ackerman
@evanackerman.bsky.social
· Aug 15
Video Friday: SCUTTLE
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN, CHINA
ACTUATE 2025: 23–24 September 2025, SAN FRANCISCO
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA
Enjoy today’s videos!
Check out our latest innovations on SCUTTLE, advancing multilegged mobility anywhere.
[ GCR ]
That laundry folding robot we’ve been working on for 15 years is still not here yet.
Honestly I think Figure could learn a few tricks from vintage UC Berkeley PR2, though:
[ Figure ]
Tensegrity robots are so cool, but so hard—it’s good to see progress.
[ Michigan Robotics ]
We should find out next week how quick this is.
[ Unitree ]
We introduce a methodology for task-specific design optimization of multirotor Micro Aerial Vehicles. By leveraging reinforcement learning, Bayesian optimization, and covariance matrix adaptation evolution strategy, we optimize aerial robot designs guided only by their closed-loop performance in a considered task. Our approach systematically explores the design space of motor pose configurations while ensuring manufacturability constraints and minimal aerodynamic interference. Results demonstrate that optimized designs achieve superior performance compared to conventional multirotor configurations in agile waypoint navigation tasks, including against fully actuated designs from the literature. We build and test one of the optimized designs in the real world to validate the sim2real transferability of our approach.
[ ARL ]
Thanks, Kostas!
I guess legs are required for this inspection application because of the stairs right at the beginning? But sometimes, that’s how the world is.
[ DEEP Robotics ]
The Institute of Robotics and Mechatronics at DLR has a long tradition in developing multi-fingered hands, creating novel mechatronic concepts as well as autonomous grasping and manipulation capabilities. The range of hands spans from Rotex, a first two-fingered gripper for space applications, to the highly anthropomorphic Awiwi Hand and variable stiffness end effectors. This video summarizes the developments of DLR in this field over the past 30 years, starting with the Rotex experiment in 1993.
[ DLR RM ]
The quest for agile quadrupedal robots is limited by handcrafted reward design in reinforcement learning. While animal motion capture provides 3D references, its cost prohibits scaling. We address this with a novel video-based framework. The proposed framework significantly advances robotic locomotion capabilities.
[ Arc Lab ]
Serious question: Why don’t humanoid robots sit down more often?
[ EngineAI ]
And now, this.
[ LimX Dynamics ]
NASA researchers are currently using wind tunnel and flight tests to gather data on a scaled-down electric vertical takeoff and landing (eVTOL) aircraft that resembles an air taxi, data that aircraft manufacturers can use for their own designs. By using a smaller version of a full-sized aircraft, called the RAVEN Subscale Wind Tunnel and Flight Test (RAVEN SWFT) vehicle, NASA is able to conduct its tests in a fast and cost-effective manner.
[ NASA ]
This video details the advances in orbital manipulation made by DLR’s Robotic and Mechatronics Center over the past 30 years, paving the way for the development of robotic technology for space sustainability.
[ DLR RM ]
This summer, a team of robots explored a simulated Martian landscape in Germany, remotely guided by an astronaut aboard the International Space Station. This marked the fourth and final session of the Surface Avatar experiment, a collaboration between ESA and the German Aerospace Center (DLR) to develop how astronauts can control robotic teams to perform complex tasks on the Moon and Mars.
[ ESA ]
Evan Ackerman
@evanackerman.bsky.social
· Jul 25
Video Friday: Skyfall Takes on Mars With Swarm Helicopter Concept
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN, CHINA
ACTUATE 2025: 23–24 September 2025, SAN FRANCISCO
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA
Enjoy today’s videos!
AeroVironment revealed Skyfall—a potential future mission concept for next-generation Mars Helicopters developed with NASA’s Jet Propulsion Laboratory (JPL) to help pave the way for human landing on Mars through autonomous aerial exploration.
The concept is heavily focused on rapidly delivering an affordable, technically mature solution for expanded Mars exploration that would be ready for launch by 2028. Skyfall is designed to deploy six scout helicopters on Mars, where they would explore many of the sites selected by NASA and industry as top candidate landing sites for America’s first Martian astronauts. While exploring the region, each helicopter can operate independently, beaming high-resolution surface imaging and sub-surface radar data back to Earth for analysis, helping ensure crewed vehicles make safe landings at areas with maximum amounts of water, ice, and other resources.
The concept would be the first to use the “Skyfall Maneuver”–an innovative entry, descent, and landing technique whereby the six rotorcraft deploy from their entry capsule during its descent through the Martian atmosphere. By flying the helicopters down to the Mars surface under their own power, Skyfall would eliminate the necessity for a landing platform–traditionally one of the most expensive, complex, and risky elements of any Mars mission.
[ AeroVironment ]
By far the best part of videos like these is watching the expressions on the faces of the students when their robot succeeds at something.
[ RaiLab ]
This is just a rendering of course, but the real thing should be showing up on August 6.
[ Fourier ]
Top performer in its class! Less than two weeks after its last release, MagicLab unveils another breakthrough — MagicDog-W, the wheeled quadruped robot. Cyber-flex, dominate all terrains!
[ MagicLab ]
Inspired by the octopus’s remarkable ability to wrap and grip with precision, this study introduces a vacuum-driven, origami-inspired soft actuator that mimics such versatility through self-folding design and high bending angles. Its crease-free, 3D-printable structure enables compact, modular robotics with enhanced grasping force—ideal for handling objects of various shapes and sizes using octopus-like suction synergy.
[ Paper ] via [ IEEE Transactions on Robotics ]
Thanks, Bram!
Is it a plane? Is it a helicopter? Yes.
[ Robotics and Intelligent Systems Laboratory, City University of Hong Kong ]
You don’t need wrist rotation as long as you have the right gripper.
[ Nature Machine Intelligence ]
ICRA 2026 will be in Vienna next June!
[ ICRA 2026 ]
Boing, boing, boing!
[ Robotics and Intelligent Systems Laboratory, City University of Hong Kong ]
ROBOTERA Unveils L7: Next-Generation Full-Size Bipedal Humanoid Robot with Powerful Mobility and Dexterous Manipulation!
[ ROBOTERA ]
Meet UBTECH New-Gen of Industrial Humanoid Robot—Walker S2 makes multiple industry-leading breakthroughs! Walker S2 is the world’s first humanoid robot to achieve 3-minute autonomous battery swapping and 24/7 continuous operation.
[ UBTECH ]
ARMstrong Dex is a human-scale dual-arm hydraulic robot developed by the Korea Atomic Energy Research Institute (KAERI) for disaster response. It can perform vertical pull-ups and manipulate loads over 50 kg, demonstrating strength beyond human capabilities. However, disaster environments also require agility and fast, precise movement. This test evaluated ARMstrong Dex’s ability to throw a 500 ml water bottle (0.5 kg) into a target container. The experiment assessed high-speed coordination, trajectory control, and endpoint accuracy, which are key attributes for operating in dynamic rescue scenarios.
[ KAERI ]
This is not a humanoid robot, it’s a data acquisition platform.
[ PNDbotics ]
Neat feature on this drone to shift the battery back and forth to compensate for movement of the arm.
[ Paper ] via [ Drones journal ]
As residential buildings become taller and more advanced, the demand for seamless and secure in-building delivery continues to grow. In high-end apartments and modern senior living facilities where couriers cannot access upper floors, robots like FlashBot Max are becoming essential. In this featured elderly care residence, FlashBot Max completes 80-100 deliveries daily, seamlessly navigating elevators, notifying residents upon arrival, and returning to its charging station after each delivery.
[ Pudu Robotics ]
“How to Shake Trees With Aerial Manipulators.”
[ GRVC ]
We see a future where seeing a cobot in a hospital delivering supplies feels as normal as seeing a tractor in a field. Watch our CEO Brad Porter share what robots moving in the world should feel like.
[ Cobot ]
Introducing the Engineered Arts UI for robot Roles, it’s now simple to set up a robot to behave exactly the way you want it to. We give a quick overview of customization for languages, personality, knowledge and abilities. All of this is done with no code. Just simple LLM prompts, drop down list selections and some switches to enable the features you need.
[ Engineered Arts ]
Unlike most quadrupeds, CARA doesn’t use any gears or pulleys. Instead, her joints are driven by rope through capstan drives. Capstan drives offer several advantages: zero backlash, high torque transparency, low inertia, low cost, and quiet operation. These qualities make them an ideal speed reducer for robotics.
[ CARA ]
Evan Ackerman
@evanackerman.bsky.social
· Jul 18
Video Friday: Robot Metabolism
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN, CHINA
ACTUATE 2025: 23–24 September 2025, SAN FRANCISCO
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA
Enjoy today’s videos!
Columbia University researchers introduce a process that allows machines to “grow” physically by integrating parts from their surroundings or from other robots, demonstrating a step towards self-sustaining robot ecologies.
[ Robot Metabolism ] via [ Columbia ]
We challenged ourselves to see just how far we could push Digit’s ability to stabilize itself in response to a disturbance. Utilizing state-of-the-art AI technology and robust physical intelligence, Digit can adapt to substantial disruptions, all without the use of visual perception.
[ Agility Robotics ]
We are presenting the Figure 03 (F.03) battery — a significant advancement in our core humanoid robot technology roadmap.
The effort that was put into safety for this battery is impressive. But I would note two things: the battery life is “5 hours of run time at peak performance” without saying what “peak performance” actually means, and 2-kilowatt fast charge still means over an hour to fully charge.
[ Figure ]
Well this is a nifty idea.
[ UBTECH ]
PAPRLE is a plug-and-play robotic limb environment for flexible configuration and control of robotic limbs across applications. With PAPRLE, users can use diverse configurations of leader-follower pairs for teleoperation. In the video, we show several teleoperation examples supported by PAPRLE.
[ PAPRLE ]
Thanks, Joohyung!
Always nice to see a robot with a carefully thought out commercial use case in which it can just do robot stuff like a robot.
[ Cohesive Robotics ]
Thanks, David!
We are interested in deploying autonomous legged robots in diverse environments, such as industrial facilities and forests. As part of the DigiForest project, we are working on new systems to autonomously build forest inventories with legged platforms, which we have deployed in the UK, Finland, and Switzerland.
[ Oxford ]
Thanks, Matias!
In this research we introduce a self-healing, biocompatible strain sensor using Galinstan and a Diels-Alder polymer, capable of restoring both mechanical and sensing functions after repeated damage. This highly stretchable and conductive sensor demonstrates strong performance metrics—including 80% mechanical healing efficiency and 105% gauge factor recovery—making it suitable for smart wearable applications.
[ Paper ]
Thanks, Bram!
The “Amazing Hand” from Pollen Robotics costs under $250.
[ Pollen ]
Welcome to our Unboxing Day! After months of waiting, our humanoid robot has finally arrived at Fraunhofer IPA in Stuttgart.
I used to take stretching classes from a woman who could do this backwards in 5.43 seconds.
[ Fraunhofer ]
At the Changchun stop of the VOYAGEX Music Festival on July 12, PNDbotics’ full-sized humanoid robot Adam took the stage as a keytar player with the famous Chinese musician Hu Yutong’s band.
[ PNDbotics ]
Material movement is the invisible infrastructure of hospitals, airports, cities–everyday life. We build robots that support the people doing this essential, often overlooked work. Watch our CEO Brad Porter reflect on what inspired Cobot.
[ Cobot ]
Yes please.
[ Pollen ]
I think I could get to the point of being okay with this living in my bathroom.
[ Paper ]
Thanks to its social perception, high expressiveness and out-of-the-box integration, TIAGo Head offers the ultimate human-robot interaction experience.
[ PAL Robotics ]
Sneak peek: Our No Manning Required Ship (NOMARS) Defiant unmanned surface vessel is designed to operate for up to a year at sea without human intervention. In-water testing is preparing it for an extended at-sea demonstration of reliability and endurance.
Excellent name for any ship.
[ DARPA ]
At the 22nd International Conference on Ubiquitous Robots (UR2025), high school student and robotics researcher Ethan Hong was honored as a Special Invited Speaker for the conference banquet and “Robots With Us” panel. In this heartfelt and inspiring talk, Ethan shares the story behind Food Angel — a food delivery robot he designed and built to support people experiencing homelessness in Los Angeles. Motivated by the growing crises of homelessness and food insecurity, Ethan asked a simple but profound question: “Why not use robots to help the unhoused?”
[ UR2025 ]
Evan Ackerman
@evanackerman.bsky.social
· Jul 11
Video Friday: Reachy Mini Brings Cute to Open-Source Robotics
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN, CHINA
ACTUATE 2025: 23–24 September 2025, SAN FRANCISCO
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA
Enjoy today’s videos!
Reachy Mini is an expressive, open-source robot designed for human-robot interaction, creative coding, and AI experimentation. Fully programmable in Python (and soon JavaScript, Scratch) and priced from $299, it’s your gateway into robotics AI: fun, customizable, and ready to be part of your next coding project.
I’m so happy that Pollen and Reachy found a home with Hugging Face, but I hope they understand that they are never, ever allowed to change that robot’s face. O-o
[ Reachy Mini ] via [ Hugging Face ]
General-purpose robots promise a future where household assistance is ubiquitous and aging in place is supported by reliable, intelligent help. These robots will unlock human potential by enabling people to shape and interact with the physical world in transformative new ways. At the core of this transformation are Large Behavior Models (LBMs) - embodied AI systems that take in robot sensor data and output actions. LBMs are pretrained on large, diverse manipulation datasets and offer the key to realizing robust, general-purpose robotic intelligence. Yet despite their growing popularity, we still know surprisingly little about what today’s LBMs actually offer - and at what cost. This uncertainty stems from the difficulty of conducting rigorous, large-scale evaluations in real-world robotics. As a result, progress in algorithm and dataset design is often guided by intuition rather than evidence, hampering progress. Our work aims to change that.
[ Toyota Research Institute ]
Kinisi Robotics is advancing the frontier of physical intelligence by developing AI-driven robotic platforms capable of high-speed, autonomous pick-and-place operations in unstructured environments. This video showcases Kinisi’s latest wheeled-base humanoid performing dexterous bin stacking and item sorting using closed-loop perception and motion planning. The system combines high-bandwidth actuation, multi-arm coordination, and real-time vision to achieve robust manipulation without reliance on fixed infrastructure. By integrating custom hardware with onboard intelligence, Kinisi enables scalable deployment of general-purpose robots in dynamic warehouse settings, pushing toward broader commercial readiness for embodied AI systems.
[ Kinisi Robotics ]
Thanks, Bren!
In this work, we develop a data collection system where human and robot data are collected and unified in a shared space, and propose a modularized cross-embodiment Transformer that is pretrained on human data and fine-tuned on robot data. This enables high data efficiency and effective transfer from human to quadrupedal embodiments, facilitating versatile manipulation skills for unimanual and bimanual, non-prehensile and prehensile, precise tool-use, and long-horizon tasks, such as cat litter scooping!
[ Human2LocoMan ]
Thanks, Yaru!
LEIYN is a quadruped robot equipped with an active waist joint. It achieves the world’s fastest chimney climbing through dynamic motions learned via reinforcement learning.
[ JSK Lab ]
Thanks, Keita!
Quadrupedal robots are really just bipedal robots that haven’t learned to walk on two legs yet.
[ Adaptive Robotic Controls Lab, University of Hong Kong ]
This study introduces a biomimetic self-healing module for tendon-driven legged robots that uses robot motion to activate liquid metal sloshing, which removes surface oxides and significantly enhances healing strength. Validated on a life-sized monopod robot, the module enables repeated squatting after impact damage, marking the first demonstration of active self-healing in high-load robotic applications.
[ University of Tokyo ]
Thanks, Kento!
That whole putting wheels on quadruped robots thing was a great idea that someone had way back when.
[ Pudu Robotics ]
I know nothing about this video except that it’s very satisfying and comes from a YouTube account that hasn’t posted in 6 years.
[ Young-jae Bae YouTube ]
Our AI WORKER now comes in a new Swerve Drive configuration, optimized for logistics environments. With its agile and omnidirectional movement, the swerve-type mobile base can efficiently perform various logistics tasks such as item transport, shelf navigation, and precise positioning in narrow aisles.
Wait, you can have a bimanual humanoid without legs? I am shocked.
[ ROBOTIS ]
I can’t tell whether I need an office assistant, or if I just need snacks.
[ PNDbotics ]
“MagicBot Z1: Atomic kinetic energy, the brave are fearless,” says the MagicBot website. Hard to argue with that!
[ MagicLab ]
We’re excited to announce our new HQ in Palo Alto [CA]. As we grow, consolidating our Sunnyvale [CA] and Moss [Norway] team under one roof will accelerate our speed to ramping production and getting NEO into homes near you.
I’m not entirely sure that moving from Norway to California is an upgrade, honestly.
[ 1X ]
Jim Kernan, Chief Product Officer at Engineered Arts, shares how they’re commercializing humanoid robots—blending AI, expressive design, and real-world applications to build trust and engagement.
[ Humanoids Summit ]
In the second installment of our Moonshot Podcast Deep Dive video interview series, X’s Captain of Moonshots Astro Teller sits down with André Prager, former Chief Engineer at Wing, for a conversation about the early days of Wing and how the team solved some of their toughest engineering challenges to develop simple, lightweight, inexpensive delivery drones that are now being used every day across three continents.
[ Moonshot Podcast ]
Evan Ackerman
@evanackerman.bsky.social
· Jun 27
Video Friday: This Quadruped Throws With Its Whole Body
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, SOUTH KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA
Enjoy today’s videos!
Throwing is a fundamental skill that enables robots to manipulate objects in ways that extend beyond the reach of their arms. We present a control framework that combines learning and model-based control for prehensile whole-body throwing with legged mobile manipulators. This work provides an early demonstration of prehensile throwing with quantified accuracy on hardware, contributing to progress in dynamic whole-body manipulation.
[ Paper ] from [ ETH Zurich ]
As it turns out, in many situations humanoid robots don’t necessarily need legs at all.
[ ROBOTERA ]
Picking-in-Motion is a brand new feature as part of Autopicker 2.0. Instead of remaining stationary while picking an item, Autopicker begins traveling toward its next destination immediately after retrieving a storage tote – completing the pick while on the move. The robot then drops off the first storage tote at an empty slot near the next pick location before collecting the next tote.
[ Brightpick ]
Thanks, Gilmarie!
I am pretty sure this is not yet real, but boy is it shiny.
[ SoftBank ] via [ RobotStart ]
Why use one thumb when you can instead use two thumbs?
[ TU Berlin ]
Kirigami offers unique opportunities for guided morphing by leveraging the geometry of the cuts. This work presents inflatable kirigami crawlers created by introducing cut patterns into heat-sealable textiles to achieve locomotion upon cyclic pneumatic actuation. We found that the kirigami actuators exhibit directional anisotropic friction properties when inflated, having higher friction coefficients against the direction of the movement, enabling them to move across surfaces with varying roughness. We further enhanced the functionality of inflatable kirigami actuators by introducing multiple channels and segments to create functional soft robotic prototypes with versatile locomotion capabilities.
[ Paper ] from [ SDU Soft Robotics ]
Lockheed Martin wants to get into the Mars Sample Return game for a mere US$3 billion.
[ Lockheed Martin ]
This is pretty gross and exactly what you want a robot to be doing: dealing with municipal solid waste.
[ ZenRobotics ]
Drag your mouse or move your phone to explore this 360-degree panorama provided by NASA’s Curiosity Mars rover. This view shows some of the rover’s first looks at a region that has only been viewed from space until now, and where the surface is crisscrossed with spiderweblike patterns.
[ NASA Jet Propulsion Laboratory ]
In case you were wondering, iRobot is still around.
[ iRobot ]
Legendary roboticist Cynthia Breazeal talks about the equally legendary Personal Robots Group at the MIT Media Lab.
[ MIT Personal Robots Group ]
In the first installment of our Moonshot Podcast Deep Dive video interview series, X’s Captain of Moonshots Astro Teller sits down with Sebastian Thrun, co-founder of the Moonshot Factory, for a conversation about the history of Waymo and Google X, the ethics of innovation, the future of AI, and more.
[ Google X, The Moonshot Factory ]
Evan Ackerman
@evanackerman.bsky.social
· Jun 23
How the Rubin Observatory Will Reinvent Astronomy
Night is falling on Cerro Pachón.
Stray clouds reflect the last few rays of golden light as the sun dips below the horizon. I focus my camera across the summit to the westernmost peak of the mountain. Silhouetted within a dying blaze of red and orange light looms the sphinxlike shape of the Vera C. Rubin Observatory.
“Not bad,” says William O’Mullane, the observatory’s deputy project manager, amateur photographer, and master of understatement. We watch as the sky fades through reds and purples to a deep, velvety black. It’s my first night in Chile. For O’Mullane, and hundreds of other astronomers and engineers, it’s the culmination of years of work, as the Rubin Observatory is finally ready to go “on sky.”
Rubin is unlike any telescope ever built. Its exceptionally wide field of view, extreme speed, and massive digital camera will soon begin the 10-year Legacy Survey of Space and Time (LSST) across the entire southern sky. The result will be a high-resolution movie of how our solar system, galaxy, and universe change over time, along with hundreds of petabytes of data representing billions of celestial objects that have never been seen before.
Stars begin to appear overhead, and O’Mullane and I pack up our cameras. It’s astronomical twilight, and after nearly 30 years, it’s time for Rubin to get to work.
On 23 June, the Vera C. Rubin Observatory released the first batch of images to the public. One of them, shown here, features a small section of the Virgo cluster of galaxies. Visible are two prominent spiral galaxies (lower right), three merging galaxies (upper right), several groups of distant galaxies, and many stars in the Milky Way galaxy. Created from over 10 hours of observing data, this image represents less than 2 percent of the field of view of a single Rubin image.
NSF-DOE Rubin Observatory
A second image reveals clouds of gas and dust in the Trifid and Lagoon nebulae, located several thousand light-years from Earth. It combines 678 images taken by the Rubin Observatory over just seven hours, revealing faint details—like nebular gas and dust—that would otherwise be invisible.
NSF-DOE Rubin Observatory
Engineering the Simonyi Survey Telescope
The top of Cerro Pachón is not a big place. Spanning about 1.5 kilometers at 2,647 meters of elevation, its three peaks are home to the Southern Astrophysical Research Telescope (SOAR), the Gemini South Telescope, and for the last decade, the Vera Rubin Observatory construction site. An hour’s flight north of the Chilean capital of Santiago, these foothills of the Andes offer uniquely stable weather. The Humboldt Current flows just offshore, cooling the surface temperature of the Pacific Ocean enough to minimize atmospheric moisture, resulting in some of the best “seeing,” as astronomers put it, in the world.
It’s a complicated but exciting time to be visiting. It’s mid-April of 2025, and I’ve arrived just a few days before “first photon,” when light from the night sky will travel through the completed telescope and into its camera for the first time. In the control room on the second floor, engineers and astronomers make plans for the evening’s tests. O’Mullane and I head up into a high bay that contains the silvering chamber for the telescope’s mirrors and a clean room for the camera and its filters. Increasingly exhausting flights of stairs lead to the massive pier on which the telescope sits, and then up again into the dome.
I suddenly feel very, very small. The Simonyi Survey Telescope towers above us—350 tonnes of steel and glass, nestled within the 30-meter-wide, 650-tonne dome. One final flight of stairs and we’re standing on the telescope platform. In its parked position, the telescope is pointed at the horizon, meaning that it’s looking straight at me as I step in front of it and peer inside.
The telescope’s enormous 8.4-meter primary mirror is so flawlessly reflective that it’s essentially invisible. Made of a single piece of low-expansion borosilicate glass covered in a 120-nanometer-thick layer of pure silver, the huge mirror acts as two different mirrors, with a more pronounced curvature toward the center. Standing this close means that different reflections of the mirrors, the camera, and the structure of the telescope all clash with one another in a way that shifts every time I move. I feel like if I can somehow look at it in just the right way, it will all make sense. But I can’t, and it doesn’t.
I’m rescued from madness by O’Mullane snapping photos next to me. “Why?” I ask him. “You see this every day, right?”
“This has never been seen before,” he tells me. “It’s the first time, ever, that the lens cover has been off the camera since it’s been on the telescope.” Indeed, deep inside the nested reflections I can see a blue circle, the r-band filter within the camera itself. As of today, it’s ready to capture the universe.
Rubin’s Wide View Unveils the Universe
Back down in the control room, I find director of construction Željko Ivezić. He’s just come up from the summit hotel, which has several dozen rooms for lucky visitors like myself, plus a few even luckier staff members. The rest of the staff commutes daily from the coastal town of La Serena, a 4-hour round trip.
To me, the summit hotel seems luxurious for lodgings at the top of a remote mountain. But Ivezić has a slightly different perspective. “The European-funded telescopes,” he grumbles, “have swimming pools at their hotels. And they serve wine with lunch! Up here, there’s no alcohol. It’s an American thing.” He’s referring to the fact that Rubin is primarily funded by the U.S. National Science Foundation and the U.S. Department of Energy’s Office of Science, which have strict safety requirements.
Originally, Rubin was intended to be a dark-matter survey telescope, to search for the 85 percent of the mass of the universe that we know exists but can’t identify. In the 1970s, astronomer Vera C. Rubin pioneered a spectroscopic method to measure the speed at which stars orbit around the centers of their galaxies, revealing motion that could be explained only by the presence of a halo of invisible mass at least five times the apparent mass of the galaxies themselves. Dark matter can warp the space around it enough that galaxies act as lenses, bending light from even more distant galaxies as it passes around them. It’s this gravitational lensing that the Rubin observatory was designed to detect on a massive scale. But once astronomers considered what else might be possible with a survey telescope that combined enormous light-collecting ability with a wide field of view, Rubin’s science mission rapidly expanded beyond dark matter.
Trading the ability to focus on individual objects for a wide field of view that can see tens of thousands of objects at once provides a critical perspective for understanding our universe, says Ivezić. Rubin will complement other observatories like the Hubble Space Telescope and the James Webb Space Telescope. Hubble’s Wide Field Camera 3 and Webb’s Near Infrared Camera have fields of view of less than 0.05 square degrees each, equivalent to just a few percent of the size of a full moon. The upcoming Nancy Grace Roman Space Telescope will see a bit more, with a field of view of about one full moon. Rubin, by contrast, can image 9.6 square degrees at a time—about 45 full moons’ worth of sky.
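The field-of-view comparison follows from the full moon’s angular size of roughly half a degree; a rough check (the 0.5-degree lunar diameter is an approximation):

```python
import math

# Rough field-of-view comparison; a full moon spans ~0.5 degrees on the sky.
moon_area_sq_deg = math.pi * (0.5 / 2) ** 2     # ~0.196 square degrees

fields_of_view = {
    "Hubble WFC3 / Webb NIRCam": 0.05,           # "less than 0.05 square degrees"
    "Nancy Grace Roman": moon_area_sq_deg,       # "about one full moon"
    "Rubin LSST Camera": 9.6,
}
for name, area in fields_of_view.items():
    print(f"{name:28s} {area:6.2f} sq deg  (~{area / moon_area_sq_deg:5.1f} full moons)")
# Rubin works out to ~49 full moons here, in line with the article's "about 45".
```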
That ultrawide view offers essential context, Ivezić explains. “My wife is American, but I’m from Croatia,” he says. “Whenever we go to Croatia, she meets many people. I asked her, ‘Did you learn more about Croatia by meeting many people very superficially, or because you know me very well?’ And she said, ‘You need both. I learn a lot from you, but you could be a weirdo, so I need a control sample.’ ” Rubin is providing that control sample, so that astronomers know just how weird whatever they’re looking at in more detail might be.
Every night, the telescope will take a thousand images, one every 34 seconds. After three or four nights, it’ll have the entire southern sky covered, and then it’ll start all over again. After a decade, Rubin will have taken more than 2 million images, generated 500 petabytes of data, and visited every object it can see at least 825 times. In addition to identifying an estimated 6 million bodies in our solar system, 17 billion stars in our galaxy, and 20 billion galaxies in our universe, Rubin’s rapid cadence means that it will be able to delve into the time domain, tracking how the entire southern sky changes on an almost daily basis.
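The survey cadence is easy to check with a little arithmetic; in the sketch below, the usable hours of darkness and observing nights per year are assumptions, while the 34-second cadence and 10-year span are from the article:

```python
# Rough check of Rubin's survey cadence.
seconds_per_image = 34
hours_of_darkness = 10        # assumption: usable observing hours per night
nights_per_year = 300         # assumption: allowing for weather and maintenance
survey_years = 10

images_per_night = hours_of_darkness * 3600 // seconds_per_image
total_images = images_per_night * nights_per_year * survey_years
print(f"~{images_per_night:,} images per night")     # ~1,058 -- matches "a thousand images"
print(f"~{total_images:,} images over the survey")   # ~3.2 million as an upper-ish bound;
# the article quotes "more than 2 million" once real-world downtime is accounted for.
```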
Cutting-Edge Technology Behind Rubin’s Speed
Achieving these science goals meant pushing the technical envelope on nearly every aspect of the observatory. But what drove most of the design decisions is the speed at which Rubin needs to move (3.5 degrees per second)—the phrase most commonly used by the Rubin staff is “crazy fast.”
Crazy fast movement is why the telescope looks the way it does. The squat arrangement of the mirrors and camera centralizes as much mass as possible. Rubin’s oversize supporting pier is mostly steel rather than mostly concrete so that the movement of the telescope doesn’t twist the entire pier. And then there’s the megawatt of power required to drive this whole thing, which comes from huge banks of capacitors slung under the telescope to prevent a brownout on the summit every 30 seconds all night long.
Rubin is also unique in that it utilizes the largest digital camera ever built. The size of a small car and weighing 2,800 kilograms, the LSST camera captures 3.2-gigapixel images through six swappable color filters ranging from near infrared to near ultraviolet. The camera’s focal plane consists of 189 4K-by-4K charge-coupled devices grouped into 21 “rafts.” Every CCD is backed by 16 amplifiers that each read 1 million pixels, bringing the readout time for the entire sensor down to 2 seconds flat.
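Those camera numbers are internally consistent, as a quick check using the figures quoted above shows:

```python
# LSST camera focal-plane arithmetic, from the figures in the article.
ccds = 189
ccd_side_px = 4096            # "4K-by-4K" CCDs
amps_per_ccd = 16
readout_seconds = 2

total_pixels = ccds * ccd_side_px ** 2
amplifiers = ccds * amps_per_ccd
print(f"Focal plane: ~{total_pixels / 1e9:.1f} gigapixels")        # ~3.2 Gpx
print(f"Amplifiers: {amplifiers:,}")                               # 3,024
print(f"Pixels per amplifier: ~{total_pixels // amplifiers:,}")    # ~1 million, as quoted
print(f"Readout rate per amplifier: ~{total_pixels / amplifiers / readout_seconds / 1e6:.1f} Mpx/s")
```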
Astronomy in the Time Domain
As humans with tiny eyeballs and short lifespans who are more or less stranded on Earth, we have only the faintest idea of how dynamic our universe is. To us, the night sky seems mostly static and also mostly empty. This is emphatically not the case.
In 1995, the Hubble Space Telescope pointed at a small and deliberately unremarkable part of the sky for a cumulative six days. The resulting image, called the Hubble Deep Field, revealed about 3,000 distant galaxies in an area that represented just one twenty-four-millionth of the sky. To observatories like Hubble, and now Rubin, the sky is crammed full of so many objects that it becomes a problem. As O’Mullane puts it, “There’s almost nothing not touching something.”
One of Rubin’s biggest challenges will be deblending—identifying and then separating things like stars and galaxies that appear to overlap. This has to be done carefully by using images taken through different filters to estimate how much of the brightness of a given pixel comes from each object.
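Conceptually, deblending amounts to assigning each pixel’s flux to the overlapping sources that contributed to it. The toy sketch below illustrates the idea as a least-squares fit; it is not Rubin’s actual deblender, and the Gaussian source profiles are purely illustrative:

```python
import numpy as np

# Toy deblending: model each pixel as a weighted sum of overlapping source profiles
# and solve for per-source fluxes. Illustrative only, not the LSST pipeline.
x = np.arange(50)

def profile(center, width=3.0):
    p = np.exp(-0.5 * ((x - center) / width) ** 2)
    return p / p.sum()

templates = np.stack([profile(20), profile(26)], axis=1)   # two overlapping sources
true_fluxes = np.array([1000.0, 400.0])
rng = np.random.default_rng(0)
blended = templates @ true_fluxes + rng.normal(0, 1.0, size=x.size)   # observed pixels

recovered, *_ = np.linalg.lstsq(templates, blended, rcond=None)
print("recovered fluxes:", recovered.round(1))   # close to [1000, 400]
```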
At first, Rubin won’t have this problem. At each location, the camera will capture one 30-second exposure before moving on. As Rubin returns to each location every three or four days, subsequent exposures will be combined in a process called coadding. In a coadded image, each pixel represents all of the data collected from that location in every previous image, which results in a much longer effective exposure time. The camera may record only a few photons from a distant galaxy in each individual image, but a few photons per image added together over 825 images yields much richer data. By the end of Rubin’s 10-year survey, the coadding process will generate images with as much detail as a typical Hubble image, but over the entire southern sky. A few lucky areas called “deep drilling fields” will receive even more attention, with each one getting a staggering 23,000 images or more.
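Coadding itself is conceptually simple. Here is a minimal sketch, assuming perfectly aligned exposures and ignoring calibration, PSF matching, and outlier rejection, all of which the real pipeline must handle:

```python
import numpy as np

# Minimal coadd: average N aligned exposures of the same faint source.
rng = np.random.default_rng(1)
n_exposures, shape = 825, (64, 64)
truth = np.zeros(shape)
truth[32, 32] = 1.0                      # a source far too faint to see in one exposure

exposures = truth + rng.normal(0, 5.0, size=(n_exposures, *shape))  # noisy single visits
coadd = exposures.mean(axis=0)           # pixel noise drops roughly as 1/sqrt(N)

single_noise = exposures[0].std()
coadd_noise = coadd[truth == 0].std()
print(f"single-visit SNR ~{truth[32, 32] / single_noise:.2f}, "
      f"coadded SNR ~{truth[32, 32] / coadd_noise:.1f}")
```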
Rubin will add every object that it detects to its catalog, and over time, the catalog will provide a baseline of the night sky, which the observatory can then use to identify changes. Some of these changes will be movement—Rubin may see an object in one place, and then spot it in a different place some time later, which is how objects like near-Earth asteroids will be detected. But the vast majority of the changes will be in brightness rather than movement.
Every image that Rubin collects will be compared with a baseline image, and any change will automatically generate a software alert within 60 seconds of when the image was taken. Rubin’s wide field of view means that there will be a lot of these alerts—on the order of 10,000 per image, or 10 million alerts per night. Other automated systems will manage the alerts. Called alert brokers, they ingest the alert streams and filter them for the scientific community. If you’re an astronomer interested in Type Ia supernovae, for example, you can subscribe to an alert broker and set up a filter so that you’ll get notified when Rubin spots one.
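The broker model is essentially a publish/subscribe filter over the alert stream. Here is a minimal sketch of the idea; the alert fields and classification labels are hypothetical, not the actual LSST alert schema:

```python
from typing import Iterable, Iterator

# Minimal alert-broker filter: pass along only the alerts a subscriber cares about.
# Field names ("classification", "brightness_change") are illustrative, not the real schema.
def type_ia_filter(alerts: Iterable[dict]) -> Iterator[dict]:
    for alert in alerts:
        if (alert.get("classification") == "SN Ia candidate"
                and alert.get("brightness_change", 0.0) > 0.5):
            yield alert

stream = [
    {"id": 1, "classification": "variable star", "brightness_change": 0.2},
    {"id": 2, "classification": "SN Ia candidate", "brightness_change": 1.4},
]
for hit in type_ia_filter(stream):
    print("notify subscriber:", hit["id"])
```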
Many of these alerts will be triggered by variable stars, which cyclically change in brightness. Rubin is also expected to identify somewhere between 3 million and 4 million supernovae—that works out to over a thousand new supernovae for every night of observing. And the rest of the alerts? Nobody knows for sure, and that’s why the alerts have to go out so quickly, so that other telescopes can react to make deeper observations of what Rubin finds.
Managing Rubin’s Vast Data Output After the data leaves Rubin’s camera, most of the processing will take place at the SLAC National Accelerator Laboratory in Menlo Park, Calif., over 9,000 kilometers from Cerro Pachón. It takes less than 10 seconds for an image to travel from the focal plane of the camera to SLAC, thanks to a 600-gigabit fiber connection from the summit to La Serena, and from there, a dedicated 100-gigabit line and a backup 40-gigabit line that connect to the Department of Energy’s science network in the United States. The 20 terabytes of data that Rubin will produce nightly makes this bandwidth necessary. “There’s a new image every 34 seconds,” O’Mullane tells me. “If I can’t deal with it fast enough, I start to get behind. So everything has to happen on the cadence of half a minute if I want to keep up with the data flow.”
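Some back-of-the-envelope arithmetic shows why the cadence drives the bandwidth requirement. The per-pixel size and nightly observing hours below are assumptions for illustration; the roughly 20-terabyte nightly figure presumably also includes calibration frames and derived data products on top of raw pixels.

```python
# Rough arithmetic on Rubin's nightly raw-pixel volume; these inputs are
# assumptions for illustration, not official figures.
pixels_per_image = 3.2e9
bytes_per_pixel = 2            # 16-bit raw pixels, ignoring headers and compression
cadence_s = 34                 # ~30 s exposure plus ~4 s to move and settle
hours_per_night = 10

images_per_night = hours_per_night * 3600 / cadence_s
raw_bytes_per_night = images_per_night * pixels_per_image * bytes_per_pixel
burst_gbit_s = pixels_per_image * bytes_per_pixel * 8 / 7 / 1e9  # one image in ~7 s

print(f"~{images_per_night:.0f} images per night")
print(f"~{raw_bytes_per_night / 1e12:.1f} TB of raw pixels per night (a lower bound)")
print(f"~{burst_gbit_s:.1f} Gbit/s burst to move one image in about 7 seconds")
```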
At SLAC, each image will be calibrated and cleaned up, including the removal of satellite trails. Rubin will see a lot of satellites, but since the satellites are unlikely to appear in the same place in every image, the impact on the data is expected to be minimal when the images are coadded. The processed image is compared with a baseline image and any alerts are sent out, by which time processing of the next image has already begun.
As Rubin’s catalog of objects grows, astronomers will be able to query it in all kinds of useful ways. Want every image of a particular patch of sky? No problem. All the galaxies of a certain shape? A little trickier, but sure. Looking for 10,000 objects that are similar in some dimension to 10,000 other objects? That might take a while, but it’s still possible. Astronomers can even run their own code on the raw data.
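Catalog queries like these typically go through standard astronomy protocols such as TAP, with queries written in ADQL, an SQL dialect. Here is a hypothetical example using the pyvo library; the service URL, table, and column names are placeholders rather than the Rubin Science Platform’s real schema, which also requires authentication.

```python
import pyvo

# Placeholder endpoint and schema, for illustration only.
TAP_URL = "https://data.example.org/api/tap"

query = """
SELECT objectId, ra, dec, g_mag, r_mag
FROM catalog.Object
WHERE CONTAINS(POINT('ICRS', ra, dec),
               CIRCLE('ICRS', 62.0, -37.0, 0.1)) = 1
  AND g_mag - r_mag > 0.8
"""

service = pyvo.dal.TAPService(TAP_URL)
results = service.search(query)   # synchronous query; would fail against the placeholder URL
for row in results:
    print(row["objectId"], row["ra"], row["dec"])
```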
“Pretty much everyone in the astronomy community wants something from Rubin,” O’Mullane explains, “and so they want to make sure that we’re treating the data the right way. All of our code is public. It’s on GitHub . You can see what we’re doing, and if you’ve got a better solution, we’ll take it.”
One better solution may involve AI. “I think as a community we’re struggling with how we do this,” says O’Mullane. “But it’s probably something we ought to do—curating the data in such a way that it’s consumable by machine learning, providing foundation models, that sort of thing.”
The data management system is arguably as much of a critical component of the Rubin observatory as the telescope itself. While most telescopes make targeted observations that get distributed to only a few astronomers at a time, Rubin will make its data available to everyone within just a few days, which is a completely different way of doing astronomy. “We’ve essentially promised that we will take every image of everything that everyone has ever wanted to see,” explains Kevin Reil , Rubin observatory scientist. “If there’s data to be collected, we will try to collect it. And if you’re an astronomer somewhere, and you want an image of something, within three or four days we’ll give you one. It’s a colossal challenge to deliver something on this scale.”
The more time I spend on the summit, the more I start to think that the science that we know Rubin will accomplish may be the least interesting part of its mission. And despite their best efforts, I get the sense that everyone I talk to is wildly understating the impact it will have on astronomy. The sheer volume of objects, the time domain, the 10 years of coadded data—what new science will all of that reveal? Astronomers have no idea, because we’ve never looked at the universe in this way before. To me, that’s the most fascinating part of what’s about to happen.
Reil agrees. “You’ve been here,” he says. “You’ve seen what we’re doing. It’s a paradigm shift, a whole new way of doing things. It’s still a telescope and a camera, but we’re changing the world of astronomy. I don’t know how to capture—I mean, it’s the people, the intensity, the awesomeness of it. I want the world to understand the beauty of it all.”
The Intersection of Science and Engineering Because nobody has built an observatory like Rubin before, there are a lot of things that aren’t working exactly as they should, and a few things that aren’t working at all. The most obvious of these is the dome. The capacitors that drive it blew a fuse the day before I arrived, and the electricians are off the summit for the weekend. The dome shutter can’t open either. Everyone I talk to takes this sort of thing in stride—they have to, because they’ve been troubleshooting issues like these for years.
I sit down with Yousuke Utsumi , a camera operations scientist who exudes the mixture of excitement and exhaustion that I’m getting used to seeing in the younger staff. “Today is amazingly quiet,” he tells me. “I’m happy about that. But I’m also really tired. I just want to sleep.”
Just yesterday, Utsumi says, they managed to finally solve a problem that the camera team had been struggling with for weeks—an intermittent fault in the camera cooling system that only seemed to happen when the telescope was moving. This was potentially a very serious problem, and Utsumi’s phone would alert him every time the fault occurred, over and over again in the middle of the night. The fault was finally traced to a cable within the telescope’s structure that used pins that were slightly too small, leading to a loose connection.
Utsumi’s contract started in 2017 and was supposed to last three years, but he’s still here. “I wanted to see first photon,” he says. “I’m an astronomer. I’ve been working on this camera so that it can observe the universe. And I want to see that light, from those photons from distant galaxies.” This is something I’ve also been thinking about—those lonely photons traveling through space for billions of years, and within the coming days, a lucky few of them will land on the sensors Utsumi has been tending, and we’ll get to see them. He nods, smiling. “I don’t want to lose one, you know?”
Rubin’s commissioning scientists have a unique role, working at the intersection of science and engineering to turn a bunch of custom parts into a functioning science instrument. Commissioning scientist Marina Pavlovic is a postdoc from Serbia with a background in the formation of supermassive black holes created by merging galaxies. “I came here last year as a volunteer,” she tells me. “My plan was to stay for three months, and 11 months later I’m a commissioning scientist. It’s crazy!”
Pavlovic’s job is to help diagnose and troubleshoot whatever isn’t working quite right. And since most things aren’t working quite right, she’s been very busy. “I love when things need to be fixed because I am learning about the system more and more every time there’s a problem—every day is a new experience here.”
I ask her what she’ll do next, once Rubin is up and running. “If you love commissioning instruments, that is something that you can do for the rest of your life, because there are always going to be new instruments,” she says.
Before that happens, though, Pavlovic has to survive the next few weeks of going on sky. “It’s going to be so emotional. It’s going to be the beginning of a new era in astronomy, and knowing that you did it, that you made it happen, at least a tiny percent of it, that will be a priceless moment.”
“I had to learn how to calm down to do this job,” she admits, “because sometimes I get too excited about things and I cannot sleep after that. But it’s okay. I started doing yoga, and it’s working.”
From First Photon to First Light My stay on the summit comes to an end on 14 April, just a day before first photon, so as soon as I get home I check in with some of the engineers and astronomers that I met to see how things went. Guillem Megias Homar manages the adaptive optics system—232 actuators that flex the surfaces of the telescope’s three mirrors a few micrometers at a time to bring the image into perfect focus. Currently working on his Ph.D., he was born in 1997, one year after the Rubin project started.
First photon, for him, went like this: “I was in the control room, sitting next to the camera team. We have a microphone on the camera, so that we can hear when the shutter is moving. And we hear the first click. And then all of a sudden, the image shows up on the screens in the control room, and it was just an explosion of emotions. All that we have been fighting for is finally a reality. We are on sky!” There were toasts (with sparkling apple juice, of course), and enough speeches that Megias Homar started to get impatient: “I was like, when can we start working? But it was only an hour, and then everything became much more quiet.”
Another newly released image showing a small section of the Rubin Observatory’s total view of the Virgo cluster of galaxies. Visible are bright stars in the Milky Way galaxy shining in the foreground, and many distant galaxies in the background.
NSF-DOE Rubin Observatory
“It was satisfying to see that everything that we’d been building was finally working,” Victor Krabbendam , project manager for Rubin construction, tells me a few weeks later. “But some of us have been at this for so long that first photon became just one of many firsts.” Krabbendam has been with the observatory full-time for the last 21 years. “And the very moment you succeed with one thing, it’s time to be doing the next thing.”
Since first photon, Rubin has been undergoing calibrations, collecting data for the first images that it’s now sharing with the world, and preparing to scale up to begin its survey. Operations will soon become routine, the commissioning scientists will move on, and eventually, Rubin will largely run itself, with just a few people at the observatory most nights.
But for astronomers, the next 10 years will be anything but routine. “It’s going to be wildly different,” says Krabbendam. “Rubin will feed generations of scientists with trillions of data points of billions of objects. Explore the data. Harvest it. Develop your idea, see if it’s there. It’s going to be phenomenal.”
Listen to a Conversation About the Rubin Observatory As part of an experiment with AI storytelling tools, author Evan Ackerman—who visited the Vera C. Rubin Observatory in Chile for four days this past April—fed over 14 hours of raw audio from his interviews and other reporting notes into NotebookLM , an AI-powered research assistant developed by Google. The result is a podcast-style audio experience that you can listen to here. While the script and voices are AI-generated, the conversation is grounded in Ackerman’s original reporting, and includes many details that did not appear in the article above. Ackerman reviewed and edited the audio to ensure accuracy, and there are minor corrections in the transcript. Let us know what you think of this experiment in AI narration.
See transcript
0:01: Today we’re taking a deep dive into the engineering marvel that is the Vera C. Rubin Observatory.
0:06: And it really is a marvel.
0:08: This project pushes the limits, you know, not just for the science itself, like mapping the Milky Way or exploring dark energy, which is amazing, obviously.
0:16: But it’s also pushing the limits in just building the tools, the technical ingenuity, the sheer human collaboration needed to make something this complex actually work.
0:28: That’s what’s really fascinating to me.
0:29: Exactly.
0:30: And our mission for this deep dive is to go beyond the headlines, isn’t it?
0:33: We want to uncover those specific, kind of hidden technical details, the stuff from the audio interviews, the internal docs that really define this observatory.
0:41: The clever engineering solutions.
0:43: Yeah, the nuts and bolts, the answers to challenges nobody’s faced before, stuff that anyone who appreciates, you know, complex systems engineering would find really interesting.
0:53: Definitely.
0:54: So let’s start right at the heart of it.
0:57: The Simonyi survey telescope itself.
1:00: It’s this 350 ton machine inside a 600 ton dome, 30 m wide, huge. [The dome is closer to 650 tons.]
1:07: But the really astonishing part is its speed, speed and precision.
1:11: How do you even engineer something that massive to move that quickly while keeping everything stable down to the submicron level? [Micron level is more accurate.]
1:18: Well, that’s, that’s the core challenge, right?
1:20: This telescope, it can hit a top speed of 3.5 degrees per second.
1:24: Wow.
1:24: Yeah, and it can, you know, move to basically any point in the sky.
1:28: In under 20 seconds, 20 seconds, which makes it by far the fastest moving large telescope ever built, and the dome has to keep up.
1:36: So it’s also the fastest moving dome.
1:38: So the whole building is essentially racing along with the telescope.
1:41: Exactly.
1:41: And achieving that meant pretty much every component had to be custom designed, like the pier holding the telescope up.
1:47: It’s mostly steel, not concrete.
1:49: Oh, interesting.
1:50: Why steel?
1:51: Specifically to stop it from twisting or vibrating when the telescope makes those incredibly fast moves.
1:56: Concrete just wouldn’t handle the torque the same way. [The pier is more steel than concrete, but it's still substantially concrete.]
1:59: OK, that makes sense.
1:59: And the power needed to accelerate and decelerate, you know, 300 tons, that must be absolutely massive.
2:06: Oh.
2:06: The instantaneous draw would be enormous.
2:09: How did they manage that without, like, dimming the lights on the whole
2:12: mountaintop every 30 seconds?
2:14: Yeah, that was a real concern, constant brownouts.
2:17: The solution was actually pretty elegant, involving these onboard capacitor banks.
2:22: Yep, slung right underneath the telescope structure.
2:24: They can slowly sip power from the grid, store it up over time, and then bam, discharge it really quickly for those big acceleration surges.
2:32: Like a giant camera flash, but for moving a telescope, yeah.
2:36: It smooths out the demand, preventing those grid disruptions.
2:40: Very clever engineering.
2:41: And beyond the movement, the mirrors themselves, equally critical, equally impressive, I imagine.
2:47: How did they tackle designing and making optics that large and precise?
2:51: Right, so the main mirror, the primary mirror, M1M3.
2:55: It’s a single piece of glass, 8.4 m across, low expansion borosilicate glass.
3:01: And that 8.4 m size, was that just like the biggest they could manage?
3:05: Well, it was a really crucial early decision.
3:07: The science absolutely required something at least 7 or 8 m wide.
3:13: But going much bigger, say 10 or 12 m, the logistics became almost impossible.
3:19: The big one was transport.
3:21: There’s a tunnel on the mountain road up to the summit, and a mirror, much larger than 8.4 m, physically wouldn’t fit through it.
3:28: No way.
3:29: So the tunnel actually set an upper limit on the mirror size.
3:31: Pretty much, yeah.
3:32: Building a new road or some other complex transport method.
3:36: It would have added enormous cost and complexity.
3:38: So 8.4 m was that sweet spot between scientific need.
3:42: And, well, physical reality.
3:43: Wow, a real world constraint driving fundamental design.
3:47: And the mirror itself, you said M1 M3, it’s not just one simple mirror surface.
3:52: Correct.
3:52: It’s technically two mirror surfaces ground into that single piece of glass.
3:57: The central part has a more pronounced curvature.
3:59: It’s M1 and M3 combined.
4:00: OK, so fabricating that must have been tricky, especially with what, 10 tons of glass just in the center.
4:07: Oh, absolutely novel and complicated.
4:09: And these mirrors, they don’t support their own weight rigidly.
4:12: So just handling them during manufacturing, polishing, even getting them out of the casting mold, was a huge engineering challenge.
4:18: You can’t just lift it like a dinner plate.
4:20: Not quite, and then there’s maintaining it, re-silvering.
4:24: They hope to do it every 5 years.
4:26: Well, traditionally, big mirrors like this often need it more, like every 1.5 to 2 years, and it’s a risky weeks-long job.
4:34: You have to unbolt this priceless, unique piece of equipment, move it.
4:39: It’s nerve-wracking.
4:40: I bet.
4:40: And the silver coating itself is tiny, right?
4:42: Incredibly thin, just a few nanometers of pure silver.
4:46: It takes about 24 g for the whole giant surface, bonded with the adhesive layers that are measured in Angstroms. [It's closer to 26 grams of silver.]
4:52: It’s amazing precision.
4:54: So tying this together, you have this fast moving telescope, massive mirrors.
4:59: How do they keep everything perfectly focused, especially with multiple optical elements moving relative to each other?
5:04: that’s where these things called hexapods come in.
5:08: Really crucial bits of kit.
5:09: Hexapods, like six feet?
5:12: Sort of.
5:13: They’re mechanical systems with 6 adjustable arms or struts.
5:17: A simpler telescope might just have one, maybe on the camera for basic focusing, but Rubin needs more because it’s got the 3 mirrors plus the camera.
5:25: Exactly.
5:26: So there’s a hexapod mounted on the secondary mirror, M2.
5:29: Its job is to keep M2 perfectly positioned relative to M1 and M3, compensating for tiny shifts or flexures.
5:36: And then there’s another hexapod on the camera itself.
5:39: That one adjusts the position and tilt of the entire camera’s sensor plane, the focal plane.
5:43: To get that perfect focus across the whole field of view.
5:46: And these hexapods move in 6 ways.
5:48: Yep, 6 degrees of freedom.
5:50: They can adjust position along the X, Y, and Z axis, and they can adjust rotation or tilt around those 3 axes as well.
5:57: It allows for incredibly fine adjustments, micron-precision stuff.
6:00: So they’re constantly making these tiny tweaks as the telescope moves.
6:04: Constantly.
6:05: The active optics system uses them.
6:07: It calculates the needed corrections based on reference stars in the images, figures out how the mirror might be slightly bending.
6:13: And then tells the hexapods how to compensate.
6:15: It’s controlling like 26 g of silver coating on the mirror surface down to micron precision, using the mirror’s own natural bending modes.
6:24: It’s pretty wild.
6:24: Incredible.
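[A sketch of the general flavor of that correction step, not Rubin’s actual pipeline: measured image-quality errors are related to hexapod and mirror-bending adjustments through a sensitivity matrix, and the system solves for the adjustments that best cancel the errors. Everything below is illustrative toy data.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensitivity matrix: how each of 10 degrees of freedom
# (hexapod axes, a few mirror bending modes) shifts each of 20 measured
# wavefront/image-quality terms. In a real system this comes from optical
# modeling and calibration, not random numbers.
n_errors, n_actuators = 20, 10
A = rng.normal(size=(n_errors, n_actuators))

# Measured image-quality errors derived from reference stars in an exposure.
measured = rng.normal(scale=0.5, size=n_errors)

# Solve for the corrections that best cancel the measured errors:
# minimize ||A @ corrections + measured||.
corrections, *_ = np.linalg.lstsq(A, -measured, rcond=None)

residual = A @ corrections + measured
print("rms error before:", np.sqrt(np.mean(measured**2)))
print("rms error after: ", np.sqrt(np.mean(residual**2)))
```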
6:25: OK, let’s pivot to the camera itself.
6:28: The LSST camera.
6:29: Biggest digital camera ever built, right?
6:31: Size of a small car, 2800 kg, captures 3.2 gigapixel images, just staggering numbers.
6:38: They really are, and the engineering inside is just as staggering.
6:41: That focal plane where the light actually hits.
6:43: It’s made up of 189 individual CCD sensors.
6:47: Yep, 4K by 4K CCDs grouped into 21 rafts.
6:50: They fit together like tiles, and each CCD has 16 amplifiers reading it out.
6:54: Why so many amplifiers?
6:56: Speed.
6:56: Each amplifier reads out about a million pixels.
6:59: By dividing the job up like that, they can read out the entire 3.2 gigapixel sensor in just 2 seconds.
7:04: 2 seconds for that much data.
7:05: Wow.
7:06: It’s essential for the survey’s rapid cadence.
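[The arithmetic behind that parallel readout, as a quick sanity check; the per-CCD pixel count is the nominal 4K-by-4K figure mentioned above, so treat these numbers as approximate.]

```python
# Rough numbers for the LSST camera readout; approximate, for illustration.
ccds = 189
amplifiers_per_ccd = 16
pixels_per_ccd = 4096 * 4096          # "4K by 4K" sensors
readout_time_s = 2.0

total_amplifiers = ccds * amplifiers_per_ccd
total_pixels = ccds * pixels_per_ccd
pixels_per_amplifier = pixels_per_ccd / amplifiers_per_ccd

print(f"{total_amplifiers} amplifiers, ~{total_pixels / 1e9:.1f} gigapixels total")
print(f"~{pixels_per_amplifier / 1e6:.0f} million pixels per amplifier")
print(f"per-amplifier rate ~{pixels_per_amplifier / readout_time_s / 1e6:.1f} Mpix/s")
```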
7:09: Getting all those 189 CCDs perfectly flat must have been, I mean, are they delicate?
7:15: Unbelievably delicate.
7:16: They’re silicon wafers only 100 microns thick.
7:18: How thick is that really?
7:19: about the thickness of a human hair.
7:22: You could literally break one by breathing on it wrong, apparently, seriously, yeah.
7:26: And the challenge was aligning all 189 of them across this 650 millimeter wide focal plane, so the entire surface is flat.
7:34: To within just 24 microns, peak to valley.
7:37: 24 microns.
7:39: That sounds impossibly flat.
7:40: It’s like, imagine the entire United States.
7:43: Now imagine the difference between the lowest point and the highest point across the whole country was only 100 ft.
7:49: That’s the kind of relative flatness they achieved on the camera sensor.
7:52: OK, that puts it in perspective.
7:53: And why is that level of flatness so critical?
7:56: Because the telescope focuses light terribly fast.
7:58: It’s an F1.2 system, which means it has a very shallow depth of field.
8:02: If the sensors aren’t perfectly in that focal plane, even by a few microns, parts of the image go out of focus.
8:08: Gotcha.
8:08: And the pixels themselves, the little light buckets on the CCDs, are they special?
8:14: They’re custom made, definitely.
8:16: They settled on 10 micron pixels.
8:18: They figured anything smaller wouldn’t actually give them more useful scientific information.
8:23: Because you start hitting the limits of what the atmosphere and the telescope optics themselves can resolve.
8:28: So 10 microns was the optimal size, right?
8:31: balancing sensor tech with physical limits.
8:33: Now, keeping something that sensitive cool, that sounds like a nightmare, especially with all those electronics.
8:39: Oh, it’s a huge thermal engineering challenge.
8:42: The camera actually has 3 different cooling zones, 3 distinct temperature levels inside.
8:46: 3.
8:47: OK.
8:47: First, the CCDs themselves.
8:49: They need to be incredibly cold to minimize noise.
8:51: They operate at -125 °C.
8:54: -125C, how do they manage that?
8:57: With a special evaporator plate connected to the CCD rafts by flexible copper braids, which pulls heat away very effectively.
9:04: Then you’ve got the camera’s electronics, the readout boards and stuff.
9:07: They run cooler than room temp, but not that cold, around -50 °C.
9:12: OK.
9:12: That requires a separate liquid cooling loop delivered through these special vacuum insulated tubes to prevent heat leaks.
9:18: And the third zone.
9:19: That’s for the electronics in the utility trunk at the back of the camera.
9:23: They generate a fair bit of heat, about 3000 watts, like a few hair dryers running constantly.
9:27: Exactly.
9:28: So there’s a third liquid cooling system just for them, keeping them just slightly below the ambient room temperature in the dome.
9:35: And all this cooling, it’s not just to keep the parts from overheating, right?
9:39: It affects the images, absolutely critical for image quality.
9:44: If the outer surface of the camera body itself is even slightly warmer or cooler than the air inside the dome, it creates tiny air currents, turbulence right near the light path.
9:57: And that shows up as little wavy distortions in the images, messing up the precision.
10:02: So even the outside temperature of the camera matters.
10:04: Yep, it’s not just a camera.
10:06: They even have to monitor the heat generated by the motors that move the massive dome, because that heat could potentially cause enough air turbulence inside the dome to affect the image quality too.
10:16: That’s incredible attention to detail, and the camera interior is a vacuum you mentioned.
10:21: Yes, a very strong vacuum.
10:23: They pump it down about once a year, first using turbopumps spinning at like 80,000 RPM to get it down to about 10⁻² torr.
10:32: Then they use other methods to get it down much further.
10:34: Down to 10⁻⁷ torr, that’s an ultra-high vacuum.
10:37: Why the vacuum?
10:37: To keep frost off the cold parts?
10:39: Exactly.
10:40: It prevents condensation and frost on those -125 °C CCDs and generally ensures everything works optimally.
10:47: For normal operation, day to day, they use something called an ion pump.
10:51: How does that work?
10:52: It basically uses a strong electric field to ionize any stray gas molecules, mostly hydrogen, and trap them, effectively removing them from the vacuum space, very efficient for maintaining that ultra-high vacuum.
11:04: OK, so we have this incredible camera taking these massive images every few seconds.
11:08: Once those photons hit the CCDs and become digital signals, What happens next?
11:12: How does Rubin handle this absolute flood of data?
11:15: Yeah, this is where Rubin becomes, you know, almost as much a data processing machine as a telescope.
11:20: It’s designed for the data output.
11:22: So photons hit the CCDs, get converted to electrical signals.
11:27: Then, interestingly, they get converted back into light signals, photonic signals back to light.
11:32: Why?
11:33: To send them over fiber optics.
11:34: They’re about 6 kilometers of fiber optic cable running through the observatory building.
11:39: These signals go to FPGA boards, field programmable gate arrays in the data acquisition system.
11:46: OK.
11:46: And those FPGAs are basically assembling the complete image data packages from all the different CCDs and amplifiers.
11:53: That sounds like a fire hose of data leaving the camera.
11:56: How does it get off the mountain and where does it need to go?
11:58: And what about all the like operational data, temperatures, positions?
12:02: Good question.
12:03: There are really two main data streams. All that telemetry you mentioned, sensor readings, temperatures, actuator positions, commands sent, everything about the state of the observatory, all gets collected into something called the Engineering Facility Database, or EFD.
12:16: They use Kafka for transmitting that data.
12:18: It’s good for high volume streams, and they store it in InfluxDB, which is great for time series data like sensor readings.
12:26: And astronomers can access that.
12:28: Well, there’s actually a duplicate copy of the EFD down at SLAC, the research center in California.
12:34: So scientists and engineers can query that copy without bogging down the live system running on the mountain.
12:40: Smart.
12:41: How much data are we talking about there?
12:43: For the engineering data, it’s about 20 gigabytes per night, and they plan to keep about a year’s worth online.
12:49: OK.
12:49: And the image data, the actual science pixels.
12:52: That takes a different path. [All of the data from Rubin to SLAC travels over the same network.]
12:53: It travels over dedicated high-speed network links, part of ESnet, the research network, all the way from Chile, usually via Boca Raton, Florida, then Atlanta, before finally landing at SLAC.
13:05: And how fast does that need to be?
13:07: The goal is super fast.
13:09: They aim to get every image from the telescope in Chile to the data center at SLAC within 7 seconds of the shutter closing.
13:15: 7 seconds for gigabytes of data.
13:18: Yeah.
13:18: Sometimes network traffic bumps it up to maybe 30 seconds or so, but the target is 7.
13:23: It’s crucial for the next step, which is making sense of it all.
13:27: How do astronomers actually use this, this torrent of images and data?
13:30: Right.
13:31: This really changes how astronomy might be done.
13:33: Because Rubin is designed to generate alerts, real-time notifications about changes in the sky.
13:39: Alerts like, hey, something just exploded over here.
13:42: Pretty much.
13:42: It takes an image, compares it to the previous images of the same patch of sky, and identifies anything that’s changed, appeared, disappeared, moved, gotten brighter, or fainter.
13:53: It expects to generate about 10,000 such alerts per image.
13:57: 10,000 per image, and they take an image every 20 seconds or so on average, including readouts. [Images are taken every 34 seconds: a 30 second exposure, and then about 4 seconds for the telescope to move and settle.]
14:03: So you’re talking around 10 million alerts every single night.
14:06: 10 million a night.
14:07: Yep.
14:08: And the goal is to get those alerts out to the world within 60 seconds of the image being taken.
14:13: That’s insane.
14:14: What’s in an alert?
14:15: It contains the object’s position, brightness, how it’s changed, and little cutout images, postage stamps, plus the last 12 months of observations, so astronomers can quickly see the history.
14:24: But surely not all 10 million are real astronomical events satellites, cosmic rays.
14:30: Exactly.
14:31: The observatory itself does a first pass filter, masking out known issues like satellite trails, cosmic ray hits, and atmospheric effects, with what they call real/bogus classification.
14:41: OK.
14:42: Then, this filtered stream of potentially real alerts goes out to external alert brokers.
14:49: These are systems run by different scientific groups around the world.
14:52: Yeah, and what did the brokers do?
14:53: They ingest the huge stream from Rubin and apply their own filters, based on what their particular community is interested in.
15:00: So an astronomer studying supernovae can subscribe to a broker that filters just for likely supernova candidates.
15:06: Another might filter for near Earth asteroids or specific types of variable stars.
15:12: so it makes the fire hose manageable.
15:13: You subscribe to the trickle you care about.
15:15: Precisely.
15:16: It’s a way to distribute the discovery potential across the whole community.
15:19: So it’s not just raw images astronomers get, but these alerts and presumably processed data too.
15:25: Oh yes.
15:26: Rubin provides the raw images, but also fully processed images, corrected for instrument effects and calibrated, called processed visit images.
15:34: And also template images, deep combinations of previous images used for comparison.
15:38: And managing all that data, 15 petabytes you mentioned, how do you query that effectively?
15:44: They use a system called Keyserve. [The system is "QServ."]
15:46: It’s a distributed relational database, custom built basically, designed to handle these enormous astronomical catalogs.
15:53: The goal is to let astronomers run complex searches across maybe 15 petabytes of catalog data and get answers back in minutes, not days or weeks.
16:02: And how do individual astronomers actually interact with it?
16:04: Do they download petabytes?
16:06: No, definitely not.
16:07: For general access, there’s a science platform, the front end of which runs on Google Cloud.
16:11: Users interact mainly through Jupyter notebooks.
16:13: Python notebooks, familiar territory for many scientists.
16:17: Exactly.
16:18: They can write arbitrary Python code, access the catalogs directly, do analysis for really heavy duty stuff like large scale batch processing.
16:27: They can submit jobs to the big compute cluster at SLAC, which sits right next to the data storage.
16:33: That’s much more efficient.
16:34: Have they tested this?
16:35: Can it handle thousands of astronomers hitting it at once?
16:38: They’ve done extensive testing, yeah, scaled it up with hundreds of users already, and they seem confident they can handle up to maybe 3000 simultaneous users without issues.
16:49: And a key point.
16:51: After an initial proprietary period for the main survey team, all the data and importantly, all the software algorithms used to process it become public.
17:00: Open source algorithms too.
17:01: Yes, the idea is, if the community can improve on their processing pipelines, they’re encouraged to contribute those solutions back.
17:08: It’s meant to be a community resource.
17:10: That open approach is fantastic, and even the way the images are presented visually has some deep thought behind it, doesn’t it?
17:15: You mentioned Robert Lupton’s perspective.
17:17: Yes, this is fascinating.
17:19: It’s about how you assign color to astronomical images, which usually combine data from different filters, like red, green, blue.
17:28: It’s not just about making pretty pictures, though they can be beautiful.
17:31: Right, it should be scientifically meaningful.
17:34: Exactly.
17:35: Lupton’s approach tries to preserve the inherent color information in the data.
17:40: Many methods saturate bright objects, making their centers just white blobs.
17:44: Yeah, you see that a lot.
17:46: His algorithm uses a different mathematical scaling, more like a logarithmic scale, that avoids this saturation.
17:52: It actually propagates the true color information back into the centers of bright stars and galaxies.
17:57: So, a galaxy that’s genuinely redder, because it’s red shifted, will actually look redder in the image, even in its bright core.
18:04: Precisely, in a scientifically meaningful way.
18:07: Even if our eyes wouldn’t perceive it quite that way directly through a telescope, the image renders the data faithfully.
18:13: It helps astronomers visually interpret the physics.
18:15: It’s a subtle but powerful detail in making the data useful.
18:19: It really is.
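[That scaling is available in the astropy package as make_lupton_rgb, which implements the asinh-based method developed by Robert Lupton and collaborators. The synthetic images, filter-to-channel mapping, and stretch parameters below are arbitrary choices for illustration.]

```python
import numpy as np
from astropy.visualization import make_lupton_rgb

rng = np.random.default_rng(2)

# Synthetic stand-ins for images taken through three different filters;
# a bright "galaxy" sits in the middle of a noisy background.
y, x = np.mgrid[0:256, 0:256]
galaxy = 500.0 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / (2 * 15.0**2))
i_band = galaxy * 1.4 + rng.normal(scale=2.0, size=galaxy.shape)  # redder
r_band = galaxy * 1.0 + rng.normal(scale=2.0, size=galaxy.shape)
g_band = galaxy * 0.6 + rng.normal(scale=2.0, size=galaxy.shape)  # bluer

# The asinh stretch keeps color information in the bright core instead of
# saturating it to white; stretch and Q control how the scaling behaves.
rgb = make_lupton_rgb(i_band, r_band, g_band, stretch=50, Q=8)
print(rgb.shape, rgb.dtype)   # (256, 256, 3), uint8
```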
18:20: Beyond just taking pictures, I heard Ruben’s wide view is useful for something else entirely gravitational waves.
18:26: That’s right.
18:26: It’s a really cool synergy.
18:28: Gravitational wave detectors like LIGO and Virgo, they detect ripples in space-time, often from merging black holes or neutron stars, but they usually only narrow down the location to a relatively large patch of sky, maybe 10 square degrees or sometimes much more.
18:41: Rubin’s camera has a field of view of about 9.6 square degrees.
18:45: That’s huge for a telescope.
18:47: It almost perfectly matches the typical LIGO alert area.
18:51: so when LIGO sends an alert, Rubin can quickly scan that whole error box, maybe taking just a few pointings, looking for any new point of light.
19:00: The optical counterpart, the kilonova explosion, or whatever light accompanies the gravitational wave event.
19:05: It’s a fantastic follow-up machine.
19:08: Now, stepping back a bit, this whole thing sounds like a colossal integration challenge.
19:13: A huge system of systems, many parts custom built, pushed to their limits.
19:18: What were some of those big integration hurdles, bringing it all together?
19:22: Yeah, classic system of systems is a good description.
19:25: And because nobody’s built an observatory quite like this before, a lot of the commissioning phase, getting everything working together involves figuring out the procedures as they go.
19:34: Learning by doing on a massive scale.
19:36: Pretty much.
19:37: They’re essentially, you know, teaching the system how to walk.
19:40: And there’s this constant tension, this balancing act.
19:43: Do you push forward, maybe build up some technical debt, things you know you’ll have to fix later, or do you stop and make sure every little issue is 100% perfect before moving on, especially with a huge distributed team?
19:54: I can imagine.
19:55: And you mentioned the dome motors earlier.
19:57: That discovery about heat affecting images sounds like a perfect example of unforeseen integration issues.
20:03: Exactly.
20:03: Marina Pavlovic described that.
20:05: They ran the dome motors at full speed, something maybe nobody had done for extended periods in that exact configuration before, and realized, huh.
20:13: The heat these generate might actually cause enough air turbulence to mess with our image quality.
20:19: That’s the kind of thing you only find when you push the integrated system.
20:23: Lots of unexpected learning then.
20:25: What about interacting with the outside world?
20:27: Other telescopes, the atmosphere itself?
20:30: How does Rubin handle atmospheric distortion, for instance?
20:33: that’s another interesting point.
20:35: Many modern telescopes use lasers.
20:37: They shoot a laser up into the sky to create an artificial guide star, right, to measure
20:42: atmospheric turbulence.
20:43: Exactly.
20:44: Then they use deformable mirrors to correct for that turbulence in real time.
20:48: But Rubin cannot use a laser like that.
20:50: Why?
20:51: Because its field of view is enormous.
20:53: It sees such a wide patch of sky at once.
20:55: A single laser beam, even a pinpoint from another nearby observatory, would contaminate a huge fraction of Rubin’s image.
21:03: It would look like a giant streak across, you know, a quarter of the sky for Rubin.
21:06: Oh, wow.
21:07: OK.
21:08: Too much interference.
21:09: So how does it correct for the atmosphere?
21:11: Software.
21:12: It uses a really clever approach called forward modeling.
21:16: It looks at the shapes of hundreds of stars across its wide field of view in each image.
21:21: It knows what those stars should look like, theoretically.
21:25: Then it builds a complex mathematical model of the atmosphere’s distorting effect across the entire field of view that would explain the observed star shapes.
21:33: It iterates this model hundreds of times per image until it finds the best fit. [The model is created by iterating on the image data, but iteration is not necessary for every image.]
21:38: Then it uses that model to correct the image, removing the atmospheric blurring.
21:43: So it calculates the distortion instead of measuring it directly with a laser.
21:46: Essentially, yes.
21:48: Now, interestingly, there is an auxiliary telescope built alongside Rubin, specifically designed to measure atmospheric properties independently.
21:55: Oh, so they could use that data.
21:57: They could, but currently, they’re finding their software modeling approach using the science images themselves, works so well that they aren’t actively incorporating the data from the auxiliary telescope for that correction right now.
22:08: The software solution is proving powerful enough on its own.
22:11: Fascinating.
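[A loose sketch of the spirit of that forward-modeling approach, not Rubin’s actual code: fit a smooth model of the distortion across the field from the measured shapes of reference stars, then evaluate that model anywhere in the image. The toy version below fits a low-order polynomial to simulated per-star PSF widths.]

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical measurements: positions of reference stars across the field
# (in degrees) and the measured PSF width at each, which varies smoothly
# because of the atmosphere, plus some per-star noise.
n_stars = 500
x = rng.uniform(-1.75, 1.75, n_stars)
y = rng.uniform(-1.75, 1.75, n_stars)
true_psf = 0.8 + 0.05 * x - 0.03 * y + 0.02 * x * y
measured_psf = true_psf + rng.normal(scale=0.01, size=n_stars)

# Forward model: PSF width as a low-order polynomial in field position.
# Solve for the coefficients by least squares; the fitted model can then be
# evaluated at any point in the image, not just where the stars are.
design = np.column_stack([np.ones(n_stars), x, y, x * y])
coeffs, *_ = np.linalg.lstsq(design, measured_psf, rcond=None)

model_psf = design @ coeffs
print("rms residual:", np.sqrt(np.mean((measured_psf - model_psf) ** 2)))
```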
22:12: And they still have to coordinate with other telescopes about their lasers, right?
22:15: Oh yeah.
22:15: They have agreements about when nearby observatories can point their lasers, and sometimes Rubin might have to switch to a specific filter like the i band, which is less sensitive to the laser
22:25: light, if one is active nearby while they’re trying to focus.
22:28: So many interacting systems.
22:30: What an incredible journey through the engineering of Rubin.
22:33: Just the sheer ingenuity from the custom steel pier and the capacitor banks, the hexapods, that incredibly flat camera, the data systems.
22:43: It’s truly a machine built to push boundaries.
22:45: It really is.
22:46: And it’s important to remember, this isn’t just, you know, a bigger version of existing telescopes.
22:51: It’s a fundamentally different kind of machine.
22:53: How so?
22:54: By creating this massive all-purpose data set, imaging the entire southern sky over 800 times, cataloging maybe 40 billion objects, it shifts the paradigm.
23:07: Astronomy becomes less about individual scientists applying for time to point a telescope at one specific thing and more about statistical analysis, about mining this unprecedented ocean of data that Rubin provides to everyone.
23:21: So what does this all mean for us, for science?
23:24: Well, it’s a generational investment in fundamental discovery.
23:27: They’ve optimized this whole system, the telescope, the camera, the data pipeline.
23:31: For finding, quote, exactly the stuff we don’t know we’ll find.
23:34: Optimized for the unknown, I like that.
23:36: Yeah, we’re basically generating this incredible resource that will feed generations of astronomers and astrophysicists.
23:42: They’ll explore it, they’ll harvest discoveries from it, they’ll find patterns and objects and phenomena within billions and billions of data points that we can’t even conceive of yet.
23:50: And that really is the ultimate excitement, isn’t it?
23:53: Knowing that this monumental feat of engineering isn’t just answering old questions, but it’s poised to open up entirely new questions about the universe, questions we literally don’t know how to ask today.
24:04: Exactly.
24:05: So, for you, the listener, just think about that.
24:08: Consider the immense, the completely unknown discoveries that are waiting out there just waiting to be found when an entire universe of data becomes accessible like this.
24:16: What might we find?
spectrum.ieee.org
Evan Ackerman
@evanackerman.bsky.social
· Jun 20
Video Friday: Jet-Powered Humanoid Robot Lifts Off
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
RSS 2025 : 21–25 June 2025, LOS ANGELES ETH Robotics Summer School : 21–27 June 2025, GENEVA IAS 2025 : 30 June–4 July 2025, GENOA, ITALY ICRES 2025 : 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics : 8–11 July 2025, SUWON, SOUTH KOREA IFAC Symposium on Robotics : 15–18 July 2025, PARIS RoboCup 2025 : 15–21 July 2025, BAHIA, BRAZIL RO-MAN 2025 : 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025 : 5–7 September 2025, SHENZHEN CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos!
This is the first successful vertical takeoff of a jet-powered flying humanoid robot, developed by Artificial and Mechanical Intelligence (AMI) at Istituto Italiano di Tecnologia (IIT). The robot lifted ~50 cm off the ground while maintaining dynamic stability, thanks to advanced AI-based control systems and aerodynamic modeling.
We will have much more on this in the coming weeks!
[ Nature ] via [ IIT ]
As a first step towards our mission of deploying general purpose robots, we are pushing the frontiers of what end-to-end AI models can achieve in the real world. We’ve been training models and evaluating their capabilities for dexterous sensorimotor policies across different embodiments, environments, and physical interactions. We’re sharing capability demonstrations on tasks stressing different aspects of manipulation: fine motor control, spatial and temporal precision, generalization across robots and settings, and robustness to external disturbances.
[ Generalist AI ]
Thanks, Noah!
Ground Control Robotics is introducing SCUTTLE, our newest elongate multilegged platform for mobility anywhere!
[ Ground Control Robotics ]
Teleoperation has been around for a while, but what hasn’t been is precise, real-time force feedback. That’s where Flexiv steps in to shake things up. Now, whether you’re across the room or across the globe, you can experience seamless, high-fidelity remote manipulation with a sense of touch.
This sort of thing usually takes some human training, for which you’d be best served by robot arms with precise, real-time force feedback . Hmm, I wonder where you’d find those...
[ Flexiv ]
The 1X World Model is a data-driven simulator for humanoid robots, built with a grounded understanding of physics. It allows us to predict—or “hallucinate”—the outcomes of NEO’s actions before they’re taken in the real world. Using the 1X World Model, we can instantly assess the performance of AI models—compressing development time and providing a clear benchmark for continuous improvement.
[ 1X ]
SLAPBOT is an interactive robotic artwork by Hooman Samani and Chandler Cheng, exploring the dynamics of physical interaction, artificial agency, and power. The installation features a robotic arm fitted with a soft, inflatable hand that delivers slaps through pneumatic actuation, transforming a visceral human gesture into a programmed robotic response.
I asked, of course, whether SLAPBOT slaps people, and it does not: “Despite its provocative concept and evocative design, SLAPBOT does not make physical contact with human participants. It simulates the gesture of slapping without delivering an actual strike. The robotic arm’s movements are precisely choreographed to suggest the act, yet it maintains a safe distance.”
[ SLAPBOT ]
Thanks, Hooman!
Inspecting the bowels of ships is something we’d really like robots to be doing for us, please and thank you.
[ Norwegian University of Science and Technology ] via [ GitHub ]
Thanks, Kostas!
H2L Corporation (hereinafter referred to as H2L) has unveiled a new product called “Capsule Interface,” which transmits whole-body movements and strength, enabling new shared experiences with robots and avatars. A product introduction video depicting a synchronization never before experienced by humans was also released.
[ H2L Corp. ] via [ RobotStart ]
How do you keep a robot safe without requiring it to look at you? Radar !
[ Paper ] via [ IEEE Sensors Journal ]
Thanks, Bram!
We propose Aerial Elephant Trunk, an aerial continuum manipulator inspired by the elephant trunk, featuring a small-scale quadrotor and a dexterous, compliant tendon-driven continuum arm for versatile operation in both indoor and outdoor settings.
[ Adaptive Robotics Controls Lab ]
This video demonstrates a heavy weight lifting test using the ARMstrong Dex robot, focusing on a 40 kg bicep curl motion. ARMstrong Dex is a human-sized, dual-arm hydraulic robot currently under development at the Korea Atomic Energy Research Institute (KAERI) for disaster response applications. Designed to perform tasks flexibly like a human while delivering high power output, ARMstrong Dex is capable of handling complex operations in hazardous environments.
[ Korea Atomic Energy Research Institute ]
Micro-robots that can inspect water pipes, diagnose cracks and fix them autonomously – reducing leaks and avoiding expensive excavation work – have been developed by a team of engineers led by the University of Sheffield.
[ University of Sheffield ]
We’re growing in size, scale, and impact! We’re excited to announce the opening of our serial production facility in the San Francisco Bay Area, the very first purpose-built robotaxi assembly facility in the United States. More space means more innovation, production, and opportunities to scale our fleet.
[ Zoox ]
Watch multipick in action as our pickle robot rapidly identifies, picks, and places multiple boxes in a single swing of an arm.
[ Pickle ]
And now, this.
[ Aibo ]
Cargill’s Amsterdam Multiseed facility enlists Spot and Orbit to inspect machinery and perform visual checks, enhanced by all-new AI features, as part of their “Plant of the Future” program.
[ Boston Dynamics ]
This ICRA 2025 plenary talk is from Raffaello D’Andrea, entitled “Models are Dead, Long Live Models!”
[ ICRA 2025 ]
Will data solve robotics and automation? Absolutely! Never! Who knows! Let’s argue about it!
[ ICRA 2025 ]
spectrum.ieee.org
Evan Ackerman
@evanackerman.bsky.social
· Jun 13
Video Friday: AI Model Gives Neo Robot Autonomy
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
2025 Energy Drone & Robotics Summit : 16–18 June 2025, HOUSTON RSS 2025 : 21–25 June 2025, LOS ANGELES ETH Robotics Summer School : 21–27 June 2025, GENEVA IAS 2025 : 30 June–4 July 2025, GENOA, ITALY ICRES 2025 : 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics : 8–11 July 2025, SUWON, SOUTH KOREA IFAC Symposium on Robotics : 15–18 July 2025, PARIS RoboCup 2025 : 15–21 July 2025, BAHIA, BRAZIL RO-MAN 2025 : 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025 : 5–7 September 2025, SHENZHEN CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos!
Introducing Redwood—1X’s breakthrough AI model capable of doing chores around the home. For the first time, NEO Gamma moves, understands, and interacts autonomously in complex human environments. Built to learn from real-world experiences, Redwood empowers NEO to perform end-to-end mobile manipulation tasks like retrieving objects for users, opening doors, and navigating around the home gracefully, on top of hardware designed for compliance, safety, and resilience.
[ 1X Technology ]
Marek Michalowski , who co-created Keepon , has not posted to his YouTube channel in 17 years. Until this week. The new post? It’s about a project from 10 years ago!
[ Project Sundial ]
Helix can now handle a wider variety of packaging approaching human-level dexterity and speed, bringing us closer to fully autonomous package sorting. This rapid progress underscores the scalability of Helix’s learning-based approach to robotics, translating quickly into real-world application.
[ Figure ]
This is certainly an atypical Video Friday selection, but I saw this Broadway musical called “Maybe Happy Ending” a few months ago because the main characters are deprecated humanoid home service robots. It was utterly charming, and it just won the Tony award for best new musical among others.
[ "Maybe Happy Ending " ]
Boston Dynamics brought a bunch of Spots to “America’s Got Talent,” and kudos to them for recovering so gracefully from an on stage failure.
[ Boston Dynamics ]
I think this is the first time I’ve seen end-effector changers used for either feet or heads.
[ CNRS-AIST Joint Robotics Laboratory ]
ChatGPT has gone fully Navrim—complete with existential dread and maximum gloom! Watch as the most pessimistic ChatGPT-powered robot yet moves chess pieces across a physical board, deeply contemplating both chess strategy and the futility of existence. Experience firsthand how seamlessly AI blends with robotics, even if Navrim insists there’s absolutely no point.
Not bad for $219 all in.
[ Vassar Robotics ]
We present a single layer multimodal sensory skin made using only a highly sensitive hydrogel membrane. Using electrical impedance tomography techniques, we access up to 863,040 conductive pathways across the membrane, allowing us to identify at least six distinct types of multimodal stimuli, including human touch, damage, multipoint insulated presses, and local heating. To demonstrate our approach’s versatility, we cast the hydrogel into the shape and size of an adult human hand.
[ Bio-Inspired Robotics Laboratory ] paper published by [ Science Robotics ]
This paper introduces a novel robot designed to exhibit two distinct modes of mobility: rotational aerial flight and terrestrial locomotion. This versatile robot comprises a sturdy external frame, two motors, and a single wing embodying its fuselage. The robot is capable of vertical takeoff and landing in mono-wing flight mode, with the unique ability to fly in both clockwise and counterclockwise directions, setting it apart from traditional mono-wings.
[ AIR Lab paper ] published in [ The International Journal of Robotics Research ]
When TRON 1 goes to work all he does is steal snacks from hoomans. Apparently.
[ LimX Dynamics ]
The 100,000th robot has just rolled off the line at Pudu Robotics’ Super Factory! This key milestone highlights our cutting-edge manufacturing strength and marks a global shipment volume of over 100,000 units delivered worldwide.
[ Pudu Robotics ]
Now that is a big saw.
[ Kuka Robotics ]
NASA Jet Propulsion Laboratory has developed the Exploration Rover for Navigating Extreme Sloped Terrain or ERNEST. This rover could lead to a new class of low-cost planetary rovers for exploration of previously inaccessible locations on Mars and the moon.
[ NASA Jet Propulsion Laboratory paper ]
Brett Adcock, Founder and CEO, Figure AI speaks with Bloomberg Television’s Ed Ludlow about how it is training humanoid robots for logistics, manufacturing, and future roles in the home at Bloomberg Tech in San Francisco.
[ Figure ]
Peggy Johnson, CEO of Agility Robotics, discusses how humanoid robots like Digit are transforming logistics and manufacturing. She speaks with Bloomberg Businessweek’s Brad Stone about the rapid advances in automation and the next era of robots in the workplace at Bloomberg Tech in San Francisco.
[ Agility Robotics ]
This ICRA 2025 Plenary is from Allison Okamura, entitled “Rewired: The Interplay of Robots and Society.”
[ ICRA 2025 ]
spectrum.ieee.org
Evan Ackerman
@evanackerman.bsky.social
· May 31
This Little Mars Rover Stayed Home
As a mere earthling, I remember watching in fascination as Sojourner sent back photos of the Martian surface during the summer of 1997. I was not alone. The servers at NASA’s Jet Propulsion Lab slowed to a crawl when they got more than 47 million hits (a record number!) from people attempting to download those early images of the Red Planet. To be fair, it was the late 1990s, the Internet was still young, and most people were using dial-up modems. By the end of the 83-day mission, Sojourner had sent back 550 photos and performed more than 15 chemical analyses of Martian rocks and soil.
Sojourner , of course, remains on Mars. Pictured here is Marie Curie, its twin. Functionally identical, either one of the rovers could have made the voyage to Mars, but one of them was bound to become the famous face of the mission, while the other was destined to be left behind in obscurity. Did I write this piece because I feel a little bad for Marie Curie ? Maybe. But it also gave me a chance to revisit this pioneering Mars mission, which established that robots could effectively explore the surface of planets and captivate the public imagination.
Sojourner ’s sojourn on Mars
On 4 July 1997, the Mars Pathfinder parachuted through the Martian atmosphere and bounced about 15 times on glorified airbags before finally coming to a rest. The lander, renamed the Carl Sagan Memorial Station , carried precious cargo stowed inside. The next day, after the airbags retracted, the solar-powered Sojourner eased its way down the ramp, the first human-made vehicle to roll around on the surface of another planet. (It wasn’t the first to roam an extraterrestrial body, though. The Soviet Lunokhod rovers conducted two successful missions on the moon in 1970 and 1973. The Soviets had also landed a rover on Mars back in 1971, but communication was lost before the PROP-M ever deployed.)
This giant sandbox at JPL provided Marie Curie with an approximation of Martian terrain. Mike Nelson/AFP/Getty Images
The six-wheeled, 10.6-kilogram, microwave-oven-size Sojourner was equipped with three low-resolution cameras (two on the front for black-and-white images and a color camera on the rear), a laser hazard–avoidance system, an alpha-proton X-ray spectrometer, experiments for testing wheel abrasion and material adherence, and several accelerometers. The robot also demonstrated the value of the six-wheeled “rocker-bogie” suspension system that became NASA’s go-to design for all later Mars rovers. Sojourner never roamed more than about 12 meters from the lander due to the limited range of its radio.
Pathfinder had landed in Ares Vallis , an assumed ancient floodplain chosen because of the wide variety of rocks present. Scientists hoped to confirm the past existence of water on the surface of Mars. Sojourner did discover rounded pebbles that suggested running water, and later missions confirmed it.
A highlight of Sojourner ’s 83-day mission on Mars was its encounter with a rock nicknamed Barnacle Bill [to the rover’s left]. JPL/NASA
As its first act of exploration, Sojourner rolled forward 36 centimeters and encountered a rock, dubbed Barnacle Bill due to its rough surface. The rover spent about 10 hours analyzing the rock, using its spectrometer to determine the elemental composition. Over the next few weeks, while the lander collected atmospheric information and took photos, the rover studied rocks in detail and tested the Martian soil.
Marie Curie ’s sojourn…in a JPL sandbox
Meanwhile back on Earth, engineers at JPL used Marie Curie to mimic Sojourner’s movements in a Mars-like setting. During the original design and testing of the rovers, the team had set up giant sandboxes, each holding thousands of kilograms of playground sand, in the Space Flight Operations Facility at JPL. They exhaustively practiced the remote operation of Sojourner , including an 11-minute delay in communications between Mars and Earth. (The actual delay can vary from 7 to 20 minutes.) Even after Sojourner landed, Marie Curie continued to help them strategize.
Initially, Sojourner was remotely operated from Earth, which was tricky given the lengthy communication delay. Mike Nelson/AFP/Getty Images
During its first few days on Mars, Sojourner was maneuvered by an Earth-based operator wearing 3D goggles and using a funky input device called a Spaceball 2003 . Images pieced together from both the lander and the rover guided the operator. It was like a very, very slow video game—the rover sometimes moved only a few centimeters a day. NASA then turned on Sojourner’s hazard-avoidance system, which allowed the rover some autonomy to explore its world. A human would suggest a path for that day’s exploration, and then the rover had to autonomously avoid any obstacles in its way, such as a big rock, a cliff, or a steep slope.
JPL designed Sojourner to operate for a week. But the little rover that could kept chugging along for 83 Martian days before NASA finally lost contact, on 7 October 1997. The lander had conked out on 27 September. In all, the mission collected 1.2 gigabytes of data (which at the time was a lot ) and sent back 10,000 images of the planet’s surface.
NASA held on to Marie Curie in the hope of sending it on another mission to Mars. For a while, it was slated to be part of the Mars 2001 set of missions, but that didn’t happen. In 2015, JPL transferred the rover to the Smithsonian’s National Air and Space Museum.
When NASA Embraced Faster, Better, Cheaper
The Pathfinder mission was the second one in NASA administrator Daniel S. Goldin’s Discovery Program, which embodied his “faster, better, cheaper” philosophy of making NASA more nimble and efficient. (The first Discovery mission was to the asteroid Eros.) In the financial climate of the early 1990s, the space agency couldn’t risk a billion-dollar loss if a major mission failed. Goldin opted for smaller projects; the Pathfinder mission’s overall budget, including flight and operations, was capped at US $300 million.
In his 2014 book Curiosity: An Inside Look at the Mars Rover Mission and the People Who Made It Happen (Prometheus), science writer Rod Pyle interviews Rob Manning, chief engineer for the Pathfinder mission and subsequent Mars rovers. Manning recalls that one of the best things about the mission was its relatively minimal requirements. The team was responsible for landing on Mars, delivering the rover, and transmitting images: technically challenging, to be sure, but beyond that the team had no constraints.
Sojourner was succeeded by the rovers Spirit, Opportunity, and Curiosity. Shown here are four mission spares, including Marie Curie [foreground]. JPL-Caltech/NASA
The real mission was to prove to Congress and the American public that NASA could do groundbreaking work more efficiently. Behind the scenes, a bit of accounting magic was happening: the “faster, better, cheaper” missions were often quietly underwritten by larger, older projects. For example, the radioisotope heater units that kept Sojourner’s electronics warm enough to operate were leftover spares from the Galileo mission to Jupiter, so they were “free.”
Not only was the Pathfinder mission successful, but it also captured the hearts of Americans and reinvigorated interest in exploring Mars. In the process, it laid the foundation for the future missions that allowed the rovers Spirit, Opportunity, and Curiosity (which, incredibly, is still operating nearly 13 years after it landed) to explore even more of the Red Planet.
How the Rovers Sojourner and Marie Curie Got Their Names
To name its first Mars rovers, NASA launched a student contest in March 1994, with the specific guidance of choosing a “heroine.” Entry essays were judged on their quality and creativity, the appropriateness of the name for a rover, and the student’s knowledge of the woman to be honored as well as the mission’s goals. Students from all over the world entered.
Twelve-year-old Valerie Ambroise of Bridgeport, Conn., won for her essay on Sojourner Truth, while 18-year-old Deepti Rohatgi of Rockville, Md., came in second for hers on Marie Curie. Truth was a Black woman born into slavery at the end of the 18th century. She escaped with her infant daughter and two years later won freedom for her son through legal action. She became a vocal advocate for civil rights, women’s rights, and alcohol temperance. Curie was a Polish-French physicist and chemist famous for her studies of radioactivity, a term she coined. She was the first woman to win a Nobel Prize, as well as the first person to win a second Nobel.
NASA subsequently recognized several other women by naming missions and facilities after them. One of the most recent was Nancy Grace Roman, the space agency’s first chief of astronomy. In May 2020, NASA announced it would name the Wide Field Infrared Survey Telescope after Roman; the space telescope is set to launch as early as October 2026, although the Trump administration has repeatedly said it wants to cancel the project.
These days, NASA tries to avoid naming its major projects after people. It quietly changed its naming policy in December 2022 after allegations came to light that James Webb, for whom the James Webb Space Telescope is named, had fired LGBTQ+ employees at NASA and, before that, at the State Department. A NASA investigation couldn’t substantiate the allegations, and so the telescope retained Webb’s name. But the bar is now much higher for NASA projects to memorialize anyone, deserving or otherwise. (The agency did allow the hopping lunar robot IM-2 Micro Nova Hopper, built by Intuitive Machines, to be named for computer-software pioneer Grace Hopper.)
And so Marie Curie and Sojourner will remain part of a rarefied clique. Sojourner, inducted into the Robot Hall of Fame in 2003, will always be the celebrity of the pair, while Marie Curie stays on the sidelines. But think about it this way: Marie Curie is now on exhibit at one of the most popular museums in the world, where millions of visitors can see the rover up close. That’s not too shabby a legacy either.
Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.
An abridged version of this article appears in the June 2025 print issue.
References
Curator Matthew Shindell of the National Air and Space Museum first suggested I feature Marie Curie. I found additional information from the museum’s collections website, an article by David Kindy in Smithsonian magazine, and the book After Sputnik: 50 Years of the Space Age (Smithsonian Books/HarperCollins, 2007) by Smithsonian curator Martin Collins.
NASA has numerous resources documenting the Mars Pathfinder mission, such as the mission website, fact sheet, and many lovely photos (including some of Barnacle Bill and a composite of Marie Curie during a prelaunch test).
Curiosity: An Inside Look at the Mars Rover Mission and the People Who Made It Happen (Prometheus, 2014) by Rod Pyle and Roving Mars: Spirit, Opportunity, and the Exploration of the Red Planet (Hyperion, 2005) by planetary scientist Steve Squyres are both about later Mars missions and their rovers, but they include foundational information about Sojourner .