Evan Ackerman
@evanackerman.bsky.social
1.3K followers 34 following 96 posts
Senior editor at IEEE Spectrum. I hug robots. spectrum.ieee.org
evanackerman.bsky.social
Video Friday: Non-Humanoid Hands for Humanoid Robots https://spectrum.ieee.org/video-friday-robotic-hands-2674168909
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

There are two things that I really appreciate about this video on grippers from Boston Dynamics. First, building a gripper while keeping in mind that the robot will inevitably fall onto it, because I’m seeing lots of very delicate-looking five-fingered hands on humanoids and I’m very skeptical of their ruggedness. And second, understanding that not only is a five-fingered hand very likely unnecessary for the vast majority of tasks, but also that robot hands don’t have to be constrained by a human hand’s range of motion. [ Boston Dynamics ]

Yes, okay, it’s a fancy-looking robot, but I’m still stuck on what useful, practical things it can reliably, cost-effectively, and safely DO. [ Figure ]

Life on Earth has evolved in constant relation to gravity, yet we rarely consider how deeply it shapes living systems, until we imagine a place without it. In MycoGravity, pink oyster mushrooms grow inside a custom-built bioreactor mounted on a KUKA robotic arm. Inspired by NASA’s random positioning machines, the robot’s programmed movement simulates altered gravity. Over time, sculptural mushrooms emerge, shaped by their environment without a stable gravitational direction. [ MycoGravity ]

A new technological advancement gives robotic systems a natural sense of touch without extra skins or sensors. With advanced force sensing and deep learning, this robot can feel where you touch, recognize symbols, and even use virtual buttons—paving the way for more natural and flexible human-robot interaction. [ Science Robotics ] Thanks, Maged!

The creator of Mini Pupper introduces Hey Santa, which can be yours for under $60. [ Kickstarter campaign ]

I think humanoid robotics companies are starting to realize that they’re going to need to differentiate themselves somehow. [ DEEP Robotics ]

Drone swarm performances—synchronized, expressive aerial displays set to music—have emerged as a captivating application of modern robotics. Yet designing smooth, safe choreographies remains a complex task requiring expert knowledge. We present SwarmGPT, a language-based choreographer that leverages the reasoning power of large language models (LLMs) to streamline drone performance design. [ SwarmGPT ]

Dr. Mark Draelos, assistant professor of robotics and ophthalmology, received the National Institutes of Health (NIH) Director’s New Innovator Award for a project which seeks to improve how delicate microsurgeries are conducted by scaling up tissue to a size where surgeons could “walk across the retina” in virtual reality and operate on tissue as if “raking leaves.” [ University of Michigan ]

The intricate mechanisms of the most sophisticated laboratory on Mars are revealed in Episode 4 of the ExoMars Rosalind Franklin series, called “Sample processing.” [ European Space Agency ]

There’s currently a marketplace for used industrial robots, and it makes me wonder what’s next. Used humanoids, anyone? [ Kuka ]

On October 2, 2025, the 10th “Can We Build Baymax?” Workshop Part 10: What Can We Build Today? & BYOB (Bring Your Own Baymax) was held in Seoul, Korea. To celebrate the 10th anniversary, Baymax delivered a special message from his character designer, Jin Kim. [ Baymax ]

I am only sharing this to declare that iRobot has gone off the deep end with their product names: Meet the “Roomba® Max 705 Combo Robot + AutoWash™ Dock.” [ iRobot ]

Daniel Piedrahita, Navigation Team Lead, presents on his team’s recent work rebuilding Digit’s navigation stack, including a significant upgrade to footstep path planning. [ Agility Robotics ]

A bunch of videos from ICRA@40 have just been posted, and here are a few of my favorites. [ ICRA@40 ]
evanackerman.bsky.social
Video Friday: Drone Easily Lands on Speeding Vehicle https://spectrum.ieee.org/video-friday-speedy-drone-landing
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

We demonstrate a new landing system that lets drones safely land on moving vehicles at speeds up to 110 km/h. By combining lightweight shock absorbers with reverse thrust, our approach drastically expands the landing envelope, making it far more robust to wind, timing, and vehicle motion. This breakthrough opens the door to reliable high-speed drone landings in real-world conditions. [ Createk Design Lab ] Thanks, Alexis!

This video presents an academic parody inspired by KAIST’s humanoid robot moonwalk. While KAIST demonstrated the iconic move with robot legs, we humorously reproduced it using the Tesollo DG-5F robot hand. A playful experiment to show that not only humanoid robots but also robotic fingers can “dance.” [ Hanyang University ]

20 years ago, Universal Robots built the first collaborative robot. You turned it into something bigger. Our cobot was never just technology. In your hands, it became something more: a teammate, a problem-solver, a spark for change. From factories to labs, from classrooms to warehouses. That’s the story of the past 20 years. That’s what we celebrate today. [ Universal Robots ]

The assistive robot Maya, newly developed at DLR, is designed to enable people with severe physical disabilities to lead more independent lives. The new robotic arm is built for seamless wheelchair integration, with optimized kinematics for stowing, ground-level access, and compatibility with standing functions. [ DLR ]

Contoro and HARCO Lab have launched an open-source initiative, ROS-MCP-Server, which connects AI models (e.g., Claude, GPT, Gemini) with robots using ROS and MCP. This software enables AI to communicate with multiple ROS nodes in the language of robots. We believe it will allow robots to perform tasks previously impossible due to limited intelligence, help robotics engineers program robots more efficiently, and enable non-experts to interact with robots without deep robotics knowledge. (A rough sketch of the general idea appears at the end of this post.) [ GitHub ] Thanks, Mok!

Here’s a quick look at the Conference on Robot Learning (CoRL) exhibit hall, thanks to PNDbotics. [ PNDbotics ]

Old and busted: sim to real. New hotness: real to sim! [ Paper ]

Any humanoid video with tennis balls should be obligated to show said humanoid failing to walk over them. [ LimX ] Thanks, Jinyan!

The correct answer to the question ‘can you beat a robot arm at Tic-Tac-Toe?’ should be no, no you cannot. And you can’t beat a human, either, if they know what they’re doing. [ AgileX ]

It was an honor to host the team from Microsoft AI as part of their larger educational collaboration with The University of Texas at Austin. During their time here, they shared this wonderful video of our lab facilities. Moody lighting is second only to random primary-colored lighting when it comes to making a lab look sciency. [ The University of Texas at Austin HCRL ]

Robots aren’t just sci-fi anymore. They’re evolving fast. AI is teaching them how to adapt, learn, and even respond to open-ended questions with advanced intelligence. Aaron Saunders, CTO of Boston Dynamics, explains how this leap is transforming everything, from simple controls to full-motion capabilities. While there are some challenges related to safety and reliability, AI is significantly helping robots become valuable partners at home and on the job. [ IBM ]
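On the ROS-MCP-Server item above: the article includes no code, so the snippet below is only a minimal, hypothetical sketch of the general idea of forwarding a language model’s output onto a ROS 2 topic with rclpy. The node name, topic, and command string are all invented; the real project’s interfaces may look quite different.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class LlmToRosBridge(Node):
    """Hypothetical bridge: forwards text produced by an LLM (e.g., via MCP) to a ROS 2 topic."""
    def __init__(self):
        super().__init__('llm_bridge')
        # Downstream nodes would subscribe to this topic and translate text into robot actions.
        self.pub = self.create_publisher(String, '/llm_command', 10)

    def forward(self, text: str):
        msg = String()
        msg.data = text
        self.pub.publish(msg)
        self.get_logger().info(f'forwarded: {text}')

def main():
    rclpy.init()
    bridge = LlmToRosBridge()
    bridge.forward('pick up the red cup')  # stand-in for a model-generated instruction
    rclpy.spin_once(bridge, timeout_sec=0.1)
    bridge.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```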
evanackerman.bsky.social
Why the World Needs a Flying Robot Baby https://spectrum.ieee.org/ironcub-jet-powered-flying-robot
One of the robotics projects that I’ve been most excited about for years now is iRonCub, from Daniele Pucci’s Artificial and Mechanical Intelligence Lab at IIT in Genoa, Italy. Since 2017, Pucci has been developing a jet propulsion system that will enable an iCub robot (originally designed to be the approximate shape and size of a five-year-old child) to fly like Iron Man. Over the summer, after nearly 10 years of development, iRonCub3 achieved liftoff and stable flight for the first time, with its four jet engines lifting it 50 centimeters off the ground for several seconds.

The long-term vision is for iRonCub (or a robot like it) to operate as a disaster response platform, Pucci tells us. In an emergency situation like a flood or a fire, iRonCub could quickly get to a location without worrying about obstacles, and then on landing, start walking for energy efficiency while using its arms and hands to move debris and open doors. “We believe in contributing to something unique in the future,” says Pucci. “We have to explore new things, and this is wild territory at the scientific level.”

Obviously, this concept for iRonCub and the practical experimentation attached to it is really cool. But coolness in and of itself is usually not enough of a reason to build a robot, especially a robot that’s a (presumably rather expensive) multi-year project involving a bunch of robotics students, so let’s get into a little more detail about why a flying robot baby is actually something that the world needs.

Getting a humanoid robot to do this sort of thing is quite a challenge. Together, the jet turbines mounted to iRonCub’s back and arms can generate over 1,000 N of thrust, but because it takes time for the engines to spool up or down, control has to come from the robot itself as it moves its arm-engines to maintain stability. “What is not visible from the video,” Pucci tells us, “is that the exhaust gas from the turbines is at 800 degrees Celsius and almost supersonic speed. We have to understand how to generate trajectories in order to avoid the fact that the cones of emission gasses were impacting the robot.”

Even if the exhaust doesn’t end up melting the robot, there are still aerodynamic forces involved that have until this point really not been a consideration for humanoid robots at all—in June, Pucci’s group published a paper in Nature Engineering Communications, offering a “comprehensive approach to model and control aerodynamic forces [for humanoid robots] using classical and learning techniques.”

Whether or not you’re on board with Pucci’s future vision for iRonCub as a disaster response platform, derivatives of current research can be immediately applied beyond flying humanoid robots. The algorithms for thrust estimation can be used with other flying platforms that rely on directed thrust, like eVTOL aircraft. Aerodynamic compensation is relevant for humanoid robots even if they’re not airborne, if we expect them to be able to function when it’s windy outside. More surprising, Pucci describes a recent collaboration with an industrial company developing a new pneumatic gripper. “At a certain point, we had to do force estimation for controlling the gripper, and we realized that the dynamics looked really similar to those of the jet turbines, and so we were able to use the same tools for gripper control. That was an ‘ah-ha’ moment for us: first you do something crazy, but then you build the tools and methods, and then you can actually use those tools in an industrial scenario. That’s how to drive innovation.”

What’s Next for iRonCub: Attracting Talent and Future Enhancements

There’s one more important reason to be doing this, he says: “It’s really cool.” In practice, a really cool flagship project like iRonCub not only attracts talent to Pucci’s lab, but also keeps students and researchers passionate and engaged. I saw this firsthand when I visited IIT last year, where I got a similar vibe to watching the DARPA Robotics Challenge and DARPA SubT—when people know they’re working on something really cool, there’s this tangible, pervasive, and immersive buzzing excitement that comes through. It’s projects like iRonCub that can get students to really love robotics.

In the near future, a new jetpack with an added degree of freedom will make yaw control of iRonCub easier, and Pucci would also like to add wings for more efficient long-distance flight. But the logistics of testing the robot are getting more complicated—there’s only so far that the team can go with their current test stand (which is on the roof of their building), and future progress will likely require coordinating with the Genoa airport. It’s not going to be easy, but as Pucci makes clear, “this is not a joke. It’s something that we believe in. And that feeling of doing something exceptional, or possibly historical, something that’s going to be remembered—that’s something that’s kept us motivated. And we’re just getting started.”
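A quick back-of-envelope check on that 1,000 N thrust figure. The article doesn’t state iRonCub’s total mass, so treat this strictly as an upper bound on what the turbines can keep aloft:

```latex
T \geq m g
\quad\Longrightarrow\quad
m_{\max} \approx \frac{T}{g} \approx \frac{1000\ \text{N}}{9.81\ \text{m/s}^2} \approx 102\ \text{kg}
```

Whatever the robot, fuel, and payload weigh below that ceiling is the margin left over for control authority and climb.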
evanackerman.bsky.social
Video Friday: Gemini Robotics Improves Motor Skills https://spectrum.ieee.org/video-friday-google-gemini-robotics
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

Gemini Robotics 1.5 is our most capable vision-language-action (VLA) model that turns visual information and instructions into motor commands for a robot to perform a task. This model thinks before taking action and shows its process, helping robots assess and complete complex tasks more transparently. It also learns across embodiments, accelerating skill learning. [ Google DeepMind ]

A simple “force pull” gesture brings Carter straight into her hand. This is a fantastic example of how an intuitive interaction can transform complex technology into an extension of our intent. [ Robust.ai ]

I can’t help it, I feel bad for this poor little robot. [ Urban Robotics Laboratory, KAIST ]

Hey look, no legs! [ Kinisi Robotics ]

Researchers at the University of Michigan and Shanghai Jiao Tong University have developed a soft robot that can crawl along a flat path and climb up vertical surfaces using its unique origami structure. The robot can move with an accuracy typically seen only in rigid robots. [ University of Michigan Robotics ]

Unitree G1 has learned the “Anti-Gravity” mode: stability is greatly improved under any action sequence, and even if it falls, it can quickly get back up. [ Unitree ]

Kepler Robotics has commenced mass production of the K2 Bumblebee, the world’s first commercially available humanoid robot powered by Tesla’s hybrid architecture. [ Kepler Robotics ]

Reinforcement learning (RL)-based legged locomotion controllers often require meticulous reward tuning to track velocities or goal positions while preserving smooth motion on various terrains. Motion imitation methods via RL using demonstration data reduce reward engineering but fail to generalize to novel environments. We address this by proposing a hierarchical RL framework in which a low-level policy is first pre-trained to imitate animal motions on flat ground, thereby establishing motion priors. Real-world experiments with an ANYmal-D quadruped robot confirm our policy’s capability to generalize animal-like locomotion skills to complex terrains, demonstrating smooth and efficient locomotion and local navigation performance amidst challenging terrains with obstacles. (A toy sketch of this two-level structure appears at the end of this post.) [ ETHZ RSL ]

I think we have entered the ‘differentiation-through-novelty’ phase of robot vacuums. [ Roborock ]

In this work, we present Kinethreads: a new full-body haptic exosuit design built around string-based motor-pulley mechanisms, which keeps our suit lightweight. [ ACM Symposium on User Interface Software and Technology ]

In this episode of the IBM AI in Action podcast, Aaron Saunders, CTO of Boston Dynamics, delves into the transformative potential of AI-powered robotics, highlighting how robots are becoming safer, more cost-effective, and widely accessible through Robotics as a Service (RaaS). [ IBM ]

This CMU RI Seminar is by Michael T. Tolley from UCSD, on “Biologically Inspired Soft Robotics.” Robotics has the potential to address many of today’s pressing problems in fields ranging from healthcare to manufacturing to disaster relief. However, the traditional approaches used on the factory floor do not perform well in unstructured environments. The key to solving many of these challenges is to explore new, non-traditional designs. Fortunately, nature surrounds us with examples of novel ways to navigate and interact with the real world. Dr. Tolley’s Bioinspired Robotics and Design Lab seeks to borrow the key principles of operation from biological systems and apply them to robotic design. [ Carnegie Mellon University Robotics Institute ]
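As promised above, a toy sketch of the two-level structure described in the ETHZ RSL abstract: a frozen low-level policy standing in for the motion-imitation prior, driven by a high-level policy that would be trained with RL on rough terrain. All names, dimensions, and the linear placeholder policies are mine, not the paper’s architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class LowLevelPolicy:
    """Stand-in for a motion-imitation policy pre-trained on flat ground, then frozen."""
    def __init__(self, proprio_dim, latent_dim, act_dim):
        self.W = rng.standard_normal((act_dim, proprio_dim + latent_dim)) * 0.1

    def act(self, proprio, latent):
        # Maps proprioception plus a latent skill command to joint targets.
        return np.tanh(self.W @ np.concatenate([proprio, latent]))

class HighLevelPolicy:
    """Stand-in for the policy that would be trained with RL to pick latent commands on terrain."""
    def __init__(self, obs_dim, latent_dim):
        self.W = rng.standard_normal((latent_dim, obs_dim)) * 0.1

    def act(self, obs):
        return np.tanh(self.W @ obs)

# One control step: full observation in, joint targets out.
low, high = LowLevelPolicy(30, 8, 12), HighLevelPolicy(50, 8)
obs = rng.standard_normal(50)      # terrain features + body state
proprio = obs[:30]                 # the slice the low-level policy sees
joint_targets = low.act(proprio, high.act(obs))
print(joint_targets.shape)         # (12,), e.g., one target per actuated joint
```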
evanackerman.bsky.social
Exploit Allows for Takeover of Fleets of Unitree Robots https://spectrum.ieee.org/unitree-robot-exploit
A critical vulnerability in the Bluetooth Low Energy (BLE) Wi-Fi configuration interface used by several different Unitree robots can result in a root-level takeover by an attacker, security researchers disclosed on 20 September. The exploit impacts Unitree’s Go2 and B2 quadrupeds and G1 and H1 humanoids. Because the vulnerability is wireless, and the resulting access to the affected platform is complete, the vulnerability becomes wormable, say the researchers, meaning “an infected robot can simply scan for other Unitree robots in BLE range and automatically compromise them, creating a robot botnet that spreads without user intervention.”

Initially discovered by security researchers Andreas Makris and Kevin Finisterre, UniPwn takes advantage of several security lapses that are still present in the firmware of Unitree robots as of 20 September 2025. As far as IEEE Spectrum is aware, this is the first major public exploit of a commercial humanoid platform.

Unitree Robots’ BLE Security Flaw Exposed

Like many robots, Unitree’s robots use an initial BLE connection to make it easier for a user to set up a Wi-Fi network connection. The BLE packets that the robot accepts are encrypted, but those encryption keys are hardcoded and were published on X (formerly Twitter) by Makris in July. Although the robot does validate the contents of the BLE packets to make sure that the user is authenticated, the researchers say that all it takes to become an authenticated user is to encrypt the string ‘unitree’ with the hardcoded keys and the robot will let someone in. From there, an attacker can inject arbitrary code masquerading as the Wi-Fi SSID and password, and when the robot attempts to connect to Wi-Fi, it will execute that code without any validation and with root privileges.

“A simple attack might be just to reboot the robot, which we published as a proof-of-concept,” explains Makris. “But an attacker could do much more sophisticated things: It would be possible to have a trojan implanted into your robot’s startup routine to exfiltrate data while disabling the ability to install new firmware without the user knowing. And as the vulnerability uses BLE, the robots can easily infect each other, and from there the attacker might have access to an army of robots.”

Makris and Finisterre first contacted Unitree in May in an attempt to responsibly disclose this vulnerability. After some back and forth with little progress, Unitree stopped responding to the researchers in July, and the decision was made to make the vulnerability public. “We have had some bad experiences communicating with them,” Makris tells us, citing an earlier backdoor vulnerability he discovered with the Unitree Go1. “So we need to ask ourselves—are they introducing vulnerabilities like this on purpose, or is it sloppy development? Both answers are equally bad.” Unitree has not responded to a request for comment from IEEE Spectrum as of press time.

“Unitree, as other manufacturers do, has simply ignored prior security disclosures and repeated outreach attempts,” says Víctor Mayoral-Vilches, the founder of robotics cybersecurity company Alias Robotics. “This is not the right way to cooperate with security researchers.” Mayoral-Vilches was not involved in publishing the UniPwn exploit, but he has found other security issues with Unitree robots, including undisclosed streaming of telemetry data to servers in China, which could potentially include audio, visual, and spatial data.
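Unitree’s firmware is not public, so the snippet below is not the actual exploit; it is only a generic Python illustration of the bug class the researchers describe: attacker-controlled Wi-Fi credentials pasted into a privileged shell command, versus credentials treated strictly as data.

```python
import subprocess

def connect_wifi_unsafe(ssid: str, password: str):
    # Vulnerable pattern: attacker-controlled text lands inside a root shell command.
    # An SSID like  "; reboot #  terminates the intended command and runs the attacker's payload.
    subprocess.run(f'nmcli dev wifi connect "{ssid}" password "{password}"', shell=True)

def connect_wifi_safer(ssid: str, password: str):
    # Safer pattern: no shell, arguments passed as a list, basic sanity checks first.
    if len(ssid.encode()) > 32 or any(c in ssid for c in '"\'`$;&|\n'):
        raise ValueError('rejecting suspicious SSID')
    subprocess.run(['nmcli', 'dev', 'wifi', 'connect', ssid, 'password', password], check=False)
```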
Mayoral-Vilches explains that security researchers are focusing on Unitree primarily because the robots are available and affordable. This makes them not just more accessible for the researchers, but also more relevant, since Unitree’s robots are already being deployed by users around the world who are likely not aware of the security risks. For example, Makris is concerned that the Nottinghamshire Police in the UK have begun testing a Unitree Go2, which can be exploited by UniPwn. “We tried contacting them and would have disclosed the vulnerability upfront to them before going public, but they ignored us. What would happen if an attacker implanted themselves into one of these police dogs?”

How to Secure Unitree Robots

In the short term, Mayoral-Vilches suggests that people using Unitree robots can protect themselves by only connecting the robots to isolated Wi-Fi networks and disabling their Bluetooth connectivity. “You need to hack the robot to secure it for real,” he says. “This is not uncommon and why security research in robotics is so important.”

Both Mayoral-Vilches and Makris believe that fundamentally it’s up to Unitree to make their robots secure in the long term, and that the company needs to be much more responsive to users and security researchers. But Makris says: “There will never be a 100 percent secure system.” Mayoral-Vilches agrees. “Robots are very complex systems, with wide attack surfaces to protect, and a state-of-the-art humanoid exemplifies that complexity.”

Unitree, of course, is not the only company offering complex state-of-the-art quadrupeds and humanoids, and it seems likely (if not inevitable) that similar exploits will be discovered in other platforms. The potential consequences here can’t be overstated—the idea that robots can be taken over and used for nefarious purposes is already a science fiction trope, but the impact of a high-profile robot hack on the reputation of the commercial robotics industry is unclear. Robotics companies are barely talking about security in public, despite how damaging even the perception of an unsecured robot might be. A robot that is not under control has the potential to be a real physical danger.

At the IEEE Humanoids Conference in Seoul from 30 September to 2 October, Mayoral-Vilches has organized a workshop on Cybersecurity for Humanoids, where he will present a brief (co-authored with Makris and Finisterre) titled Humanoid Robots as Attack Vectors. Despite the title, their intent is not to overhype the problem but instead to encourage roboticists (and robotics companies) to take security seriously, and not treat it as an afterthought. As Mayoral-Vilches points out, “robots are only safe if secure.”
evanackerman.bsky.social
Video Friday: A Billion Dollars for Humanoid Robots https://spectrum.ieee.org/video-friday-billion-humanoid-robots
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ACTUATE 2025: 23–24 September 2025, SAN FRANCISCO
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

A billion dollars is a lot of money. And this is actual money, not just a valuation. But Figure already had a lot of money. So what are they going to be able to do now that they weren’t already doing, I wonder? [ Figure ]

Robots often succeed in simulation but fail in reality. With PACE, we introduce a systematic approach to sim-to-real transfer. [ Paper ]

Anthropomorphic robotic hands are essential for robots to learn from humans and operate in human environments. While most designs loosely mimic human hand kinematics and structure, achieving the dexterity and emergent behaviors present in human hands requires anthropomorphic design to go further: matching passive compliant properties while also strictly matching kinematics. We present ADAPT-Teleop, a system combining a robotic hand with human-matched kinematics, skin, and passive dynamics, along with a robotic arm for intuitive teleoperation. [ Paper ]

This robot can walk without any electronic components in its body, because the power is transmitted through wires from motors concentrated outside of its body. Also, this robot’s front and rear legs are optimally coupled, and can walk with just 4 wires. [ JSK Lab ] Thanks, Takahiro!

Five teams of Los Alamos engineers competed to build the ultimate hole-digging robot dog in a recent engineering sprint. In just days, teams programmed their robot dogs to dig, designing custom “paws” from materials like sheet metal, foam, and 3D-printed polymers. The paws mimicked animal digging behaviors — from paddles and snowshoes to dew claws — and helped the robots avoid sinking into a 30-gallon soil bucket. Teams raced to see whose dog could dig the biggest hole and dig under a fence the fastest. [ Los Alamos ]

This work presents UniPilot, a compact hardware-software autonomy payload that can be integrated across diverse robot embodiments to enable resilient autonomous operation in GPS-denied environments. The system integrates a multi-modal sensing suite including LiDAR, radar, vision, and inertial sensing for robust operation in conditions where uni-modal approaches may fail. A large number of experiments are conducted across diverse environments and on a variety of robot platforms to validate the mapping, planning, and safe navigation capabilities enabled by the payload. [ NTNU ] Thanks, Kostas!

KAIST Humanoid v0.5, developed at the DRCD Lab, KAIST, with a control policy trained via reinforcement learning. [ KAIST ]

I just like the determined little hops. [ AgileX ]

I’m always a little bit suspicious of robotics labs that are exceptionally clean and organized. [ PNDbotics ]

Er, has PAL Robotics ever actually seen a kangaroo...? [ PAL ]

See Spots push. Push, Spots, push. [ Tufts ]

Training humanoid robots to hike could accelerate development of embodied AI for tasks like autonomous search and rescue, ecological monitoring in unexplored places, and more, say University of Michigan researchers who developed an AI model that equips humanoids to hit the trails. [ Michigan ]

I am dangerously close to no longer being impressed by breakdancing humanoid robots. [ Fourier ]

This, though, would impress me. [ Inria ]

In this interview, Clone’s co-founder and CEO Dhanush Radhakrishnan discusses the company’s path to creating the synthetic humans straight out of science fiction. (If YouTube brilliantly attempts to auto-dub this for you, switch the audio track to original (which YouTube thinks is Polish) and the video will still be in English.) [ Clone ]

This documentary takes you behind the scenes of the HMND 01 Alpha release: the breakthroughs, the failures, and the late nights of building the UK’s first industrial humanoid robot. [ Humanoid ]

What is the role of ethical considerations in the development and deployment of robotic and automation technologies, and what are the responsibilities of researchers to ensure that these technologies advance in ways that are transparent, fair, and aligned with the broader well-being of society? [ ICRA@40 ]

This UPenn GRASP SFI lecture is from Tairan He at NVIDIA, on “Scalable Sim-to-Real Learning for General-Purpose Humanoid Skills.” Humanoids represent the most versatile robotic platform, capable of walking, manipulating, and collaborating with people in human-centered environments. Yet, despite recent advances, building humanoids that can operate reliably in the real world remains a fundamental challenge. Progress has been hindered by difficulties in whole-body control, robust perceptive reasoning, and bridging the sim-to-real gap. In this talk, I will discuss how scalable simulation and learning can systematically overcome these barriers. [ UPenn ]
evanackerman.bsky.social
Video Friday: A Soft Robot Companion
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ACTUATE 2025: 23–24 September 2025, SAN FRANCISCO
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

Fourier’s first care-bot, GR-3. This full-size “care-bot” is designed for interactive companionship. Its soft-touch outer shell and multimodal emotional interaction system bring the concept of “warm tech companionship” to life. I like that it’s soft to the touch, although I’m not sure that encouraging touch is safe. Reminds me a little bit of Valkyrie, where NASA put a lot of thought into the soft aspects of the robot. [ Fourier ]

TAKE MY MONEY. This 112-gram micro air vehicle (MAV) features foldable propeller arms that can lock into a compact rectangular profile comparable to the size of a smartphone. The vehicle can be launched by simply throwing it in the air, at which point the arms unfold and the vehicle autonomously stabilizes to a hovering state. Multiple flight tests demonstrated the capability of the feedback controller to stabilize the MAV from different initial conditions, including tumbling rates of up to 2,500 deg/s. (A toy stabilization sketch appears at the end of this post.) [ AVFL ]

The U.S. Naval Research Laboratory (NRL), in collaboration with NASA, is advancing space robotics by deploying reinforcement learning algorithms onto Astrobee, a free-flying robotic assistant on board the International Space Station. This video highlights how NRL researchers are leveraging artificial intelligence to enable robots to learn, adapt, and perform tasks autonomously. By integrating reinforcement learning, Astrobee can improve maneuverability and optimize energy use. [ NRL ]

Every day I’m scuttlin’. [ Ground Control Robotics ]

Trust is built. Every part of our robot Proxie—from wheels to eyes—is designed with trust in mind. Cobot CEO Brad Porter explains the intent behind its design. [ Cobot ]

Phase 1: Build lots of small quadruped robots. Phase 2: ? Phase 3: Profit! [ DEEP Robotics ]

LAPP USA partnered with Corvus Robotics to solve a long-standing supply chain challenge: labor-intensive, error-prone inventory counting. [ Corvus ]

I’m pretty sure that 95 percent of all science consists of moving small amounts of liquid from one container to another. [ Flexiv ]

Raffaello D’Andrea, interviewed at ICRA 2025. [ Verity ]

Tessa Lau, interviewed at ICRA 2025. [ Dusty Robotics ]

Ever wanted to look inside the mind behind a cutting-edge humanoid robot? In this special episode, we have Dr. Aaron, our Product Manager at LimX Dynamics, for an exclusive deep dive into the LimX Oli. [ LimX Dynamics ]
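For the throw-launched MAV above: the paper’s controller is not reproduced here. This is only a toy rate-damping loop showing the first stage of recovering from a 2,500 deg/s tumble; the gains, inertia, and torque limit are invented numbers.

```python
import numpy as np

def rate_damping_torque(omega_rad_s, kd=0.02, tau_max=0.05):
    """Command body torque opposing the measured angular rate, saturated at tau_max (N*m)."""
    return np.clip(-kd * omega_rad_s, -tau_max, tau_max)

# Toy single-body simulation of detumbling after a hand launch.
inertia = 1e-4                                  # kg*m^2 per axis, assumed for a ~112 g vehicle
omega = np.radians([2500.0, 300.0, -150.0])     # body rates in rad/s at release
dt = 0.002
for step in range(1500):
    tau = rate_damping_torque(omega)
    omega += (tau / inertia) * dt               # Euler integration of omega_dot = tau / I
print(np.degrees(omega))                        # rates near zero; an attitude loop would take over here
```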
evanackerman.bsky.social
Reality Is Ruining the Humanoid Robot Hype https://spectrum.ieee.org/humanoid-robot-scaling
Over the next several years, humanoid robots will change the nature of work. Or at least, that’s what humanoid robotics companies have been consistently promising, enabling them to raise hundreds of millions of dollars at valuations that run into the billions. Delivering on these promises will require a lot of robots. Agility Robotics expects to ship “hundreds” of its Digit robots in 2025 and has a factory in Oregon capable of building over 10,000 robots per year. Tesla is planning to produce 5,000 of its Optimus robots in 2025, and at least 50,000 in 2026. Figure believes “there is a path to 100,000 robots” by 2029. And these are just three of the largest companies in an increasingly crowded space.

Amplifying this message are many financial analysts: Bank of America Global Research, for example, predicts that global humanoid robot shipments will reach 18,000 units in 2025. And Morgan Stanley Research estimates that by 2050 there could be over 1 billion humanoid robots, part of a US $5 trillion market.

But as of now, the market for humanoid robots is almost entirely hypothetical. Even the most successful companies in this space have deployed only a small handful of robots in carefully controlled pilot projects. And future projections seem to be based on an extraordinarily broad interpretation of jobs that a capable, efficient, and safe humanoid robot—which does not currently exist—might conceivably be able to do. Can the current reality connect with the promised scale?

What Will It Take to Scale Humanoid Robots?

Physically building tens of thousands, or even hundreds of thousands, of humanoid robots is certainly possible in the near term. In 2023, on the order of 500,000 industrial robots were installed worldwide. Under the basic assumption that a humanoid robot is approximately equivalent to four industrial arms in terms of components, existing supply chains should be able to support even the most optimistic near-term projections for humanoid manufacturing.

But simply building the robots is arguably the easiest part of scaling humanoids, says Melonee Wise, who served as chief product officer at Agility Robotics until this month. “The bigger problem is demand—I don’t think anyone has found an application for humanoids that would require several thousand robots per facility.” Large deployments, Wise explains, are the most realistic way for a robotics company to scale its business, since onboarding any new client can take weeks or months.

An alternative approach to deploying several thousand robots to do a single job is to deploy several hundred robots that can each do 10 jobs, which seems to be what most of the humanoid industry is betting on in the medium to long term. While there’s a belief across much of the humanoid robotics industry that rapid progress in AI must somehow translate into rapid progress toward multipurpose robots, it’s not clear how, when, or if that will happen. “I think what a lot of people are hoping for is they’re going to AI their way out of this,” says Wise. “But the reality of the situation is that currently AI is not robust enough to meet the requirements of the market.”
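A back-of-envelope check on the supply-chain argument above, using the article’s own numbers and its four-arms-per-humanoid assumption:

```latex
\frac{500{,}000\ \text{industrial arms installed in 2023}}{4\ \text{arm-equivalents per humanoid}}
\;\approx\; 125{,}000\ \text{humanoid-equivalents per year}
```

Component capacity on that order would indeed cover even the most aggressive near-term shipment projections quoted earlier.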
Bringing Humanoid Robots to Market

Market requirements for humanoid robots include a slew of extremely dull, extremely critical things like battery life, reliability, and safety. Of these, battery life is the most straightforward—for a robot to usefully do a job, it can’t spend most of its time charging. The next version of Agility’s Digit robot, which can handle payloads of up to 16 kilograms, includes a bulky “backpack” containing a battery with a charging ratio of 10 to 1: The robot can run for 90 minutes, and fully recharge in 9 minutes. Slimmer humanoid robots from other companies must necessarily be making compromises to maintain their svelte form factors.

In operation, Digit will probably spend a few minutes charging after running for 30 minutes. That’s because 60 minutes of Digit’s runtime is essentially a reserve in case something happens in its workspace that requires it to temporarily pause, a not-infrequent occurrence in the logistics and manufacturing environments that Agility is targeting. Without a 60-minute reserve, the robot would be much more likely to run out of power mid-task and need to be manually recharged. Consider what that might look like with even a modest deployment of several hundred robots weighing over a hundred kilograms each. “No one wants to deal with that,” comments Wise.

Potential customers for humanoid robots are very concerned with downtime. Over the course of a month, a factory operating at 99 percent reliability will see approximately 5 hours of downtime. Wise says that any downtime that stops something like a production line can cost tens of thousands of dollars per minute, which is why many industrial customers expect a couple more 9s of reliability: 99.99 percent. Wise says that Agility has demonstrated this level of reliability in some specific applications, but not in the context of multipurpose or general-purpose functionality.

Humanoid Robot Safety

A humanoid robot in an industrial environment must meet general safety requirements for industrial machines. In the past, robotic systems like autonomous vehicles and drones have benefited from immature regulatory environments to scale quickly. But Wise says that approach can’t work for humanoids, because the industry is already heavily regulated—the robot is simply considered another piece of machinery.

There are also more specific safety standards currently under development for humanoid robots, explains Matt Powers, associate director of autonomy R&D at Boston Dynamics. He notes that his company is helping develop an International Organization for Standardization (ISO) safety standard for dynamically balancing legged robots. “We’re very happy that the top players in the field, like Agility and Figure, are joining us in developing a way to explain why we believe that the systems that we’re deploying are safe,” Powers says.

These standards are necessary because the traditional safety approach of cutting power may not be a good option for a dynamically balancing system. Doing so will cause a humanoid robot to fall over, potentially making the situation even worse. There is no simple solution to this problem, and the initial approach that Boston Dynamics expects to take with its Atlas robot is to keep the robot out of situations where simply powering it off might not be the best option. “We’re going to start with relatively low-risk deployments, and then expand as we build confidence in our safety systems,” Powers says. “I think a methodical approach is really going to be the winner here.”

In practice, low risk means keeping humanoid robots away from people. But humanoids that are restricted in what jobs they can safely do and where they can safely move are going to have more trouble finding tasks that provide value.
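Before moving on, a rough pass at the battery and reliability numbers quoted above. The 480-hour operating month is my assumption (roughly two shifts), chosen because it matches the approximately 5 hours of downtime cited:

```latex
\text{charge availability} \approx \frac{90\ \text{min running}}{90\ \text{min} + 9\ \text{min charging}} \approx 91\% \\[4pt]
\text{downtime at } 99\%\ \text{uptime} \approx (1 - 0.99) \times 480\ \text{h/month} \approx 4.8\ \text{h} \\[4pt]
\text{downtime at } 99.99\%\ \text{uptime} \approx (1 - 0.9999) \times 480\ \text{h/month} \approx 2.9\ \text{min}
```

At the “tens of thousands of dollars per minute” figure Wise cites, the gap between those two reliability levels is the difference between a rounding error and millions of dollars a month.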
Are Humanoids the Answer?

The issues of demand, battery life, reliability, and safety all need to be solved before humanoid robots can scale. But a more fundamental question to ask is whether a bipedal robot is actually worth the trouble. Dynamic balancing with legs would theoretically enable these robots to navigate complex environments like a human. Yet demo videos show these humanoid robots as either mostly stationary or repetitively moving short distances over flat floors.

The promise is that what we’re seeing now is just the first step toward humanlike mobility. But in the short to medium term, there are much more reliable, efficient, and cost-effective platforms that can take over in these situations: robots with arms, but with wheels instead of legs.

Safe and reliable humanoid robots have the potential to revolutionize the labor market at some point in the future. But potential is just that, and despite the humanoid enthusiasm, we have to be realistic about what it will take to turn potential into reality.

This article appears in the October 2025 print issue as “Why Humanoid Robots Aren’t Scaling.”
evanackerman.bsky.social
Large Behavior Models Are Helping Atlas Get to Work https://spectrum.ieee.org/boston-dynamics-atlas-scott-kuindersma
Boston Dynamics can be forgiven, I think, for the relative lack of acrobatic prowess displayed by the new version of Atlas in (most of) its latest videos. In fact, if you look at this Atlas video from late last year and compare it to Atlas’ most recent video, it’s doing what looks to be more or less the same logistics-y stuff—all of which is far less visually exciting than backflips. But I would argue that the relatively dull tasks Atlas is working on now, moving car parts and totes and whatnot, are just as impressive. Making a humanoid that can consistently and economically and safely do useful things over the long term could very well be the hardest problem in robotics right now, and Boston Dynamics is taking it seriously.

Last October, Boston Dynamics announced a partnership with Toyota Research Institute with the goal of general-purpose-izing Atlas. We’re now starting to see the results of that partnership, and Boston Dynamics’ vice president of robotics research, Scott Kuindersma, takes us through the progress they’ve made.

Building AI Generalist Robots

While the context of this work is “building AI generalist robots,” I’m not sure that anyone really knows what a “generalist robot” would actually look like, or even how we’ll know when someone has achieved it. Humans are generalists, sort of—we can potentially do a lot of things, and we’re fairly adaptable and flexible in many situations, but we still require training for most tasks. I bring this up just to try and contextualize expectations, because I think a successful humanoid robot doesn’t have to actually be a generalist, but instead just has to be capable of doing several different kinds of tasks, and to be adaptable and flexible in the context of those tasks. And that’s already difficult enough.

The approach that the two companies are taking is to leverage large behavior models (LBMs), which combine more general world knowledge with specific task knowledge to help Atlas with that adaptability and flexibility thing. As Boston Dynamics points out in a recent blog post, “the field is steadily accumulating evidence that policies trained on a large corpus of diverse task data can generalize and recover better than specialist policies that are trained to solve one or a small number of tasks.” Essentially, the goal is to develop a foundational policy that covers things like movement and manipulation, and then add more specific training (provided by humans) on top of that for specific tasks. The video below shows how that’s going so far.

What the video doesn’t show is the training system that Boston Dynamics uses to teach Atlas to do these tasks. It’s essentially imitation learning: an operator wearing a motion tracking system teleoperates Atlas through motion and manipulation tasks. There’s a one-to-one mapping between the operator and the robot, making it fairly intuitive, although as anyone who has tried to teleoperate a robot with a surfeit of degrees of freedom can attest, it takes some practice to do it well.

A motion tracking system provides high-quality task training data for Atlas. (Image: Boston Dynamics)

This interface provides very high-quality demonstration data for Atlas, but it’s not the easiest to scale—just one of the challenges of deploying a multipurpose (different than generalist!) humanoid. For more about what’s going on behind the scenes in this video and Boston Dynamics’ strategy with Atlas, IEEE Spectrum spoke with Kuindersma.
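Before the interview: the training recipe described above is, at its core, behavior cloning on logged teleoperation data. The sketch below is only a schematic of that idea, with random tensors standing in for encoded camera, language, and proprioception features and for operator commands; it is not Boston Dynamics’ or TRI’s actual model.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for a logged teleop dataset: per-step observation features and operator commands.
obs = torch.randn(4096, 128)    # e.g., encoded camera images + language embedding + proprioception
acts = torch.randn(4096, 24)    # e.g., the teleop commands the human interface sent to the robot

policy = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 24))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loader = DataLoader(TensorDataset(obs, acts), batch_size=256, shuffle=True)

for epoch in range(5):
    for o, a in loader:
        loss = nn.functional.mse_loss(policy(o), a)   # imitate the operator's commands
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
# A large behavior model is this same recipe scaled up: many tasks, many embodiments,
# language-conditioned inputs, and a far more expressive policy network.
```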
In a video from last October, just as your partnership with Toyota Research Institute was beginning, Atlas was shown moving parts around and performing whole-body manipulation. What’s the key difference between that demonstration and what we’re seeing in the new video?

Scott Kuindersma: The big difference is how we programmed the behavior. The previous system was a more traditional robotics stack involving a combination of model-based controllers, planners, and machine learning models for perception, all architected together to do end-to-end manipulation. Programming a new task on that system generally required roboticists or system integrators to touch code and tell the robot what to do. For this new video, we replaced most of that system with a single neural network that was trained on demonstration data. This is much more flexible because there’s no task-specific programming or other open-ended creative engineering required. Basically, if you can teleoperate the robot to do a task, you can train the network to reproduce that behavior. This approach is more flexible and scalable because it allows people without advanced degrees in robotics to “program” the robot.

We’re talking about a large behavior model (LBM) here, right? What would you call the kind of learning that this model does?

Kuindersma: It is a kind of imitation learning. We collect many teleoperation demonstrations and train a neural network to reproduce the input-output behaviors in the data. The inputs are things like raw robot camera images, natural language descriptions of the task, and proprioception, and the outputs are the same teleop commands sent by the human interface. What makes it a large behavior model is that we collect data from many different tasks and, in some cases, many different robot embodiments, using all of that as training data for the robot to end up with a single policy that knows how to do many things. The idea is that by training the network on a much wider variety of data and tasks and robots, its ability to generalize will be better. As a field, we are still in the early days of gathering evidence that this is actually the case (our [Toyota Research Institute] collaborators are among those leading the charge), but we expect it is true based on the empirical trends we see in robotics and other AI domains.

So the idea with the behavior model is that it will be more generalizable, more adaptable, or require less training because it will have a baseline understanding of how things work?

Kuindersma: Exactly, that’s the idea. At a certain scale, once the model has seen enough through its training data, it should have some ability to take what it’s learned from one set of tasks and apply those learnings to new tasks. One of the things that makes these models flexible is that they are conditioned on language. We collect teleop demonstrations and then post-annotate that data with language, having humans or language models describe in English what is happening. The network then learns to associate these language prompts with the robot’s behaviors. Then, you can tell the model what to do in English, and it has a chance of actually doing it. At a certain scale, we hope it won’t take hundreds of demonstrations for the robot to do a task; maybe only a couple, and maybe way in the future, you might be able to just tell the robot what to do in English, and it will know how to do it, even if the task requires dexterity beyond simple object pick-and-place.

There are a lot of robot videos out there of robots doing stuff that might look similar to what we’re seeing here. Can you tell me how what Boston Dynamics and Toyota Research Institute are doing is unique?

Kuindersma: Many groups are using AI tools for robot demos, but there are some differences in our strategic approach. From our perspective, it’s crucial for the robot to perform the full breadth of humanoid manipulation tasks. That means, if you use a data-driven approach, you need to somehow funnel those embodied experiences into the dataset you’re using to train the model. We spent a lot of time building a highly expressive teleop interface for Atlas, which allows operators to move the robot around quickly, take steps, balance on one foot, reach the floor and high shelves, throw and catch things, and so on. The ability to directly mirror a human body in real time is vital for Atlas to act like a real humanoid laborer. If you’re just standing in front of a table and moving things around, sure, you can do that with a humanoid, but you can do it with much cheaper and simpler robots, too. If you instead want to, say, bend down and pick up something from between your legs, you have to make careful adjustments to the entire body while doing manipulation. The tasks we’ve been focused on with Atlas over the last couple of months have been focused more on collecting this type of data, and we’re committed to making these AI models extremely performant so the motions are smooth, fast, beautiful, and fully cover what humanoids can do.

Is it a constraint that you’re using imitation learning, given that Atlas is built to move in ways that humans can’t? How do you expand the operating envelope with this kind of training?

Kuindersma: That’s a great question. There are a few ways to think about it.

Atlas can certainly do things like continuous joint rotation that people can’t. While those capabilities might offer efficiency benefits, I would argue that if Atlas only behaved exactly like a competent human, that would be amazing, and we would be very happy with that.

We could extend our teleop interface to make available types of motions the robot can do but a person can’t. The downside is this would probably make teleoperation less intuitive, requiring a more highly trained expert, which reduces scalability.

We may be able to co-train our large behavior models with data sources that are not just teleoperation-based. For example, in simulation, you could use rollouts from reinforcement learning policies or programmatic planners as augmented demonstrations that include these high-range-of-motion capabilities. The LBM can then learn to leverage that in conjunction with teleop demonstrations. This is not just a hypothetical: we’ve actually found that co-training with simulation data has improved performance on the real robot, which is quite promising.

Can you tell me what Atlas was directed to do in the video? Is it primarily trying to mirror its human-based training, or does it have some capacity to make decisions?

Kuindersma: In this case, Atlas is responding primarily to visual and language cues to perform the task. At our current scale and with the model’s training, there’s a limited ability to completely innovate behaviors. However, you can see a lot of variety and responsiveness in the details of the motion, such as where specific parts are in the bin or where the bin itself is. As long as those experiences are reflected somewhere in the training data, the robot uses its real-time sensor observations to produce the right type of response.

So, if the bin was too far away for the robot to reach, without specific training, would it move itself to the bin?

Kuindersma: We haven’t done that experiment, but if the bin was too far away, I think it might take a step forward, because we varied the initial conditions of the bin when we collected data, which sometimes required the operator to walk the robot to the bin. So there is a good chance that it would step forward, but there is also a small chance that it might try to reach and not succeed. It can be hard to make confident predictions about model behavior without running experiments, which is one of the fun features of working with models like this.

It’s interesting how a large behavior model, which provides world knowledge and flexibility, interacts with this instance of imitation learning, where the robot tries to mimic specific human actions. How much flexibility can the system take on when it’s operating based on human imitation?

Kuindersma: It’s primarily a question of scale. A large behavior model is essentially imitation learning at scale, similar to a large language model. The hypothesis with large behavior models is that as they scale, generalization capabilities improve, allowing them to handle more real-world corner cases and require less training data for new tasks. Currently, the generalization of these models is limited, but we’re addressing that by gathering more data, not only through teleoperating robots but also by exploring other scaling bets like non-teleop human demonstrations and sim/synthetic data. These other sources might have more of an “embodiment gap” to the robot, but the model’s ability to assimilate and translate between data sources could lead to better generalization.

How much skill or experience does it take to effectively train Atlas through teleoperation?

Kuindersma: We’ve had people on day tours jump in and do some teleop, moving the robot and picking things up. This ease of entry is thanks to our teams building a really nice interface: The user wears a VR headset, where they’re looking at a reprojection of the robot’s stereo RGB cameras, which are aligned to provide a 3D sense of vision, and there are built-in visual augmentations like desired hand locations and what the robot is actually doing to give people situational awareness. So while novice users can do things fairly easily, they’re probably not generating the highest-quality motions for training policies. To generate high-quality data, and to do that consistently over a period of several hours, it typically takes a couple of weeks of onboarding. We usually start with manipulation tasks and then progress to tasks involving repositioning the entire robot. It’s not trivial, but it’s doable. The people doing it now are not roboticists; we have a team of “robot teachers” who are hired for this, and they’re awesome. It gives us a lot of hope for scaling up the operation as we build more robots.

How is what you’re doing different from other companies that might lean much harder on scaling through simulation? Are you focusing more on how humans do things?

Kuindersma: Many groups are doing similar things, with differences in technical approach, platform, and data strategy. You can characterize the strategies people are taking by thinking about a “data pyramid,” where the top of the pyramid is the highest-quality, hardest-to-get data, which is typically teleoperation on the robot you’re working with. The middle of the pyramid might be egocentric data collected on people (e.g., by wearing sensorized gloves), simulation data, or other synthetic world models. And the bottom of the pyramid is data from YouTube or the rest of the Internet. Different groups allocate finite resources to different distributions of these data sources. For us, we believe it’s really important to have as large a baseline of actual on-robot data (at the top of the pyramid) as possible. Simulation and synthetic data are almost certainly part of the puzzle, and we’re investing resources there, but we’re taking a somewhat balanced data strategy rather than throwing all of our eggs in one basket.

Ideally you want the top of the pyramid to be as big as possible, right?

Kuindersma: Ideally, yes. But you won’t get to the scale you need by just doing that. You need the whole pyramid, but having as much high-quality data at the top as possible only helps.

But it’s not like you can just have a super large bottom of the pyramid and not need the top?

Kuindersma: I don’t think so. I believe there needs to be enough high-quality data for these models to effectively translate into the specific embodiment that they are executing on. There needs to be enough of that “top” data for the translation to happen, but no one knows the exact distribution, like whether you need 5 percent real robot data and 95 percent simulation, or some other ratio.

Is that a box of ‘Puny-os’ (part of this self-balancing robot) on the shelf in the video?

Kuindersma: Yeah! Alex Alspach from [Toyota Research Institute] brought it in to put in the background as an easter egg.

What’s next for Atlas?

Kuindersma: We’re really focused on maximizing the performance of manipulation behaviors. I think one of the things that we’re uniquely positioned to do well is reaching the full behavioral envelope of humanoids, including mobile bimanual manipulation, repetitive tasks, and strength, and getting the robot to move smoothly and dynamically using these models. We’re also developing repeatable processes to climb the robustness curve for these policies—we think reinforcement learning may play a key role in achieving this. We’re also looking at other types of scaling bets around these systems. Yes, it’s going to be very important that we have a lot of high-quality on-robot, on-task data that we’re using as part of training these models. But we also think there are real opportunities in being able to leverage other data sources, whether that’s observing or instrumenting human workers or scaling up synthetic and simulation data, and understanding how those things can mix together to improve the performance of our models.
evanackerman.bsky.social
Video Friday: Robot Vacuum Climbs Stairs https://spectrum.ieee.org/video-friday-eufy-robot-vacuum
Video Friday: Robot Vacuum Climbs Stairs
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ACTUATE 2025 : 23–24 September 2025, SAN FRANCISCO CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! This is ridiculous and I love it. [ Eufy ] At ICRA 2024, we met Paul Nadan to learn about how his LORIS robot climbs up walls by sticking itself to rocks. [ CMU ] If a humanoid robot is going to load my dishwasher, I expect it to do so optimally, not all haphazardly like a puny human. [ Figure ] Humanoid robots have recently achieved impressive progress in locomotion and whole-body control, yet they remain constrained in tasks that demand rapid interaction with dynamic environments through manipulation. Table tennis exemplifies such a challenge: with ball speeds exceeding 5 m/s, players must perceive, predict, and act within sub-second reaction times, requiring both agility and precision. To address this, we present a hierarchical framework for humanoid table tennis that integrates a model-based planner for ball trajectory prediction and racket target planning with a reinforcement learning–based whole-body controller. [ Hybrid Robotics ] Despite their promise, today’s biohybrid robots typically underperform their fully synthetic counterparts and their potential as predicted from a reductionist assessment of constituents. Many systems represent enticing proofs of concept with limited practical applicability. Most remain confined to controlled laboratory settings and lack feasibility in complex real-world environments. Developing biohybrid robots is currently a painstaking, bespoke process, and the resulting systems are routinely inadequately characterized. Complex, intertwined relationships between component, interface, and system performance are poorly understood, and methodologies to guide informed design of biohybrid systems are lacking. The HyBRIDS ARC opportunity seeks ideas to address the question: How can synthetic and biological components be integrated to enable biohybrid platforms that outperform traditional robotic systems? [ DARPA ] Robotic systems will play a key role in future lunar missions, and a great deal of research is currently being conducted in this area. One such project is SAMLER-KI (Semi-Autonomous Micro Rover for Lunar Exploration Using Artificial Intelligence), a collaboration between the German Research Center for Artificial Intelligence (DFKI) and the University of Applied Sciences Aachen (FH Aachen), Germany. The project focuses on the conceptual design of a semi-autonomous micro rover that is capable of surviving lunar nights while remaining within the size class of a micro rover. During development, conditions on the Moon such as dust exposure, radiation, and the vacuum of space are taken into account, along with the 14-Earth-day duration of a lunar night. [ DFKI ] ARMstrong Dex is a human-scale dual-arm hydraulic robot developed by the Korea Atomic Energy Research Institute (KAERI) for disaster response applications. It is capable of lifting its own body through vertical pull-ups and manipulating objects over 50 kg, demonstrating strength beyond human capabilities. In this test, ARMstrong Dex used a handheld saw to cut through a thick 40×90 mm wood beam.
Sawing is a physically demanding task involving repetitive force application, fine trajectory control, and real-time coordination. [ KAERI ] This robot stole my “OMG I HAVE JUICE” face. [ Pudu Robotics ] The best way of dodging a punch to the face is to just have a big hole where your face should be. I do wish they wouldn’t call it a combat robot, though. [ Unitree ] It really might be fun to have a DRC-style event for quadrupeds. [ DEEP Robotics ] CMU researchers are developing new technology to enable robots to physically interact with people who are not able to care for themselves. These breakthroughs are being deployed in the real world, making it possible for individuals with neurological diseases, stroke, multiple sclerosis, ALS, and dementia to be able to eat, clean, and get dressed fully on their own. [ CMU ] Caracol’s additive manufacturing platforms use KUKA robotic arms to produce large-scale industrial parts with precision and flexibility. This video outlines how Caracol integrates multi-axis robotics, modular extruders, and proprietary software to support production in sectors like aerospace, marine, automotive, and architecture. [ KUKA ] There were a couple of robots at ICRA 2025, as you might expect. [ ICRA ] On June 6, 1990, following the conclusion of Voyager’s planetary explorations, mission representatives held a news conference at NASA’s Jet Propulsion Laboratory in Southern California to summarize key findings and answer questions from the media. In the briefing, Voyager’s longtime project scientist Ed Stone, along with renowned science communicator Carl Sagan, also revealed the mission’s “Solar System Family Portrait,” a mosaic comprising images of six of the solar system’s eight planets. Carl Sagan was a member of the Voyager imaging team and instrumental in capturing these images and bringing them to the public. Carl Sagan, man. Carl Sagan. The Pale Blue Dot unveil was right around 57:00, if you missed it. [ JPL ]
evanackerman.bsky.social
Video Friday: Spot’s Got Talent
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. CLAWAR 2025 : 5–7 September 2025, SHENZHEN, CHINA ACTUATE 2025 : 23–24 September 2025, SAN FRANCISCO CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! Boston Dynamics is back and their dancing robot dogs are bigger, better, and bolder than ever! Watch as they bring a “dead” robot to life and unleash a never-before-seen synchronized dance routine to “Good Vibrations.” And much more interestingly, here’s a discussion of how they made it work: [ Boston Dynamics ] I don’t especially care whether a robot falls over. I care whether it gets itself back up again. [ LimX Dynamics ] The robot autonomously connects multiple wires to the environment using small flying anchors—drones equipped with anchoring mechanisms at the wire tips. Guided by an onboard RGB-D camera for control and environmental recognition, the system enables wire attachment in unprepared environments and supports simultaneous multi-wire connections, expanding the operational range of wire-driven robots. [ JSK Robotics Laboratory ] at [ University of Tokyo ] Thanks, Shintaro! For a robot that barely has a face, this is some pretty good emoting. [ Pollen ] Learning skills from human motions offers a promising path toward generalizable policies for whole-body humanoid control, yet two key cornerstones are missing: (1) a scalable, high-quality motion tracking framework that faithfully transforms kinematic references into robust, extremely dynamic motions on real hardware, and (2) a distillation approach that can effectively learn these motion primitives and compose them to solve downstream tasks. We address these gaps with BeyondMimic, a real-world framework to learn from human motions for versatile and naturalistic humanoid control via guided diffusion. [ Hybrid Robotics ] Introducing our open-source metal-made bipedal robot MEVITA. All components can be procured through e-commerce, and the robot is built with a minimal number of parts. All hardware, software, and learning environments are released as open source. [ MEVITA ] Thanks, Kento! I’ve always thought that being able to rent robots (or exoskeletons) to help you move furniture or otherwise carry stuff would be very useful. [ DEEP Robotics ] A new study explains how tiny water bugs use fan-like propellers to zip across streams at speeds up to 120 body lengths per second. The researchers then created a similar fan structure and used it to propel and maneuver an insect-sized robot. The discovery offers new possibilities for designing small machines that could operate during floods or other challenging situations. [ Georgia Tech ] Dynamic locomotion of legged robots is a critical yet challenging topic in expanding the operational range of mobile robots. To achieve generalized legged locomotion on diverse terrains while preserving the robustness of learning-based controllers, this paper proposes to learn an attention-based map encoding conditioned on robot proprioception, which is trained as part of the end-to-end controller using reinforcement learning.
We show that the network learns to focus on steppable areas for future footholds when the robot dynamically navigates diverse and challenging terrains. [ Paper ] from [ ETH Zurich ] In the fifth installment of our Moonshot Podcast Deep Dive video interview series, X’s Captain of Moonshots Astro Teller sits down with Google DeepMind’s Chief Scientist Jeff Dean for a conversation about the origin of Jeff’s pioneering work scaling neural networks. They discuss the first time AI captured Jeff’s imagination, the earliest Google Brain framework, the team’s stratospheric advancements in image recognition and speech-to-text, how AI is evolving, and more. [ Moonshot Podcast ]
evanackerman.bsky.social
Video Friday: Inaugural World Humanoid Robot Games Held https://spectrum.ieee.org/world-humanoid-robot-games
Video Friday: Inaugural World Humanoid Robot Games Held
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RO-MAN 2025 : 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025 : 5–7 September 2025, SHENZHEN, CHINA ACTUATE 2025 : 23–24 September 2025, SAN FRANCISCO CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! The First World Humanoid Robot Games Conclude Successfully! Unitree Strikes Four Golds (1500m, 400m, 100m Obstacle, 4×100m Relay). [ Unitree ] Steady! PNDbotics Adam has become the only full-size humanoid robot athlete to successfully finish the 100m Obstacle Race at the World Humanoid Robot Games! [ PNDbotics ] Introducing Field Foundation Models (FFMs) from FieldAI - a new class of “physics-first” foundation models built specifically for embodied intelligence. Unlike conventional vision or language models retrofitted for robotics, FFMs are designed from the ground up to grapple with uncertainty, risk, and the physical constraints of the real world. This enables safe and reliable robot behaviors when managing scenarios that they have not been trained on, navigating dynamic, unstructured environments without prior maps, GPS, or predefined paths. [ Field AI ] Multiply Labs, leveraging Universal Robots’ collaborative robots, has developed a groundbreaking robotic cluster that is fundamentally transforming the manufacturing of life-saving cell and gene therapies. The Multiply Labs solution drives a staggering 74% cost reduction and enables up to 100x more patient doses per square foot of cleanroom. [ Universal Robots ] In this video, we put Vulcan V3, the world’s first ambidextrous humanoid robotic hand capable of performing the full American Sign Language (ASL) alphabet, to the ultimate test—side by side with a real human! [ Hackaday ] Thanks, Kelvin! More robots need to have this form factor. [ Texas A & M University ] Robotic vacuums are so pervasive now that it’s easy to forget how much of an icon the iRobot Roomba has been. [ iRobot ] This is quite possibly the largest robotic hand I’ve ever seen. [ CAFE Project ] via [ BUILT ] Modular robots built by Dartmouth researchers are finding their feet outdoors. Engineered to assemble into structures that best suit the task at hand, the robots are pieced together from cube-shaped robotic blocks that combine rigid rods and soft, stretchy strings whose tension can be adjusted to deform the blocks and control their shape. [ Dartmouth ] Our quadruped robot X30 has completed extreme-environment missions in Hoh Xil—supporting patrol teams, carrying vital supplies, and protecting fragile ecosystems. [ DEEP Robotics ] We propose a base-shaped robot named “koboshi” that moves everyday objects. This koboshi has a spherical surface in contact with the floor, and by moving a weight inside using built-in motors, it can rock up and down, and side to side. By placing everyday items on this koboshi, users can impart new movement to otherwise static objects. The koboshi is equipped with sensors to measure its posture, enabling interaction with users. Additionally, it has communication capabilities, allowing multiple units to communicate with each other. 
[ Paper ] Bi-LAT is the world’s first Vision-Language-Action (VLA) model that integrates bilateral control into imitation learning, enabling robots to adjust force levels based on natural language instructions. [ Bi-LAT ] to be presented at [ IEEE RO-MAN 2025 ] Thanks, Masato! Look at this jaunty little guy! Although, they very obviously cut the video right before it smashes face first into furniture more than once. [ Paper ] to be presented at [ 2025 IEEE-RAS International Conference on Humanoid Robotics ] This research has been conducted at the Human Centered Robotics Lab at UT Austin. The video shows our latest experimental bipedal robot, dubbed Mercury, which has passive feet. This means that there are no actuated ankles, unlike humans, forcing Mercury to gain balance by dynamically stepping. [ University of Texas at Austin Human Centered Robotics Lab ] We put two RIVR delivery robots to work with an autonomous vehicle — showing how Physical AI can handle the full last mile, from warehouse to consumers’ doorsteps. [ Rivr ] The KR TITAN ultra is a high-performance industrial robot weighing 4.6 tonnes and capable of handling payloads up to 1.5 tonnes. [ Kuka ] CMU MechE’s Ding Zhao and Ph.D. student Yaru Niu describe LocoMan, a robotic assistant they have been developing. [ Carnegie Mellon University ] Twenty-two years ago, Silicon Valley executive Henry Evans had a massive stroke that left him mute and paralyzed from the neck down. But that didn’t prevent him from becoming a leading advocate of adaptive robotic tech to help disabled people – or from writing country songs, one letter at a time. Correspondent John Blackstone talks with Evans about his upbeat attitude and unlikely pursuits. [ CBS News ]
evanackerman.bsky.social
Video Friday: SCUTTLE
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RO-MAN 2025 : 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025 : 5–7 September 2025, SHENZHEN, CHINA ACTUATE 2025 : 23–24 September 2025, SAN FRANCISCO CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! Check out our latest innovations on SCUTTLE, advancing multilegged mobility anywhere. [ GCR ] That laundry folding robot we’ve been working on for 15 years is still not here yet. Honestly I think Figure could learn a few tricks from vintage UC Berkeley PR2, though: - YouTube [ Figure ] Tensegrity robots are so cool, but so hard—it’s good to see progress. [ Michigan Robotics ] We should find out next week how quick this is. [ Unitree ] We introduce a methodology for task-specific design optimization of multirotor Micro Aerial Vehicles. By leveraging reinforcement learning, Bayesian optimization, and covariance matrix adaptation evolution strategy, we optimize aerial robot designs guided only by their closed-loop performance in a considered task. Our approach systematically explores the design space of motor pose configurations while ensuring manufacturability constraints and minimal aerodynamic interference. Results demonstrate that optimized designs achieve superior performance compared to conventional multirotor configurations in agile waypoint navigation tasks, including against fully actuated designs from the literature. We build and test one of the optimized designs in the real world to validate the sim2real transferability of our approach. [ ARL ] Thanks, Kostas! I guess legs are required for this inspection application because of the stairs right at the beginning? But sometimes, that’s how the world is. [ DEEP Robotics ] The Institute of Robotics and Mechatronics at DLR has a long tradition in developing multi-fingered hands, creating novel mechatronic concepts as well as autonomous grasping and manipulation capabilities. The range of hands spans from Rotex, a first two-fingered gripper for space applications, to the highly anthropomorphic Awiwi Hand and variable stiffness end effectors. This video summarizes the developments of DLR in this field over the past 30 years, starting with the Rotex experiment in 1993. [ DLR RM ] The quest for agile quadrupedal robots is limited by handcrafted reward design in reinforcement learning. While animal motion capture provides 3D references, its cost prohibits scaling. We address this with a novel video-based framework. The proposed framework significantly advances robotic locomotion capabilities. [ Arc Lab ] Serious question: Why don’t humanoid robots sit down more often? [ EngineAI ] And now, this. [ LimX Dynamics ] NASA researchers are currently using wind tunnel and flight tests to gather data on an electric vertical takeoff and landing (eVTOL) scaled-down small aircraft that resembles an air taxi that aircraft manufacturers can use for their own designs. By using a smaller version of a full-sized aircraft called the RAVEN Subscale Wind Tunnel and Flight Test (RAVEN SWFT) vehicle, NASA is able to conduct its tests in a fast and cost-effective manner. 
[ NASA ] This video details the advances in orbital manipulation made by DLR’s Robotic and Mechatronics Center over the past 30 years, paving the way for the development of robotic technology for space sustainability. [ DLR RM ] This summer, a team of robots explored a simulated Martian landscape in Germany, remotely guided by an astronaut aboard the International Space Station. This marked the fourth and final session of the Surface Avatar experiment, a collaboration between ESA and the German Aerospace Center (‪DLR) to develop how astronauts can control robotic teams to perform complex tasks on the Moon and Mars. [ ESA ]
evanackerman.bsky.social
Video Friday: Unitree’s A2 Quadruped Goes Exploring https://spectrum.ieee.org/video-friday-exploration-robots
Video Friday: Unitree’s A2 Quadruped Goes Exploring
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. World Humanoid Robot Games : 15–17 August 2025, BEIJING RO-MAN 2025 : 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025 : 5–7 September 2025, SHENZHEN, CHINA ACTUATE 2025 : 23–24 September 2025, SAN FRANCISCO CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! The A2 sets a new standard in quadruped robots, balancing endurance, strength, speed, and perception. The A2 weighs 37 kg (81.6 lbs) unloaded. Fully loaded with a 25 kg (55 lbs) payload, it can continuously walk for 3 hours or approximately 12.5 km. Unloaded, it can continuously walk for 5 hours or approximately 20 km. Hot-swappable dual batteries enable seamless battery swap and continuous runtime for any mission. [ Unitree ] Thanks, William! ABB is working with Cosmic Buildings to reshape how communities rebuild and transform construction after disaster. In response to the 2025 Southern California wildfires, Cosmic Buildings are deploying mobile robotic microfactories to build modular homes on-site—cutting construction time by 70% and costs by 30%. [ ABB ] Thanks, Caitlin! How many slightly awkward engineers can your humanoid robot pull? [ MagicLab ] The physical robot hand does some nifty stuff at about 1 minute in. [ ETH Zurich Soft Robotics Lab ] Biologists, you can all go home now. [ AgileX ] The World Humanoid Robot Games starts next week in Beijing, and of course Tech United Eindhoven are there. [ Tech United ] Our USX-1 Defiant is a new kind of autonomous maritime platform , with the potential to transform the way we design and build ships. As the team prepares Defiant for an extended at-sea demonstration, program manager Greg Avicola shares the foundational thinking behind the breakthrough vessel. [ DARPA ] After loss, how do you translate grief into creation? Meditation Upon Death is Paul Kirby’s most personal and profound painting—a journey through love, loss, and the mystery of the afterlife. Inspired by a conversation with a Native American shaman and years of artistic exploration, Paul fuses technology and traditional art to capture the spirit’s passage beyond. With 5,796 brushstrokes, a custom-built robotic painting system, and a vision shaped by memory and devotion, this is the most important painting he has ever made. [ Dulcinea ] Thanks, Alexandra! In the fourth installment of our Moonshot Podcast Deep Dive video interview series, X’s Captain of Moonshots Astro Teller sits down with Andrew Ng, the founder of Google Brain and DeepLearning.AI, for a conversation about the history of neural network research and how Andrew’s pioneering ideas led to some of the biggest breakthroughs in modern-day AI. [ Moonshot Podcast ]
evanackerman.bsky.social
Video Friday: Dance With CHILD
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RO-MAN 2025 : 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025 : 5–7 September 2025, SHENZHEN, CHINA ACTUATE 2025 : 23–24 September 2025, SAN FRANCISCO CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! Many parents naturally teach motions to their child while using a baby carrier. In this setting, the parent’s range of motion fully encompasses the child’s, making it intuitive to scale down motions in a puppeteering manner. This inspired UIUC KIMLAB to build CHILD: Controller for Humanoid Imitation and Live Demonstration. The role of teleoperation has grown increasingly important with the rising interest in collecting physical data in the era of Physical/Embodied AI. We demonstrate the capabilities of CHILD through loco-manipulation and full-body control experiments using the Unitree G1 and other PAPRAS dual-arm systems. To promote accessibility and reproducibility, we open-source the hardware design. [ KIMLAB ] This costs less than US $6,000. [ Unitree ] If I wasn’t sold on one of these little Reachy Minis before, I definitely am now. [ Pollen ] In this study, we propose a falconry-like interaction system in which a flapping-wing drone performs autonomous palm landing motion on a human hand. To achieve a safe approach toward humans, our motion planning method considers both physical and psychological factors. I should point out that palm landings are not falconry-like at all, and that if you’re doing falconry right, the bird should be landing on your wrist instead. I have other hobbies besides robots, you know! [ Paper ] I’m not sure that augmented reality is good for all that much, but I do like this use case of interactive robot help. [ MRHaD ] Thanks, Masato! LimX Dynamics officially launched its general-purpose full-size humanoid robot LimX Oli. It’s currently available only in Mainland China. A global version is coming soon. Standing at 165 cm and equipped with 31 active degrees of freedom (excluding end-effectors), LimX Oli adopts a general-purpose humanoid configuration with modular hardware-software architecture and is supported by a development toolchain. It is built to advance embodied AI development from algorithm research to real-world deployment. [ LimX Dynamics ] Thanks, Jinyan! Meet Treadward – the newest robot from HEBI Robotics, purpose-built for rugged terrain, inspection missions, and real-world fieldwork. Treadward combines high mobility with extreme durability, making it ideal for challenging environments like waterlogged infrastructure, disaster zones, and construction sites. With a compact footprint and treaded base, it can climb over debris, traverse uneven ground, and carry substantial payloads. [ HEBI ] PNDbotics made a stunning debut at the 2025 World Artificial Intelligence Conference (WAIC) with the first-ever joint appearance of its full-sized humanoid robot Adam and its intelligent data-collection counterpart Adam-U. [ PNDbotics ] This paper presents the design, development, and validation of a fully autonomous dual-arm aerial robot capable of mapping, localizing, planning, and grasping parcels in an intra-logistics scenario. 
The aerial robot is intended to operate in a scenario comprising several supply points, delivery points, parcels with tags, and obstacles, generating the mission plan from the voice commands given by the user. [ GRVC ] We left the room. They took over. No humans. No instructions. Just robots... moving, coordinating, showing off. It almost felt like… they were staging something. [ AgileX ] TRI’s internship program offers a unique opportunity to work closely with our researchers on technologies to improve the quality of life for individuals and society. Here’s a glimpse into that experience from some of our 2025 interns! [ TRI ] In the third installment of our Moonshot Podcast Deep Dive video interview series, X’s Captain of Moonshots Astro Teller sits down with Dr. Catie Cuan, robot choreographer and former artist in residence at Everyday Robots, for a conversation about how dance can be used to build beautiful and useful robots that people want to be around. [ Moonshot Podcast ]
evanackerman.bsky.social
Video Friday: Skyfall Takes on Mars With Swarm Helicopter Concept https://spectrum.ieee.org/video-friday-skyfall-mars-helicopter
Video Friday: Skyfall Takes on Mars With Swarm Helicopter Concept
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RO-MAN 2025 : 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025 : 5–7 September 2025, SHENZHEN, CHINA ACTUATE 2025 : 23–24 September 2025, SAN FRANCISCO CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! AeroVironment revealed Skyfall—a potential future mission concept for next-generation Mars Helicopters developed with NASA’s Jet Propulsion Laboratory (JPL) to help pave the way for human landing on Mars through autonomous aerial exploration. The concept is heavily focused on rapidly delivering an affordable, technically mature solution for expanded Mars exploration that would be ready for launch by 2028. Skyfall is designed to deploy six scout helicopters on Mars, where they would explore many of the sites selected by NASA and industry as top candidate landing sites for America’s first Martian astronauts. While exploring the region, each helicopter can operate independently, beaming high-resolution surface imaging and sub-surface radar data back to Earth for analysis, helping ensure crewed vehicles make safe landings at areas with maximum amounts of water, ice, and other resources. The concept would be the first to use the “Skyfall Maneuver”–an innovative entry, descent, and landing technique whereby the six rotorcraft deploy from their entry capsule during its descent through the Martian atmosphere. By flying the helicopters down to the Mars surface under their own power, Skyfall would eliminate the necessity for a landing platform–traditionally one of the most expensive, complex, and risky elements of any Mars mission. [ AeroVironment ] By far the best part of videos like these is watching the expressions on the faces of the students when their robot succeeds at something. [ RaiLab ] This is just a rendering of course, but the real thing should be showing up on August 6. [ Fourier ] Top performer in its class! Less than two weeks after its last release, MagicLab unveils another breakthrough — MagicDog-W, the wheeled quadruped robot. Cyber-flex, dominate all terrains! [ MagicLab ] Inspired by the octopus’s remarkable ability to wrap and grip with precision, this study introduces a vacuum-driven, origami-inspired soft actuator that mimics such versatility through self-folding design and high bending angles. Its crease-free, 3D-printable structure enables compact, modular robotics with enhanced grasping force—ideal for handling objects of various shapes and sizes using octopus-like suction synergy. [ Paper ] via [ IEEE Transactions on Robots ] Thanks, Bram! Is it a plane? Is it a helicopter? Yes. [ Robotics and Intelligent Systems Laboratory, City University of Hong Kong ] You don’t need wrist rotation as long as you have the right gripper . [ Nature Machine Intelligence ] ICRA 2026 will be in Vienna next June! [ ICRA 2026 ] Boing, boing, boing! [ Robotics and Intelligent Systems Laboratory, City University of Hong Kong ] ROBOTERA Unveils L7: Next-Generation Full-Size Bipedal Humanoid Robot with Powerful Mobility and Dexterous Manipulation! [ ROBOTERA ] Meet UBTECH New-Gen of Industrial Humanoid Robot—Walker S2 makes multiple industry-leading breakthroughs! 
Walker S2 is the world’s first humanoid robot to achieve 3-minute autonomous battery swapping and 24/7 continuous operation. [ UBTECH ] ARMstrong Dex is a human-scale dual-arm hydraulic robot developed by the Korea Atomic Energy Research Institute (KAERI) for disaster response. It can perform vertical pull-ups and manipulate loads over 50 kg, demonstrating strength beyond human capabilities. However, disaster environments also require agility and fast, precise movement. This test evaluated ARMstrong Dex’s ability to throw a 500 ml water bottle (0.5 kg) into a target container. The experiment assessed high-speed coordination, trajectory control, and endpoint accuracy, which are key attributes for operating in dynamic rescue scenarios. [ KAERI ] This is not a humanoid robot, it’s a data acquisition platform. [ PNDbotics ] Neat feature on this drone to shift the battery back and forth to compensate for movement of the arm. [ Paper ] via [ Drones journal ] As residential buildings become taller and more advanced, the demand for seamless and secure in-building delivery continues to grow. In high-end apartments and modern senior living facilities where couriers cannot access upper floors, robots like FlashBot Max are becoming essential. In this featured elderly care residence, FlashBot Max completes 80-100 deliveries daily, seamlessly navigating elevators, notifying residents upon arrival, and returning to its charging station after each delivery. [ Pudu Robotics ] “How to Shake Trees With Aerial Manipulators.” [ GRVC ] We see a future where seeing a cobot in a hospital delivering supplies feels as normal as seeing a tractor in a field. Watch our CEO Brad Porter share what robots moving in the world should feel like. [ Cobot ] Introducing the Engineered Arts UI for robot Roles, it’s now simple to set up a robot to behave exactly the way you want it to. We give a quick overview of customization for languages, personality, knowledge and abilities. All of this is done with no code. Just simple LLM prompts, drop down list selections and some switches to enable the features you need. [ Engineered Arts ] Unlike most quadrupeds, CARA doesn’t use any gears or pulleys. Instead, her joints are driven by rope through capstan drives. Capstan drives offer several advantages: zero backlash, high torque transparency, low inertia, low cost, and quiet operation. These qualities make them an ideal speed reducer for robotics. [ CARA ]
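A quick aside on the capstan-drive description above: the speed-reduction and no-slip arithmetic behind those claims is simple enough to sketch. The numbers below are illustrative assumptions, not CARA’s actual geometry; the reduction ratio is just the joint drum radius divided by the capstan radius, and the classic capstan equation bounds how much tension the wrapped cable can hold before it slips.

import math

# Illustrative numbers only; CARA's actual dimensions are not published here.
capstan_radius_mm = 5.0    # small drum on the motor shaft (assumed)
joint_radius_mm = 60.0     # larger arc fixed to the joint (assumed)
reduction = joint_radius_mm / capstan_radius_mm   # 12:1 speed reduction, no gears

mu = 0.2                   # assumed cable-on-drum friction coefficient
wraps = 4                  # number of times the cable wraps around the capstan
theta = 2 * math.pi * wraps
holding_ratio = math.exp(mu * theta)   # capstan equation: T_load / T_hold <= e^(mu * theta)

print(f"reduction ~{reduction:.0f}:1, slip-free tension ratio up to ~{holding_ratio:.0f}x")

The zero-backlash property comes from keeping the cable pretensioned rather than from the friction math itself; the wraps just ensure the cable never slips under load.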
evanackerman.bsky.social
Video Friday: Robot Metabolism
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RO-MAN 2025 : 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025 : 5–7 September 2025, SHENZHEN, CHINA ACTUATE 2025 : 23–24 September 2025, SAN FRANCISCO CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! Columbia University researchers introduce a process that allows machines to “grow” physically by integrating parts from their surroundings or from other robots, demonstrating a step towards self-sustaining robot ecologies. [ Robot Metabolism ] via [ Columbia ] We challenged ourselves to see just how far we could push Digit’s ability to stabilize itself in response to a disturbance. Utilizing state-of-the-art AI technology and robust physical intelligence, Digit can adapt to substantial disruptions, all without the use of visual perception. [ Agility Robotics ] We are presenting the Figure 03 (F.03) battery — a significant advancement in our core humanoid robot technology roadmap. The effort that was put into safety for this battery is impressive. But I would note two things: the battery life is “5 hours of run time at peak performance” without saying what “peak performance” actually means, and 2-kilowatt fast charge still means over an hour to fully charge. [ Figure ] Well this is a nifty idea. [ UBTECH ] PAPRLE is a plug-and-play robotic limb environment for flexible configuration and control of robotic limbs across applications. With PAPRLE, users can use diverse configurations of leader-follower pairs for teleoperation. In the video, we show several teleoperation examples supported by PAPRLE. [ PAPRLE ] Thanks, Joohyung! Always nice to see a robot with a carefully thought out commercial use case in which it can just do robot stuff like a robot. [ Cohesive Robotics ] Thanks, David! We are interested in deploying autonomous legged robots in diverse environments, such as industrial facilities and forests. As part of the DigiForest project, we are working on new systems to autonomously build forest inventories with legged platforms, which we have deployed in the UK, Finland, and Switzerland. [ Oxford ] Thanks, Matias! In this research we introduce a self-healing, biocompatible strain sensor using Galinstan and a Diels-Alder polymer, capable of restoring both mechanical and sensing functions after repeated damage. This highly stretchable and conductive sensor demonstrates strong performance metrics—including 80% mechanical healing efficiency and 105% gauge factor recovery—making it suitable for smart wearable applications. [ Paper ] Thanks, Bram! The “Amazing Hand” from Pollen Robotics costs under $250. [ Pollen ] Welcome to our Unboxing Day! After months of waiting, our humanoid robot has finally arrived at Fraunhofer IPA in Stuttgart. I used to take stretching classes from a woman who could do this backwards in 5.43 seconds. [ Fraunhofer ] At the Changchun stop of the VOYAGEX Music Festival on July 12, PNDbotics’ full-sized humanoid robot Adam took the stage as a keytar player with the famous Chinese musician Hu Yutong’s band. [ PNDbotics ] Material movement is the invisible infrastructure of hospitals, airports, cities–everyday life.
We build robots that support the people doing this essential, often overlooked work. Watch our CEO Brad Porter reflect on what inspired Cobot. [ Cobot ] Yes please. [ Pollen ] I think I could get to the point of being okay with this living in my bathroom. [ Paper ] Thanks to its social perception, high expressiveness and out-of-the-box integration, TIAGo Head offers the ultimate human-robot interaction experience. [ PAL Robotics ] Sneak peek: Our No Manning Required Ship (NOMARS) Defiant unmanned surface vessel is designed to operate for up to a year at sea without human intervention. In-water testing is preparing it for an extended at-sea demonstration of reliability and endurance. Excellent name for any ship. [ DARPA ] At the 22nd International Conference on Ubiquitous Robots (UR2025), high school student and robotics researcher Ethan Hong was honored as a Special Invited Speaker for the conference banquet and “Robots With Us” panel. In this heartfelt and inspiring talk, Ethan shares the story behind Food Angel — a food delivery robot he designed and built to support people experiencing homelessness in Los Angeles. Motivated by the growing crises of homelessness and food insecurity, Ethan asked a simple but profound question: “Why not use robots to help the unhoused?” [ UR2025 ]
evanackerman.bsky.social
Video Friday: Reachy Mini Brings Cute to Open-Source Robotics https://spectrum.ieee.org/video-friday-reachy-mini
Video Friday: Reachy Mini Brings Cute to Open-Source Robotics
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. IFAC Symposium on Robotics : 15–18 July 2025, PARIS RoboCup 2025 : 15–21 July 2025, BAHIA, BRAZIL RO-MAN 2025 : 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025 : 5–7 September 2025, SHENZHEN, CHINA ACTUATE 2025 : 23–24 September 2025, SAN FRANCISCO CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! Reachy Mini is an expressive, open-source robot designed for human-robot interaction, creative coding, and AI experimentation. Fully programmable in Python (and soon JavaScript, Scratch) and priced from $299, it’s your gateway into robotics AI: fun, customizable, and ready to be part of your next coding project. I’m so happy that Pollen and Reachy found a home with Hugging Face, but I hope they understand that they are never, ever allowed to change that robot’s face. O-o [ Reachy Mini ] via [ Hugging Face ] General-purpose robots promise a future where household assistance is ubiquitous and aging in place is supported by reliable, intelligent help. These robots will unlock human potential by enabling people to shape and interact with the physical world in transformative new ways. At the core of this transformation are Large Behavior Models (LBMs) - embodied AI systems that take in robot sensor data and output actions. LBMs are pretrained on large, diverse manipulation datasets and offer the key to realizing robust, general-purpose robotic intelligence. Yet despite their growing popularity, we still know surprisingly little about what today’s LBMs actually offer - and at what cost. This uncertainty stems from the difficulty of conducting rigorous, large-scale evaluations in real-world robotics. As a result, progress in algorithm and dataset design is often guided by intuition rather than evidence, hampering progress. Our work aims to change that. [ Toyota Research Institute ] Kinisi Robotics is advancing the frontier of physical intelligence by developing AI-driven robotic platforms capable of high-speed, autonomous pick-and-place operations in unstructured environments. This video showcases Kinisi’s latest wheeled-base humanoid performing dexterous bin stacking and item sorting using closed-loop perception and motion planning. The system combines high-bandwidth actuation, multi-arm coordination, and real-time vision to achieve robust manipulation without reliance on fixed infrastructure. By integrating custom hardware with onboard intelligence, Kinisi enables scalable deployment of general-purpose robots in dynamic warehouse settings, pushing toward broader commercial readiness for embodied AI systems. [ Kinisi Robotics ] Thanks, Bren! In this work, we develop a data collection system where human and robot data are collected and unified in a shared space, and propose a modularized cross-embodiment Transformer that is pretrained on human data and fine-tuned on robot data. This enables high data efficiency and effective transfer from human to quadrupedal embodiments, facilitating versatile manipulation skills for unimanual and bimanual, non-prehensile and prehensile, precise tool-use, and long-horizon tasks, such as cat litter scooping! [ Human2LocoMan ] Thanks, Yaru! 
LEIYN is a quadruped robot equipped with an active waist joint. It achieves the world’s fastest chimney climbing through dynamic motions learned via reinforcement learning. [ JSK Lab ] Thanks, Keita! Quadrupedal robots are really just bipedal robots that haven’t learned to walk on two legs yet. [ Adaptive Robotic Controls Lab, University of Hong Kong ] This study introduces a biomimetic self-healing module for tendon-driven legged robots that uses robot motion to activate liquid metal sloshing, which removes surface oxides and significantly enhances healing strength. Validated on a life-sized monopod robot, the module enables repeated squatting after impact damage, marking the first demonstration of active self-healing in high-load robotic applications. [ University of Tokyo ] Thanks, Kento! That whole putting wheels on quadruped robots thing was a great idea that someone had way back when. [ Pudu Robotics ] I know nothing about this video except that it’s very satisfying and comes from a YouTube account that hasn’t posted in 6 years. [ Young-jae Bae YouTube ] Our AI WORKER now comes in a new Swerve Drive configuration, optimized for logistics environments. With its agile and omnidirectional movement, the swerve-type mobile base can efficiently perform various logistics tasks such as item transport, shelf navigation, and precise positioning in narrow aisles. Wait, you can have a bimanual humanoid without legs? I am shocked. [ ROBOTIS ] I can’t tell whether I need an office assistant, or if I just need snacks. [ PNDbotics ] “MagicBot Z1: Atomic kinetic energy, the brave are fearless,” says the MagicBot website. Hard to argue with that! [ MagicLab ] We’re excited to announce our new HQ in Palo Alto [CA]. As we grow, consolidating our Sunnyvale [CA] and Moss [Norway] team under one roof will accelerate our speed to ramping production and getting NEO into homes near you. I’m not entirely sure that moving from Norway to California is an upgrade, honestly. [ 1X ] Jim Kernan, Chief Product Officer at Engineered Arts, shares how they’re commercializing humanoid robots—blending AI, expressive design, and real-world applications to build trust and engagement. [ Humanoids Summit ] In the second installment of our Moonshot Podcast Deep Dive video interview series, X’s Captain of Moonshots Astro Teller sits down with André Prager, former Chief Engineer at Wing, for a conversation about the early days of Wing and how the team solved some of their toughest engineering challenges to develop simple, lightweight, inexpensive delivery drones that are now being used every day across three continents. [ Moonshot Podcast ]
evanackerman.bsky.social
Video Friday: Cyborg Beetles May Speed Disaster Response One Day https://spectrum.ieee.org/video-friday-cyborg-beetles
Video Friday: Cyborg Beetles May Speed Disaster Response One Day
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. IEEE World Haptics : 8–11 July 2025, SUWON, SOUTH KOREA IFAC Symposium on Robotics : 15–18 July 2025, PARIS RoboCup 2025 : 15–21 July 2025, BAHIA, BRAZIL RO-MAN 2025 : 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025 : 5–7 September 2025, SHENZHEN, CHINA ACTUATE 2025 : 23–24 September 2025, SAN FRANCISCO CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! Common beetles equipped with microchip backpacks could one day be used to help search and rescue crews locate survivors within hours instead of days following disasters such as building and mine collapses. The University of Queensland’s Dr. Thang Vo-Doan and Research Assistant Lachlan Fitzgerald have demonstrated they can remotely guide darkling beetles (Zophobas morio) fitted with the packs via video game controllers. [ Paper ] via [ University of Queensland ] Thanks, Thang! This is our latest work about six-DoF hand-based teleoperation for omnidirectional aerial robots, which shows an intuitive teleoperation system for advanced aerial robots. This work has been presented at the 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2025). [ DRAGON Lab ] Thanks, Moju! Pretty sure we’ve seen this LimX humanoid before, and we’re seeing it again right now, but hey, the first reveal is just ahead! [ LimX Dynamics ] Thanks, Jinyan! Soft robot arms use soft materials and structures to mimic the passive compliance of biological arms that bend and extend. Here, we show how relying on patterning structures instead of inherent material properties allows soft robotic arms to remain compliant while continuously transmitting torque to their environment. We demonstrate a soft robotic arm made from a pair of mechanical metamaterials that act as compliant constant-velocity joints. [ Paper ] via [ Transformative Robotics Lab ] Selling a platform is really hard, but I hope K-Scale can succeed with their open source humanoid. [ K-Scale ] MIT CSAIL researchers combined GenAI and a physics simulation engine to refine robot designs. The result: a machine that out-jumped a robot designed by humans. [ MIT News ] ARMstrong Dex is a human-scale dual-arm hydraulic robot under development at the Korea Atomic Energy Research Institute (KAERI) for disaster response applications. Designed with dimensions similar to an adult human, it combines human-equivalent reach and dexterity with force output that exceeds human physical capabilities, enabling it to perform extreme heavy-duty tasks in hazardous environments. [ Korea Atomic Energy Research Institute ] This is a demonstration of in-hand object rotation with Torobo Hand. Torobo Hand is modeled in simulation, and a control policy is trained within several hours using large-scale parallel reinforcement learning in Isaac Sim. The trained policy can be executed without any additional training in both a different simulator (MuJoCo) and on the real Torobo Hand. [ Tokyo Robotics ] Since 2005, Ekso Bionics has been developing and manufacturing exoskeleton bionic devices that can be strapped on as wearable robots to enhance the strength, mobility, and endurance of soldiers, patients, and workers.
These robots have a variety of applications in the medical, military, industrial, and consumer markets, helping rehabilitation patients walk again and workers preserve their strength. [ Ekso Bionics ] Sponsored by Raytheon, an RTX business, the 2025 east coast Autonomous Vehicle Competition was held at XElevate in Northern Virginia. Student Engineering Teams from five universities participated in a two-semester project to design, develop, integrate, and compete two autonomous vehicles that could identify, communicate, and deliver a medical kit with the best accuracy and time. [ RTX ] This panel is from the Humanoids Summit in London: “Investing in the Humanoids Robotics Ecosystem—a VC Perspective.” [ Humanoids Summit ]
evanackerman.bsky.social
Video Friday: This Quadruped Throws With Its Whole Body https://spectrum.ieee.org/robot-arm-thrower
Video Friday: This Quadruped Throws With Its Whole Body
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. IAS 2025 : 30 June–4 July 2025, GENOA, ITALY ICRES 2025 : 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics : 8–11 July 2025, SUWON, SOUTH KOREA IFAC Symposium on Robotics : 15–18 July 2025, PARIS RoboCup 2025 : 15–21 July 2025, BAHIA, BRAZIL RO-MAN 2025 : 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025 : 5–7 September 2025, SHENZHEN CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! Throwing is a fundamental skill that enables robots to manipulate objects in ways that extend beyond the reach of their arms. We present a control framework that combines learning and model-based control for prehensile whole-body throwing with legged mobile manipulators. This work provides an early demonstration of prehensile throwing with quantified accuracy on hardware, contributing to progress in dynamic whole-body manipulation. [ Paper ] from [ ETH Zurich ] As it turns out, in many situations humanoid robots don’t necessarily need legs at all. [ ROBOTERA ] Picking-in-Motion is a brand new feature as part of Autopicker 2.0. Instead of remaining stationary while picking an item, Autopicker begins traveling toward its next destination immediately after retrieving a storage tote – completing the pick while on the move. The robot then drops off the first storage tote at an empty slot near the next pick location before collecting the next tote. [ Brightpick ] Thanks, Gilmarie! I am pretty sure this is not yet real, but boy is it shiny. [ SoftBank ] via [ RobotStart ] Why use one thumb when you can instead use two thumbs? [ TU Berlin ] Kirigami offers unique opportunities for guided morphing by leveraging the geometry of the cuts. This work presents inflatable kirigami crawlers created by introducing cut patterns into heat-sealable textiles to achieve locomotion upon cyclic pneumatic actuation. We found that the kirigami actuators exhibit directional anisotropic friction properties when inflated, having higher friction coefficients against the direction of the movement, enabling them to move across surfaces with varying roughness. We further enhanced the functionality of inflatable kirigami actuators by introducing multiple channels and segments to create functional soft robotic prototypes with versatile locomotion capabilities. [ Paper ] from [ SDU Soft Robotics ] Lockheed Martin wants to get into the Mars Sample Return game for a mere US$3 billion. [ Lockheed Martin ] This is pretty gross and exactly what you want a robot to be doing: dealing with municipal solid waste. [ ZenRobotics ] Drag your mouse or move your phone to explore this 360-degree panorama provided by NASA’s Curiosity Mars rover. This view shows some of the rover’s first looks at a region that has only been viewed from space until now, and where the surface is crisscrossed with spiderweblike patterns. [ NASA Jet Propulsion Laboratory ] In case you were wondering, iRobot is still around. [ iRobot ] Legendary roboticist Cynthia Breazeal talks about the equally legendary Personal Robots Group at the MIT Media Lab. 
[ MIT Personal Robots Group ] In the first installment of our Moonshot Podcast Deep Dive video interview series, X’s Captain of Moonshots Astro Teller sits down with Sebastian Thrun, co-founder of the Moonshot Factory, for a conversation about the history of Waymo and Google X, the ethics of innovation, the future of AI, and more. [ Google X, The Moonshot Factory ]
evanackerman.bsky.social
How the Rubin Observatory Will Reinvent Astronomy https://spectrum.ieee.org/vera-rubin-observatory-first-images
How the Rubin Observatory Will Reinvent Astronomy
Night is falling on Cerro Pachón. Stray clouds reflect the last few rays of golden light as the sun dips below the horizon. I focus my camera across the summit to the westernmost peak of the mountain. Silhouetted within a dying blaze of red and orange light looms the sphinxlike shape of the Vera C. Rubin Observatory . “Not bad,” says William O’Mullane , the observatory’s deputy project manager, amateur photographer, and master of understatement. We watch as the sky fades through reds and purples to a deep, velvety black. It’s my first night in Chile. For O’Mullane, and hundreds of other astronomers and engineers, it’s the culmination of years of work, as the Rubin Observatory is finally ready to go “on sky.” Rubin is unlike any telescope ever built. Its exceptionally wide field of view, extreme speed, and massive digital camera will soon begin the 10-year Legacy Survey of Space and Time (LSST ) across the entire southern sky. The result will be a high-resolution movie of how our solar system, galaxy, and universe change over time, along with hundreds of petabytes of data representing billions of celestial objects that have never been seen before. Stars begin to appear overhead, and O’Mullane and I pack up our cameras. It’s astronomical twilight, and after nearly 30 years, it’s time for Rubin to get to work. On 23 June, the Vera C. Rubin Observatory released the first batch of images to the public. One of them, shown here, features a small section of the Virgo cluster of galaxies. Visible are two prominent spiral galaxies (lower right), three merging galaxies (upper right), several groups of distant galaxies, and many stars in the Milky Way galaxy. Created from over 10 hours of observing data, this image represents less than 2 percent of the field of view of a single Rubin image. NSF-DOE Rubin Observatory A second image reveals clouds of gas and dust in the Trifid and Lagoon nebulae, located several thousand light-years from Earth. It combines 678 images taken by the Rubin Observatory over just seven hours, revealing faint details—like nebular gas and dust—that would otherwise be invisible. NSF-DOE Rubin Observatory Engineering the Simonyi Survey Telescope The top of Cerro Pachón is not a big place. Spanning about 1.5 kilometers at 2,647 meters of elevation, its three peaks are home to the Southern Astrophysical Research Telescope (SOAR ), the Gemini South Telescope , and for the last decade, the Vera Rubin Observatory construction site. An hour’s flight north of the Chilean capital of Santiago, these foothills of the Andes offer uniquely stable weather. The Humboldt Current flows just offshore, cooling the surface temperature of the Pacific Ocean enough to minimize atmospheric moisture, resulting in some of the best “seeing,” as astronomers put it, in the world. It’s a complicated but exciting time to be visiting. It’s mid-April of 2025, and I’ve arrived just a few days before “first photon,” when light from the night sky will travel through the completed telescope and into its camera for the first time. In the control room on the second floor, engineers and astronomers make plans for the evening’s tests. O’Mullane and I head up into a high bay that contains the silvering chamber for the telescope’s mirrors and a clean room for the camera and its filters. Increasingly exhausting flights of stairs lead to the massive pier on which the telescope sits, and then up again into the dome. I suddenly feel very, very small. 
The Simonyi Survey Telescope towers above us—350 tonnes of steel and glass, nestled within the 30-meter-wide, 650-tonne dome. One final flight of stairs and we’re standing on the telescope platform. In its parked position, the telescope is pointed at horizon, meaning that it’s looking straight at me as I step in front of it and peer inside. The telescope’s enormous 8.4-meter primary mirror is so flawlessly reflective that it’s essentially invisible. Made of a single piece of low-expansion borosilicate glass covered in a 120-nanometer-thick layer of pure silver, the huge mirror acts as two different mirrors, with a more pronounced curvature toward the center. Standing this close means that different reflections of the mirrors, the camera, and the structure of the telescope all clash with one another in a way that shifts every time I move. I feel like if I can somehow look at it in just the right way, it will all make sense. But I can’t, and it doesn’t. I’m rescued from madness by O’Mullane snapping photos next to me. “Why?” I ask him. “You see this every day, right?” “This has never been seen before,” he tells me. “It’s the first time, ever, that the lens cover has been off the camera since it’s been on the telescope.” Indeed, deep inside the nested reflections I can see a blue circle, the r-band filter within the camera itself. As of today, it’s ready to capture the universe. Rubin’s Wide View Unveils the Universe Back down in the control room, I find director of construction Željko Ivezić. He’s just come up from the summit hotel, which has several dozen rooms for lucky visitors like myself, plus a few even luckier staff members. The rest of the staff commutes daily from the coastal town of La Serena, a 4-hour round trip. To me, the summit hotel seems luxurious for lodgings at the top of a remote mountain. But Ivezić has a slightly different perspective. “The European-funded telescopes,” he grumbles, “have swimming pools at their hotels. And they serve wine with lunch! Up here, there’s no alcohol. It’s an American thing.” He’s referring to the fact that Rubin is primarily funded by the U.S. National Science Foundation and the U.S. Department of Energy’s Office of Science , which have strict safety requirements. Originally, Rubin was intended to be a dark-matter survey telescope, to search for the 85 percent of the mass of the universe that we know exists but can’t identify. In the 1970s, astronomer Vera C. Rubin pioneered a spectroscopic method to measure the speed at which stars orbit around the centers of their galaxies, revealing motion that could be explained only by the presence of a halo of invisible mass at least five times the apparent mass of the galaxies themselves. Dark matter can warp the space around it enough that galaxies act as lenses, bending light from even more distant galaxies as it passes around them. It’s this gravitational lensing that the Rubin observatory was designed to detect on a massive scale. But once astronomers considered what else might be possible with a survey telescope that combined enormous light-collecting ability with a wide field of view, Rubin’s science mission rapidly expanded beyond dark matter. Trading the ability to focus on individual objects for a wide field of view that can see tens of thousands of objects at once provides a critical perspective for understanding our universe, says Ivezić. Rubin will complement other observatories like the Hubble Space Telescope and the James Webb Space Telescope . 
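As an aside, the rotation-curve measurement that gave the observatory its name can be reproduced on the back of an envelope. The sketch below is purely illustrative, using round, textbook-style numbers rather than anything from Rubin's data: it compares the orbital speeds a galaxy's visible mass alone would allow against the roughly flat speeds Vera Rubin measured, and computes how much total mass the flat curve implies.

```python
# Illustrative sketch (not Rubin pipeline code): why flat galaxy rotation
# curves imply unseen mass. All numbers are round, textbook-style values.
import numpy as np

G = 4.30e-6  # Newton's constant in kpc * (km/s)^2 per solar mass

def keplerian_speed(r_kpc, m_visible=1e11):
    """Circular speed if only the visible mass (in solar masses) were present."""
    return np.sqrt(G * m_visible / r_kpc)

def implied_enclosed_mass(r_kpc, v_kms):
    """Mass needed inside radius r to sustain circular speed v: M = v^2 r / G."""
    return v_kms**2 * r_kpc / G

radii = np.array([5.0, 10.0, 20.0, 40.0])   # distance from galactic center, kpc
v_visible = keplerian_speed(radii)           # what the visible stars alone predict
v_observed = 220.0                           # roughly flat, as Rubin's measurements showed

for r, v in zip(radii, v_visible):
    m = implied_enclosed_mass(r, v_observed)
    print(f"r = {r:4.0f} kpc | predicted {v:5.0f} km/s | "
          f"observed ~{v_observed:.0f} km/s | implied mass {m:.1e} Msun")
```

Run it and the implied enclosed mass at large radii comes out several times the visible mass, in line with the invisible-halo discrepancy described above.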
Hubble’s Wide Field Camera 3 and Webb’s Near Infrared Camera have fields of view of less than 0.05 square degrees each, equivalent to just a few percent of the size of a full moon. The upcoming Nancy Grace Roman Space Telescope will see a bit more, with a field of view of about one full moon. Rubin, by contrast, can image 9.6 square degrees at a time—about 45 full moons’ worth of sky. RELATED: A Trillion Rogue Planets and Not One Sun to Shine on Them That ultrawide view offers essential context, Ivezić explains. “My wife is American, but I’m from Croatia,” he says. “Whenever we go to Croatia, she meets many people. I asked her, ‘Did you learn more about Croatia by meeting many people very superficially, or because you know me very well?’ And she said, ‘You need both. I learn a lot from you, but you could be a weirdo, so I need a control sample.’ ” Rubin is providing that control sample, so that astronomers know just how weird whatever they’re looking at in more detail might be. Every night, the telescope will take a thousand images, one every 34 seconds. After three or four nights, it’ll have the entire southern sky covered, and then it’ll start all over again. After a decade, Rubin will have taken more than 2 million images, generated 500 petabytes of data, and visited every object it can see at least 825 times. In addition to identifying an estimated 6 million bodies in our solar system, 17 billion stars in our galaxy, and 20 billion galaxies in our universe, Rubin’s rapid cadence means that it will be able to delve into the time domain, tracking how the entire southern sky changes on an almost daily basis. Cutting-Edge Technology Behind Rubin’s Speed Achieving these science goals meant pushing the technical envelope on nearly every aspect of the observatory. But what drove most of the design decisions is the speed at which Rubin needs to move (3.5 degrees per second)—the phrase most commonly used by the Rubin staff is “crazy fast.” Crazy fast movement is why the telescope looks the way it does. The squat arrangement of the mirrors and camera centralizes as much mass as possible. Rubin’s oversize supporting pier is mostly steel rather than mostly concrete so that the movement of the telescope doesn’t twist the entire pier. And then there’s the megawatt of power required to drive this whole thing, which comes from huge banks of capacitors slung under the telescope to prevent a brownout on the summit every 30 seconds all night long. Rubin is also unique in that it utilizes the largest digital camera ever built. The size of a small car and weighing 2,800 kilograms, the LSST camera captures 3.2-gigapixel images through six swappable color filters ranging from near infrared to near ultraviolet. The camera’s focal plane consists of 189 4K-by-4K charge-coupled devices grouped into 21 “rafts.” Every CCD is backed by 16 amplifiers that each read 1 million pixels, bringing the readout time for the entire sensor down to 2 seconds flat. Astronomy in the Time Domain As humans with tiny eyeballs and short lifespans who are more or less stranded on Earth, we have only the faintest idea of how dynamic our universe is. To us, the night sky seems mostly static and also mostly empty. This is emphatically not the case. In 1995, the Hubble Space Telescope pointed at a small and deliberately unremarkable part of the sky for a cumulative six days. The resulting image, called the Hubble Deep Field , revealed about 3,000 distant galaxies in an area that represented just one twenty-four-millionth of the sky. 
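The field-of-view and cadence figures above can be cross-checked with a few lines of arithmetic. In the sketch below, the moon's apparent diameter, the roughly 18,000-square-degree survey footprint, and the two visits per field per pass are assumed round numbers for illustration, not values taken from the article.

```python
# Rough cross-checks on the field-of-view, camera, and cadence figures above.
# The survey footprint and visits-per-field values are assumptions for
# illustration, not official Rubin scheduling parameters.
import math

FIELD_SQDEG = 9.6                 # Rubin field of view
MOON_DIAMETER_DEG = 0.52          # assumed average apparent diameter of the moon

moon_area = math.pi * (MOON_DIAMETER_DEG / 2) ** 2
print(f"Full moons per Rubin field: {FIELD_SQDEG / moon_area:.0f}")        # ~45

# Camera: 189 CCDs of 4096 x 4096 pixels, 16 amplifiers per CCD.
pixels = 189 * 4096 * 4096
print(f"Camera pixels: {pixels / 1e9:.1f} gigapixels")                     # ~3.2
print(f"Pixels per amplifier: {pixels / (189 * 16) / 1e6:.0f} million")    # ~1

# Coverage: an image every 34 seconds, about a thousand images per night.
footprint = 18_000                # assumed survey footprint, square degrees
visits_per_field = 2              # assumed visits per field per sky pass
fields = footprint / FIELD_SQDEG
nights = fields * visits_per_field / 1_000
print(f"Fields to tile the footprint: {fields:.0f}")
print(f"Nights per full pass at 1,000 images/night: {nights:.1f}")         # ~3-4
```

The totals land close to the published figures: about 45 full moons per field, 3.2 gigapixels per image, and a complete pass over the southern sky every three or four nights.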
To observatories like Hubble, and now Rubin, the sky is crammed full of so many objects that it becomes a problem. As O’Mullane puts it, “There’s almost nothing not touching something.” One of Rubin’s biggest challenges will be deblending—­identifying and then separating things like stars and galaxies that appear to overlap. This has to be done carefully by using images taken through different filters to estimate how much of the brightness of a given pixel comes from each object. At first, Rubin won’t have this problem. At each location, the camera will capture one 30-second exposure before moving on. As Rubin returns to each location every three or four days, subsequent exposures will be combined in a process called coadding. In a coadded image, each pixel represents all of the data collected from that location in every previous image, which results in a much longer effective exposure time. The camera may record only a few photons from a distant galaxy in each individual image, but a few photons per image added together over 825 images yields much richer data. By the end of Rubin’s 10-year survey, the coadding process will generate images with as much detail as a typical Hubble image, but over the entire southern sky. A few lucky areas called “deep drilling fields ” will receive even more attention, with each one getting a staggering 23,000 images or more. Rubin will add every object that it detects to its catalog, and over time, the catalog will provide a baseline of the night sky, which the observatory can then use to identify changes. Some of these changes will be movement—Rubin may see an object in one place, and then spot it in a different place some time later, which is how objects like near-Earth asteroids will be detected. But the vast majority of the changes will be in brightness rather than movement. RELATED: Three Steps to Stopping Killer Asteroids Every image that Rubin collects will be compared with a baseline image, and any change will automatically generate a software alert within 60 seconds of when the image was taken. Rubin’s wide field of view means that there will be a lot of these alerts—on the order of 10,000 per image, or 10 million alerts per night. Other automated systems will manage the alerts. Called alert brokers, they ingest the alert streams and filter them for the scientific community. If you’re an astronomer interested in Type Ia supernovae, for example, you can subscribe to an alert broker and set up a filter so that you’ll get notified when Rubin spots one. Many of these alerts will be triggered by variable stars, which cyclically change in brightness. Rubin is also expected to identify somewhere between 3 million and 4 million supernovae —that works out to over a thousand new supernovae for every night of observing. And the rest of the alerts? Nobody knows for sure, and that’s why the alerts have to go out so quickly, so that other telescopes can react to make deeper observations of what Rubin finds. Managing Rubin’s Vast Data Output After the data leaves Rubin’s camera, most of the processing will take place at the SLAC National Accelerator Laboratory in Menlo Park, Calif., over 9,000 kilometers from Cerro Pachón. It takes less than 10 seconds for an image to travel from the focal plane of the camera to SLAC, thanks to a 600-gigabit fiber connection from the summit to La Serena, and from there, a dedicated 100-gigabit line and a backup 40-gigabit line that connect to the Department of Energy’s science network in the United States. 
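The bandwidth those links need follows directly from the camera numbers. Here is a minimal back-of-the-envelope sketch; the two bytes per pixel is an assumed raw sample size, not an official figure.

```python
# Rough data-rate arithmetic from the figures quoted above.
# Bytes-per-pixel is an assumption for illustration (16-bit raw samples).
GIGAPIXELS = 3.2          # LSST camera image size
BYTES_PER_PIXEL = 2       # assumed raw readout depth
LINK_GBPS = 100           # dedicated line from the summit to the United States

image_bytes = GIGAPIXELS * 1e9 * BYTES_PER_PIXEL
print(f"Raw image size: ~{image_bytes / 1e9:.1f} GB")                  # ~6.4 GB

wire_seconds = image_bytes * 8 / (LINK_GBPS * 1e9)
print(f"Time on the wire at {LINK_GBPS} Gb/s: ~{wire_seconds:.1f} s")  # ~0.5 s

# Alert volume: ~10,000 alerts per image, ~1,000 images per night.
print(f"Alerts per night: ~{10_000 * 1_000:,}")                        # 10,000,000
```

On these assumptions the wire time is a small slice of the 60-second alert budget; most of that budget goes to readout, image differencing, and distribution rather than to the network itself.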
The 20 terabytes of data that Rubin will produce nightly makes this bandwidth necessary. “There’s a new image every 34 seconds,” O’Mullane tells me. “If I can’t deal with it fast enough, I start to get behind. So everything has to happen on the cadence of half a minute if I want to keep up with the data flow.” At SLAC, each image will be calibrated and cleaned up, including the removal of satellite trails. Rubin will see a lot of satellites, but since the satellites are unlikely to appear in the same place in every image, the impact on the data is expected to be minimal when the images are coadded. The processed image is compared with a baseline image and any alerts are sent out, by which time processing of the next image has already begun. As Rubin’s catalog of objects grows, astronomers will be able to query it in all kinds of useful ways. Want every image of a particular patch of sky? No problem. All the galaxies of a certain shape? A little trickier, but sure. Looking for 10,000 objects that are similar in some dimension to 10,000 other objects? That might take a while, but it’s still possible. Astronomers can even run their own code on the raw data. “Pretty much everyone in the astronomy community wants something from Rubin,” O’Mullane explains, “and so they want to make sure that we’re treating the data the right way. All of our code is public. It’s on GitHub . You can see what we’re doing, and if you’ve got a better solution, we’ll take it.” One better solution may involve AI. “I think as a community we’re struggling with how we do this,” says O’Mullane. “But it’s probably something we ought to do—curating the data in such a way that it’s consumable by machine learning, providing foundation models, that sort of thing.” The data management system is arguably as much of a critical component of the Rubin observatory as the telescope itself. While most telescopes make targeted observations that get distributed to only a few astronomers at a time, Rubin will make its data available to everyone within just a few days, which is a completely different way of doing astronomy. “We’ve essentially promised that we will take every image of everything that everyone has ever wanted to see,” explains Kevin Reil , Rubin observatory scientist. “If there’s data to be collected, we will try to collect it. And if you’re an astronomer somewhere, and you want an image of something, within three or four days we’ll give you one. It’s a colossal challenge to deliver something on this scale.” The more time I spend on the summit, the more I start to think that the science that we know Rubin will accomplish may be the least interesting part of its mission. And despite their best efforts, I get the sense that everyone I talk to is wildly understating the impact it will have on astronomy. The sheer volume of objects, the time domain, the 10 years of coadded data—what new science will all of that reveal? Astronomers have no idea, because we’ve never looked at the universe in this way before. To me, that’s the most fascinating part of what’s about to happen. Reil agrees. “You’ve been here,” he says. “You’ve seen what we’re doing. It’s a paradigm shift, a whole new way of doing things. It’s still a telescope and a camera, but we’re changing the world of astronomy. I don’t know how to capture—I mean, it’s the people, the intensity, the awesomeness of it. 
I want the world to understand the beauty of it all.” The Intersection of Science and Engineering Because nobody has built an observatory like Rubin before, there are a lot of things that aren’t working exactly as they should, and a few things that aren’t working at all. The most obvious of these is the dome. The capacitors that drive it blew a fuse the day before I arrived, and the electricians are off the summit for the weekend. The dome shutter can’t open either. Everyone I talk to takes this sort of thing in stride—they have to, because they’ve been troubleshooting issues like these for years. I sit down with Yousuke Utsumi , a camera operations scientist who exudes the mixture of excitement and exhaustion that I’m getting used to seeing in the younger staff. “Today is amazingly quiet,” he tells me. “I’m happy about that. But I’m also really tired. I just want to sleep.” Just yesterday, Utsumi says, they managed to finally solve a problem that the camera team had been struggling with for weeks—an intermittent fault in the camera cooling system that only seemed to happen when the telescope was moving. This was potentially a very serious problem, and Utsumi’s phone would alert him every time the fault occurred, over and over again in the middle of the night. The fault was finally traced to a cable within the telescope’s structure that used pins that were slightly too small, leading to a loose connection. Utsumi’s contract started in 2017 and was supposed to last three years, but he’s still here. “I wanted to see first photon,” he says. “I’m an astronomer. I’ve been working on this camera so that it can observe the universe. And I want to see that light, from those photons from distant galaxies.” This is something I’ve also been thinking about—those lonely photons traveling through space for billions of years, and within the coming days, a lucky few of them will land on the sensors Utsumi has been tending, and we’ll get to see them. He nods, smiling. “I don’t want to lose one, you know?” Rubin’s commissioning scientists have a unique role, working at the intersection of science and engineering to turn a bunch of custom parts into a functioning science instrument. Commissioning scientist Marina Pavlovic is a postdoc from Serbia with a background in the formation of supermassive black holes created by merging galaxies. “I came here last year as a volunteer,” she tells me. “My plan was to stay for three months, and 11 months later I’m a commissioning scientist. It’s crazy!” Pavlovic’s job is to help diagnose and troubleshoot whatever isn’t working quite right. And since most things aren’t working quite right, she’s been very busy. “I love when things need to be fixed because I am learning about the system more and more every time there’s a problem—every day is a new experience here.” I ask her what she’ll do next, once Rubin is up and running. “If you love commissioning instruments, that is something that you can do for the rest of your life, because there are always going to be new instruments,” she says. Before that happens, though, Pavlovic has to survive the next few weeks of going on sky. “It’s going to be so emotional. It’s going to be the beginning of a new era in astronomy, and knowing that you did it, that you made it happen, at least a tiny percent of it, that will be a priceless moment.” “I had to learn how to calm down to do this job,” she admits, “because sometimes I get too excited about things and I cannot sleep after that. But it’s okay. 
I started doing yoga, and it’s working.” From First Photon to First Light My stay on the summit comes to an end on 14 April, just a day before first photon, so as soon as I get home I check in with some of the engineers and astronomers that I met to see how things went. Guillem Megias Homar manages the adaptive optics system—232 actuators that flex the surfaces of the telescope’s three mirrors a few micrometers at a time to bring the image into perfect focus. Currently working on his Ph.D., he was born in 1997, one year after the Rubin project started. First photon, for him, went like this: “I was in the control room, sitting next to the camera team. We have a microphone on the camera, so that we can hear when the shutter is moving. And we hear the first click. And then all of a sudden, the image shows up on the screens in the control room, and it was just an explosion of emotions. All that we have been fighting for is finally a reality. We are on sky!” There were toasts (with sparkling apple juice, of course), and enough speeches that Megias Homar started to get impatient: “I was like, when can we start working? But it was only an hour, and then everything became much more quiet.” Another newly released image showing a small section of the Rubin Observatory’s total view of the Virgo cluster of galaxies. Visible are bright stars in the Milky Way galaxy shining in the foreground, and many distant galaxies in the background. NSF-DOE Rubin Observatory “It was satisfying to see that everything that we’d been building was finally working,” Victor Krabbendam , project manager for Rubin construction, tells me a few weeks later. “But some of us have been at this for so long that first photon became just one of many firsts.” Krabbendam has been with the observatory full-time for the last 21 years. “And the very moment you succeed with one thing, it’s time to be doing the next thing.” Since first photon, Rubin has been undergoing calibrations, collecting data for the first images that it’s now sharing with the world, and preparing to scale up to begin its survey. Operations will soon become routine, the commissioning scientists will move on, and eventually, Rubin will largely run itself, with just a few people at the observatory most nights. But for astronomers, the next 10 years will be anything but routine. “It’s going to be wildly different,” says Krabbendam. “Rubin will feed generations of scientists with trillions of data points of billions of objects. Explore the data. Harvest it. Develop your idea, see if it’s there. It’s going to be phenomenal.” Listen to a Conversation About the Rubin Observatory As part of an experiment with AI storytelling tools, author Evan Ackerman—who visited the Vera C. Rubin Observatory in Chile for four days this past April—fed over 14 hours of raw audio from his interviews and other reporting notes into NotebookLM , an AI-powered research assistant developed by Google. The result is a podcast-style audio experience that you can listen to here. While the script and voices are AI-generated, the conversation is grounded in Ackerman’s original reporting, and includes many details that did not appear in the article above. Ackerman reviewed and edited the audio to ensure accuracy, and there are minor corrections in the transcript. Let us know what you think of this experiment in AI narration. Your browser does not support the audio tag. See transcript 0:01: Today we’re taking a deep dive into the engineering marvel that is the Vera C. Rubin Observatory. 
0:06: And and it really is a marvel. 0:08: This project pushes the limits, you know, not just for the science itself, like mapping the Milky Way or exploring dark energy, which is amazing, obviously. 0:16: But it’s also pushing the limits in just building the tools, the technical ingenuity, the, the sheer human collaboration needed to make something this complex actually work. 0:28: That’s what’s really fascinating to me. 0:29: Exactly. 0:30: And our mission for this deep dive is to go beyond the headlines, isn’t it? 0:33: We want to uncover those specific Kind of hidden technical details, the stuff from the audio interviews, the internal docs that really define this observatory. 0:41: The clever engineering solutions. 0:43: Yeah, the nuts and bolts, the answers to challenges nobody’s faced before, stuff that anyone who appreciates, you know, complex systems engineering would find really interesting. 0:53: Definitely. 0:54: So let’s start right at the heart of it. 0:57: The Simonyi survey telescope itself. 1:00: It’s this 350 ton machine inside a 600 ton dome, 30 m wide, huge. [The dome is closer to 650 tons.] 1:07: But the really astonishing part is its speed, speed and precision. 1:11: How do you even engineer something that massive to move that quickly while keeping everything stable down to the submicron level? [Micron level is more accurate.] 1:18: Well, that’s, that’s the core challenge, right? 1:20: This telescope, it can hit a top speed of 3.5 degrees per second. 1:24: Wow. 1:24: Yeah, and it can, you know, move to basically any point in the sky. 1:28: In under 20 seconds, 20 seconds, which makes it by far the fastest moving large telescope ever built, and the dome has to keep up. 1:36: So it’s also the fastest moving dome. 1:38: So the whole building is essentially racing along with the telescope. 1:41: Exactly. 1:41: And achieving that meant pretty much every component had to be custom designed like the pier holding the telescope up. 1:47: It’s mostly steel, not concrete. 1:49: Oh, interesting. 1:50: Why steel? 1:51: Specifically to stop it from twisting or vibrating when the telescope makes those incredibly fast moves. 1:56: Concrete just wouldn’t handle the torque the same way. [The pier is more steel than concrete, but it's still substantially concrete.] 1:59: OK, that makes sense. 1:59: And the power needed to accelerate and decelerate, you know, 300 tons, that must be absolutely massive. 2:06: Oh. 2:06: The instantaneous draw would be enormous. 2:09: How did they manage that without like dimming the lights on the whole. 2:12: Mountaintop every 30 seconds. 2:14: Yeah, that was a real concern, constant brownouts. 2:17: The solution was actually pretty elegant, involving these onboard capacitor banks. 2:22: Yep, slung right underneath the telescope structure. 2:24: They can slowly sip power from the grid, store it up over time, and then bam, discharge it really quickly for those big acceleration surges. 2:32: like a giant camera flash, but for moving a telescope, of yeah. 2:36: It smooths out the demand, preventing those grid disruptions. 2:40: Very clever engineering. 2:41: And beyond the movement, the mirrors themselves, equally critical, equally impressive, I imagine. 2:47: How did they tackle designing and making optics that large and precise? 2:51: Right, so the main mirror, the primary mirror, M1M3. 2:55: It’s a single piece of glass, 8.4 m across, low expansion borosilicate glass. 3:01: And that 8.4 m size, was that just like the biggest they could manage? 
3:05: Well, it was a really crucial early decision. 3:07: The science absolutely required something at least 7 or 8 m wide. 3:13: But going much bigger, say 10 or 12 m, the logistics became almost impossible. 3:19: The big one was transport. 3:21: There’s a tunnel on the mountain road up to the summit, and a mirror, much larger than 8.4 m, physically wouldn’t fit through it. 3:28: No way. 3:29: So the tunnel actually set an upper limit on the mirror size. 3:31: Pretty much, yeah. 3:32: Building a new road or some other complex transport method. 3:36: It would have added enormous cost and complexity. 3:38: So 8.4 m was that sweet spot between scientific need. 3:42: And, well, physical reality. 3:43: Wow, a real world constraint driving fundamental design. 3:47: And the mirror itself, you said M1 M3, it’s not just one simple mirror surface. 3:52: Correct. 3:52: It’s technically two mirror surfaces ground into that single piece of glass. 3:57: The central part has a more pronounced curvature. 3:59: It’s M1 and M3 combined. 4:00: OK, so fabricating that must have been tricky, especially with what, 10 tons of glass just in the center. 4:07: Oh, absolutely novel and complicated. 4:09: And these mirrors, they don’t support their own weight rigidly. 4:12: So just handling them during manufacturing, polishing, even getting them out of the casting mold, was a huge engineering challenge. 4:18: You can’t just lift it like a dinner plate. 4:20: Not quite, and then there’s maintaining it, re-silvering. 4:24: They hope to do it every 5 years. 4:26: Well, traditionally, big mirrors like this often need it more, like every 1.5 to 2 years, and it’s a risky weeks-long job. 4:34: You have to unbolt this priceless, unique piece of equipment, move it. 4:39: It’s nerve-wracking. 4:40: I bet. 4:40: And the silver coating itself is tiny, right? 4:42: Incredibly thin, just a few nanometers of pure silver. 4:46: It takes about 24 g for the whole giant surface, bonded with the adhesive layers that are measured in Angstroms. [It's closer to 26 grams of silver.] 4:52: It’s amazing precision. 4:54: So tying this together, you have this fast moving telescope, massive mirrors. 4:59: How do they keep everything perfectly focused, especially with multiple optical elements moving relative to each other? 5:04: that’s where these things called hexapods come in. 5:08: Really crucial bits of kit. 5:09: Hexapods, like six feet? 5:12: Sort of. 5:13: They’re mechanical systems with 6 adjustable arms or struts. 5:17: A simpler telescope might just have one maybe on the camera for basic focusing, but Rubin needs more because it’s got the 3 mirrors plus the camera. 5:25: Exactly. 5:26: So there’s a hexapod mounted on the secondary mirror, M2. 5:29: Its job is to keep M2 perfectly positioned relative to M1 and M3, compensating for tiny shifts or flexures. 5:36: And then there’s another hexapod on the camera itself. 5:39: That one adjusts the position and tilt of the entire camera’s sensor plane, the focal plane. 5:43: To get that perfect focus across the whole field of view. 5:46: And these hexapods move in 6 ways. 5:48: Yep, 6 degrees of freedom. 5:50: They can adjust position along the X, Y, and Z axis, and they can adjust rotation or tilt around those 3 axes as well. 5:57: It allows for incredibly fine adjustments, micron precision stuff. 6:00: So they’re constantly making these tiny tweaks as the telescope moves. 6:04: Constantly. 6:05: The active optics system uses them.
6:07: It calculates the needed corrections based on reference stars in the images, figures out how the mirror might be slightly bending. 6:13: And then tells the hexapods how to compensate. 6:15: It’s controlling like 26 g of silver coating on the mirror surface down to micron precision, using the mirror’s own natural bending modes. 6:24: It’s pretty wild. 6:24: Incredible. 6:25: OK, let’s pivot to the camera itself. 6:28: The LSST camera. 6:29: Biggest digital camera ever built, right? 6:31: Size of a small car, 2800 kg, captures 3.2 gigapixel images, just staggering numbers. 6:38: They really are, and the engineering inside is just as staggering. 6:41: That focal plane where the light actually hits. 6:43: It’s made up of 189 individual CCD sensors. 6:47: Yep, 4K by 4K CCDs grouped into 21 rafts. 6:50: They fit together like tiles, and each CCD has 16 amplifiers reading it out. 6:54: Why so many amplifiers? 6:56: Speed. 6:56: Each amplifier reads out about a million pixels. 6:59: By dividing the job up like that, they can read out the entire 3.2 gigapixel sensor in just 2 seconds. 7:04: 2 seconds for that much data. 7:05: Wow. 7:06: It’s essential for the survey’s rapid cadence. 7:09: Getting all those 189 CCDs perfectly flat must have been, I mean, are they delicate? 7:15: Unbelievably delicate. 7:16: They’re silicon wafers only 100 microns thick. 7:18: How thick is that really? 7:19: about the thickness of a human hair. 7:22: You could literally break one by breathing on it wrong, apparently, seriously, yeah. 7:26: And the challenge was aligning all 189 of them across this 650 millimeter wide focal plane, so the entire surface is flat. 7:34: To within just 24 microns, peak to valley. 7:37: 24 microns. 7:39: That sounds impossibly flat. 7:40: It’s like, imagine the entire United States. 7:43: Now imagine the difference between the lowest point and the highest point across the whole country was only 100 ft. 7:49: That’s the kind of relative flatness they achieved on the camera sensor. 7:52: OK, that puts it in perspective. 7:53: And why is that level of flatness so critical? 7:56: Because the telescope focuses light. 7:58: terribly. 7:58: It’s an F1.2 system, which means it has a very shallow depth of field. 8:02: If the sensors aren’t perfectly in that focal plane, even by a few microns, parts of the image go out of focus. 8:08: Gotcha. 8:08: And the pixels themselves, the little light buckets on the CCDs, are they special? 8:14: They’re custom made, definitely. 8:16: They settled on 10 micron pixels. 8:18: They figured anything smaller wouldn’t actually give them more useful scientific information. 8:23: Because you start hitting the limits of what the atmosphere and the telescope optics themselves can resolve. 8:28: So 10 microns was the optimal size, right? 8:31: balancing sensor tech with physical limits. 8:33: Now, keeping something that sensitive cool, that sounds like a nightmare, especially with all those electronics. 8:39: Oh, it’s a huge thermal engineering challenge. 8:42: The camera actually has 3 different cooling zones, 3 distinct temperature levels inside. 8:46: 3. 8:47: OK. 8:47: First, the CCDs themselves. 8:49: They need to be incredibly cold to minimize noise. 8:51: They operate at -125 °C. 8:54: -125C, how do they manage that? 8:57: With a special evaporator plate connected to the CCD rafts by flexible copper braids, which pulls heat away very effectively. 9:04: Then you’ve got the camera’s electronics, the readout boards and stuff.
9:07: They run cooler than room temp, but not that cold, around -50 °C. 9:12: OK. 9:12: That requires a separate liquid cooling loop delivered through these special vacuum insulated tubes to prevent heat leaks. 9:18: And the third zone. 9:19: That’s for the electronics in the utility trunk at the back of the camera. 9:23: They generate a fair bit of heat, about 3000 watts, like a few hair dryers running constantly. 9:27: Exactly. 9:28: So there’s a third liquid cooling system just for them, keeping them just slightly below the ambient room temperature in the dome. 9:35: And all this cooling, it’s not just to keep the parts from overheating, right? 9:39: It affects the images, absolutely critical for image quality. 9:44: If the outer surface of the camera body itself is even slightly warmer or cooler than the air inside the dome, it creates tiny air currents, turbulence right near the light path. 9:57: And that shows up as little wavy distortions in the images, messing up the precision. 10:02: So even the outside temperature of the camera matters. 10:04: Yep, it’s not just a camera. 10:06: They even have to monitor the heat generated by the motors that move the massive dome, because that heat could potentially cause enough air turbulence inside the dome to affect the image quality too. 10:16: That’s incredible attention to detail, and the camera interior is a vacuum you mentioned. 10:21: Yes, a very strong vacuum. 10:23: They pump it down about once a year, first using turbopumps spinning at like 80,000 RPM to get it down to about 10^-2 torr. 10:32: Then they use other methods to get it down much further. 10:34: To 10^-7 torr, that’s an ultra high vacuum. 10:37: Why the vacuum? 10:37: Keep frost off the cold part. 10:39: Exactly. 10:40: Prevents condensation and frost on those negative 125 °C CCDs and generally ensures everything works optimally. 10:47: For normal operation, day to day, they use something called an ion pump. 10:51: How does that work? 10:52: It basically uses a strong electric field to ionize any stray gas molecules, mostly hydrogen, and trap them, effectively removing them from the vacuum space, very efficient for maintaining that ultra-high vacuum. 11:04: OK, so we have this incredible camera taking these massive images every few seconds. 11:08: Once those photons hit the CCDs and become digital signals, what happens next? 11:12: How does Rubin handle this absolute flood of data? 11:15: Yeah, this is where Rubin becomes, you know, almost as much a data processing machine as a telescope. 11:20: It’s designed for the data output. 11:22: So photons hit the CCDs, get converted to electrical signals. 11:27: Then, interestingly, they get converted back into light signals, photonic signals back to light. 11:32: Why? 11:33: To send them over fiber optics. 11:34: There are about 6 kilometers of fiber optic cable running through the observatory building. 11:39: These signals go to FPGA boards, field programmable gate arrays in the data acquisition system. 11:46: OK. 11:46: And those FPGAs are basically assembling the complete image data packages from all the different CCDs and amplifiers. 11:53: That sounds like a fire hose of data leaving the camera. 11:56: How does it get off the mountain and where does it need to go? 11:58: And what about all the like operational data, temperatures, positions? 12:02: Good question.
12:03: There are really two main data streams all that telemetry you mentioned, sensor readings, temperatures, actuator positions, command set, everything about the state of the observatory that all gets collected into something called the Engineering facility database or EFD. 12:16: They use Kafka for transmitting that data. 12:18: It’s good for high volume streams, and they store it in an InfluxDB database, which is great for time series data like sensor readings. 12:26: And astronomers can access that. 12:28: Well, there’s actually a duplicate copy of the EFD down at SLAC, the research center in California. 12:34: So scientists and engineers can query that copy without bogging down the live system running on the mountain. 12:40: Smart. 12:41: How much data are we talking about there? 12:43: For the engineering data, it’s about 20 gigabytes per night, and they plan to keep about a year’s worth online. 12:49: OK. 12:49: And the image data, the actual science pixels. 12:52: That takes a different path. [All of the data from Rubin to SLAC travels over the same network.] 12:53: It travels over dedicated high-speed network links, part of ESnet, the research network, all the way from Chile, usually via Boca Raton, Florida, then Atlanta, before finally landing at SLAC. 13:05: And how fast does that need to be? 13:07: The goal is super fast. 13:09: They aim to get every image from the telescope in Chile to the data center at SLAC within 7 seconds of the shutter closing. 13:15: 7 seconds for gigabytes of data. 13:18: Yeah. 13:18: Sometimes network traffic bumps it up to maybe 30 seconds or so, but the target is 7. 13:23: It’s crucial for the next step, which is making sense of it all. 13:27: How do astronomers actually use this, this torrent of images and data? 13:30: Right. 13:31: This really changes how astronomy might be done. 13:33: Because Rubin is designed to generate alerts, real-time notifications about changes in the sky. 13:39: Alerts like, hey, something just exploded over here. 13:42: Pretty much. 13:42: It takes an image compared to the previous images of the same patch of sky and identifies anything that’s changed, appeared, disappeared, moved, gotten brighter, or fainter. 13:53: It expects to generate about 10,000 such alerts per image. 13:57: 10,000 per image, and they take an image every 20 seconds or so on average, including readouts. [Images are taken every 34 seconds: a 30 second exposure, and then about 4 seconds for the telescope to move and settle.] 14:03: So you’re talking around 10 million alerts every single night. 14:06: 10 million a night. 14:07: Yep. 14:08: And the goal is to get those alerts out to the world within 60 seconds of the image being taken. 14:13: That’s insane. 14:14: What’s in an alert? 14:15: It contains the object’s position, brightness, how it’s changed, and little cut out images, postage stamps in the last 12 months of observations, so astronomers can quickly see the history. 14:24: But surely not all 10 million are real astronomical events satellites, cosmic rays. 14:30: Exactly. 14:31: The observatory itself does a first pass filter, masking out known issues like satellite trails, cosmic ray hits, atmospheric effects, with what they call real bogus stuff. 14:41: OK. 14:42: Then, this filtered stream of potentially real alerts goes out to external alert brokers. 14:49: These are systems run by different scientific groups around the world. 14:52: Yeah, and what do the brokers do?
14:53: They ingest the huge stream from Rubin and apply their own filters, based on what their particular community is interested in. 15:00: So an astronomer studying supernovae can subscribe to a broker that filters just for likely supernova candidates. 15:06: Another might filter for near Earth asteroids or specific types of variable stars. 15:12: so it makes the fire hose manageable. 15:13: You subscribe to the trickle you care about. 15:15: Precisely. 15:16: It’s a way to distribute the discovery potential across the whole community. 15:19: So it’s not just raw images astronomers get, but these alerts and presumably processed data too. 15:25: Oh yes. 15:26: Rubin provides the raw images, but also fully processed images, corrected for instrument effects, calibrated, called processed visit images. 15:34: And also template images, deep combinations of previous images used for comparison. 15:38: And managing all that data, 15 petabytes you mentioned, how do you query that effectively? 15:44: They use a system called Keyserve. [The system is "QServ."] 15:46: It’s a distributed relational database, custom built basically, designed to handle these enormous astronomical catalogs. 15:53: The goal is to let astronomers run complex searches across maybe 15 petabytes of catalog data and get answers back in minutes, not days or weeks. 16:02: And how do individual astronomers actually interact with it? 16:04: Do they download petabytes? 16:06: No, definitely not. 16:07: For general access, there’s a science platform, the front end of which runs on Google Cloud. 16:11: Users interact mainly through Jupyter notebooks. 16:13: Python notebooks, familiar territory for many scientists. 16:17: Exactly. 16:18: They can write arbitrary Python code, access the catalogs directly, do analysis for really heavy duty stuff like large scale batch processing. 16:27: They can submit jobs to the big compute cluster at SLAC, which sits right next to the data storage. 16:33: That’s much more efficient. 16:34: Have they tested this? 16:35: Can it handle thousands of astronomers hitting it at once? 16:38: They’ve done extensive testing, yeah, scaled it up with hundreds of users already, and they seem confident they can handle up to maybe 3000 simultaneous users without issues. 16:49: And a key point. 16:51: After an initial proprietary period for the main survey team, all the data and importantly, all the software algorithms used to process it become public. 17:00: Open source algorithms too. 17:01: Yes, the idea is, if the community can improve on their processing pipelines, they’re encouraged to contribute those solutions back. 17:08: It’s meant to be a community resource. 17:10: That open approach is fantastic, and even the way the images are presented visually has some deep thought behind it, doesn’t it? 17:15: You mentioned Robert Lupton’s perspective. 17:17: Yes, this is fascinating. 17:19: It’s about how you assign color to astronomical images, which usually combine data from different filters, like red, green, blue. 17:28: It’s not just about making pretty pictures, though they can be beautiful. 17:31: Right, it should be scientifically meaningful. 17:34: Exactly. 17:35: Lupton’s approach tries to preserve the inherent color information in the data. 17:40: Many methods saturate bright objects, making their centers just white blobs. 17:44: Yeah, you see that a lot. 17:46: His algorithm uses a different mathematical scaling, more like a logarithmic scale, that avoids this saturation.
17:52: It actually propagates the true color information back into the centers of bright stars and galaxies. 17:57: So, a galaxy that’s genuinely redder, because it’s red shifted, will actually look redder in the image, even in its bright core. 18:04: Precisely, in a scientifically meaningful way. 18:07: Even if our eyes wouldn’t perceive it quite that way directly through a telescope, the image renders the data faithfully. 18:13: It helps astronomers visually interpret the physics. 18:15: It’s a subtle but powerful detail in making the data useful. 18:19: It really is. 18:20: Beyond just taking pictures, I heard Rubin’s wide view is useful for something else entirely gravitational waves. 18:26: That’s right. 18:26: It’s a really cool synergy. 18:28: Gravitational wave detectors like LIGO and Virgo, they detect ripples in space-time, often from merging black holes or neutron stars, but they usually only narrow down the location to a relatively large patch of sky, maybe 10 square degrees or sometimes much more. 18:41: Rubin’s camera has a field of view of about 9.6 square degrees. 18:45: That’s huge for a telescope. 18:47: It almost perfectly matches the typical LIGO alert area. 18:51: so when LIGO sends an alert, Rubin can quickly scan that whole error box, maybe taking just a few pointings, looking for any new point of light. 19:00: The optical counterpart, the kilonova explosion, or whatever light accompanies the gravitational wave event. 19:05: It’s a fantastic follow-up machine. 19:08: Now, stepping back a bit, this whole thing sounds like a colossal integration challenge. 19:13: A huge system of systems, many parts custom built, pushed to their limits. 19:18: What were some of those big integration hurdles, bringing it all together? 19:22: Yeah, classic system of systems is a good description. 19:25: And because nobody’s built an observatory quite like this before, a lot of the commissioning phase, getting everything working together involves figuring out the procedures as they go. 19:34: Learning by doing on a massive scale. 19:36: Pretty much. 19:37: They’re essentially, you know, teaching the system how to walk. 19:40: And there’s this constant tension, this balancing act. 19:43: Do you push forward, maybe build up some technical debt, things you know you’ll have to fix later, or do you stop and make sure every little issue is 100% perfect before moving on, especially with a huge distributed team? 19:54: I can imagine. 19:55: And you mentioned the dome motors earlier. 19:57: That discovery about heat affecting images sounds like a perfect example of unforeseen integration issues. 20:03: Exactly. 20:03: Marina Pavlovic described that. 20:05: They ran the dome motors at full speed, something maybe nobody had done for extended periods in that exact configuration before, and realized, huh. 20:13: The heat these generate might actually cause enough air turbulence to mess with our image quality. 20:19: That’s the kind of thing you only find when you push the integrated system. 20:23: Lots of unexpected learning then. 20:25: What about interacting with the outside world? 20:27: Other telescopes, the atmosphere itself? 20:30: How does Rubin handle atmospheric distortion, for instance? 20:33: that’s another interesting point. 20:35: Many modern telescopes use lasers. 20:37: They shoot a laser up into the sky to create an artificial guide star, right, to measure. 20:42: Atmospheric turbulence. 20:43: Exactly. 20:44: Then they use deformable mirrors to correct for that turbulence in real time.
20:48: But Rubin cannot use a laser like that. 20:50: Why? 20:51: Because its field of view is enormous. 20:53: It sees such a wide patch of sky at once. 20:55: A single laser beam, even a pinpoint from another nearby observatory, would contaminate a huge fraction of Rubin’s image. 21:03: It would look like a giant streak across, you know, a quarter of the sky for Rubin. 21:06: Oh, wow. 21:07: OK. 21:08: Too much interference. 21:09: So how does it correct for the atmosphere? 21:11: Software. 21:12: It uses a really clever approach called forward modeling. 21:16: It looks at the shapes of hundreds of stars across its wide field of view in each image. 21:21: It knows what those stars should look like, theoretically. 21:25: Then it builds a complex mathematical model of the atmosphere’s distorting effect across the entire field of view that would explain the observed star shapes. 21:33: It iterates this model hundreds of times per image until it finds the best fit. [The model is created by iterating on the image data, but iteration is not necessary for every image.] 21:38: Then it uses that model to correct the image, removing the atmospheric blurring. 21:43: So it calculates the distortion instead of measuring it directly with a laser. 21:46: Essentially, yes. 21:48: Now, interestingly, there is an auxiliary telescope built alongside Rubin, specifically designed to measure atmospheric properties independently. 21:55: Oh, so they could use that data. 21:57: They could, but currently, they’re finding their software modeling approach using the science images themselves, works so well that they aren’t actively incorporating the data from the auxiliary telescope for that correction right now. 22:08: The software solution is proving powerful enough on its own. 22:11: Fascinating. 22:12: And they still have to coordinate with other telescopes about their lasers, right? 22:15: Oh yeah. 22:15: They have agreements about when nearby observatories can point their lasers, and sometimes Rubin might have to switch to a specific filter like the i-band, which is less sensitive to the laser. 22:25: Light if one is active nearby while they’re trying to focus. 22:28: So many interacting systems. 22:30: What an incredible journey through the engineering of Rubin. 22:33: Just the sheer ingenuity from the custom steel pier and the capacitor banks, the hexapods, that incredibly flat camera, the data systems. 22:43: It’s truly a machine built to push boundaries. 22:45: It really is. 22:46: And it’s important to remember, this isn’t just, you know, a bigger version of existing telescopes. 22:51: It’s a fundamentally different kind of machine. 22:53: How so? 22:54: By creating this massive all-purpose data set, imaging the entire southern sky over 800 times, cataloging maybe 40 billion objects, it shifts the paradigm. 23:07: Astronomy becomes less about individual scientists applying for time to point a telescope at one specific thing and more about statistical analysis, about mining this unprecedented ocean of data that Rubin provides to everyone. 23:21: So what does this all mean for us, for science? 23:24: Well, it’s a generational investment in fundamental discovery. 23:27: They’ve optimized this whole system, the telescope, the camera, the data pipeline. 23:31: For finding, quote, exactly the stuff we don’t know we’ll find. 23:34: Optimized for the unknown, I like that. 23:36: Yeah, we’re basically generating this incredible resource that will feed generations of astronomers and astrophysicists.
23:42: They’ll explore it, they’ll harvest discoveries from it, they’ll find patterns and objects and phenomena within billions and billions of data points that we can’t even conceive of yet. 23:50: And that really is the ultimate excitement, isn’t it? 23:53: Knowing that this monumental feat of engineering isn’t just answering old questions, but it’s poised to open up entirely new questions about the universe, questions we literally don’t know how to ask today. 24:04: Exactly. 24:05: So, for you, the listener, just think about that. 24:08: Consider the immense, the completely unknown discoveries that are waiting out there just waiting to be found when an entire universe of data becomes accessible like this. 24:16: What might we find?
spectrum.ieee.org
evanackerman.bsky.social
Video Friday: Jet-Powered Humanoid Robot Lifts Off https://spectrum.ieee.org/video-friday-jet-powered-robot
Video Friday: Jet-Powered Humanoid Robot Lifts Off
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RSS 2025 : 21–25 June 2025, LOS ANGELES ETH Robotics Summer School : 21–27 June 2025, GENEVA IAS 2025 : 30 June–4 July 2025, GENOA, ITALY ICRES 2025 : 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics : 8–11 July 2025, SUWON, SOUTH KOREA IFAC Symposium on Robotics : 15–18 July 2025, PARIS RoboCup 2025 : 15–21 July 2025, BAHIA, BRAZIL RO-MAN 2025 : 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025 : 5–7 September 2025, SHENZHEN CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! This is the first successful vertical takeoff of a jet-powered flying humanoid robot, developed by Artificial and Mechanical Intelligence (AMI) at Istituto Italiano di Tecnologia (IIT). The robot lifted ~50 cm off the ground while maintaining dynamic stability, thanks to advanced AI-based control systems and aerodynamic modeling. We will have much more on this in the coming weeks! [ Nature ] via [ IIT ] As a first step towards our mission of deploying general purpose robots, we are pushing the frontiers of what end-to-end AI models can achieve in the real world. We’ve been training models and evaluating their capabilities for dexterous sensorimotor policies across different embodiments, environments, and physical interactions. We’re sharing capability demonstrations on tasks stressing different aspects of manipulation: fine motor control, spatial and temporal precision, generalization across robots and settings, and robustness to external disturbances. [ Generalist AI ] Thanks, Noah! Ground Control Robotics is introducing SCUTTLE, our newest elongate multilegged platform for mobility anywhere! [ Ground Control Robotics ] Teleoperation has been around for a while, but what hasn’t been is precise, real-time force feedback.That’s where Flexiv steps in to shake things up. Now, whether you’re across the room or across the globe, you can experience seamless, high-fidelity remote manipulation with a sense of touch. This sort of thing usually takes some human training, for which you’d be best served by robot arms with precise, real-time force feedback . Hmm, I wonder where you’d find those... [ Flexiv ] The 1X World Model is a data-driven simulator for humanoid robots, built with a grounded understanding of physics. It allows us to predict—or “hallucinate”—the outcomes of NEO’s actions before they’re taken in the real world. Using the 1X World Model, we can instantly assess the performance of AI models—compressing development time and providing a clear benchmark for continuous improvement. [ 1X ] SLAPBOT is an interactive robotic artwork by Hooman Samani and Chandler Cheng, exploring the dynamics of physical interaction, artificial agency, and power. The installation features a robotic arm fitted with a soft, inflatable hand that delivers slaps through pneumatic actuation, transforming a visceral human gesture into a programmed robotic response. I asked, of course, whether SLAPBOT slaps people, and it does not: “Despite its provocative concept and evocative design, SLAPBOT does not make physical contact with human participants. It simulates the gesture of slapping without delivering an actual strike. 
The robotic arm’s movements are precisely choreographed to suggest the act, yet it maintains a safe distance.” [ SLAPBOT ] Thanks, Hooman! Inspecting the bowels of ships is something we’d really like robots to be doing for us, please and thank you. [ Norwegian University of Science and Technology ] via [ GitHub ] Thanks, Kostas! H2L Corporation (hereinafter referred to as H2L) has unveiled a new product called “Capsule Interface,” which transmits whole-body movements and strength, enabling new shared experiences with robots and avatars. A product introduction video depicting a synchronization never before experienced by humans was also released. [ H2L Corp. ] via [ RobotStart ] How do you keep a robot safe without requiring it to look at you? Radar ! [ Paper ] via [ IEEE Sensors Journal ] Thanks, Bram! We propose Aerial Elephant Trunk, an aerial continuum manipulator inspired by the elephant trunk, featuring a small-scale quadrotor and a dexterous, compliant tendon-driven continuum arm for versatile operation in both indoor and outdoor settings. [ Adaptive Robotics Controls Lab ] This video demonstrates a heavy weight lifting test using the ARMstrong Dex robot, focusing on a 40 kg bicep curl motion. ARMstrong Dex is a human-sized, dual-arm hydraulic robot currently under development at the Korea Atomic Energy Research Institute (KAERI) for disaster response applications. Designed to perform tasks flexibly like a human while delivering high power output, ARMstrong Dex is capable of handling complex operations in hazardous environments. [ Korea Atomic Energy Research Institute ] Micro-robots that can inspect water pipes, diagnose cracks and fix them autonomously – reducing leaks and avoiding expensive excavation work – have been developed by a team of engineers led by the University of Sheffield. [ University of Sheffield ] We’re growing in size, scale, and impact! We’re excited to announce the opening of our serial production facility in the San Francisco Bay Area, the very first purpose-built robotaxi assembly facility in the United States. More space means more innovation, production, and opportunities to scale our fleet. [ Zoox ] Watch multipick in action as our pickle robot rapidly identifies, picks, and places multiple boxes in a single swing of an arm. [ Pickle ] And now, this. [ Aibo ] Cargill’s Amsterdam Multiseed facility enlists Spot and Orbit to inspect machinery and perform visual checks, enhanced by all-new AI features, as part of their “Plant of the Future” program. [ Boston Dynamics ] This ICRA 2025 plenary talk is from Raffaello D’Andrea, entitled “Models are Dead, Long Live Models!” [ ICRA 2025 ] Will data solve robotics and automation? Absolutely! Never! Who knows! Let’s argue about it! [ ICRA 2025 ]
spectrum.ieee.org
evanackerman.bsky.social
Video Friday: AI Model Gives Neo Robot Autonomy https://spectrum.ieee.org/video-friday-neo-humanoid-robot
Video Friday: AI Model Gives Neo Robot Autonomy
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. 2025 Energy Drone & Robotics Summit : 16–18 June 2025, HOUSTON RSS 2025 : 21–25 June 2025, LOS ANGELES ETH Robotics Summer School : 21–27 June 2025, GENEVA IAS 2025 : 30 June–4 July 2025, GENOA, ITALY ICRES 2025 : 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics : 8–11 July 2025, SUWON, SOUTH KOREA IFAC Symposium on Robotics : 15–18 July 2025, PARIS RoboCup 2025 : 15–21 July 2025, BAHIA, BRAZIL RO-MAN 2025 : 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025 : 5–7 September 2025, SHENZHEN CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! Introducing Redwood—1X’s breakthrough AI model capable of doing chores around the home. For the first time, NEO Gamma moves, understands, and interacts autonomously in complex human environments. Built to learn from real-world experiences, Redwood empowers NEO to perform end-to-end mobile manipulation tasks like retrieving objects for users, opening doors, and navigating around the home gracefully, on top of hardware designed for compliance, safety, and resilience. - YouTube www.youtube.com [ 1X Technology ] Marek Michalowski , who co-created Keepon , has not posted to his YouTube channel in 17 years. Until this week. The new post? It’s about a project from 10 years ago! [ Project Sundial ] Helix can now handle a wider variety of packaging approaching human-level dexterity and speed, bringing us closer to fully autonomous package sorting. This rapid progress underscores the scalability of Helix’s learning-based approach to robotics, translating quickly into real-world application. [ Figure ] This is certainly an atypical Video Friday selection, but I saw this Broadway musical called “Maybe Happy Ending” a few months ago because the main characters are deprecated humanoid home service robots. It was utterly charming, and it just won the Tony award for best new musical among others. [ "Maybe Happy Ending " ] Boston Dynamics brought a bunch of Spots to “America’s Got Talent,” and kudos to them for recovering so gracefully from an on stage failure. [ Boston Dynamics ] I think this is the first time I’ve seen end-effector changers used for either feet or heads. [ CNRS-AIST Joint Robotics Laboratory ] ChatGPT has gone fully Navrim—complete with existential dread and maximum gloom! Watch as the most pessimistic ChatGPT-powered robot yet moves chess pieces across a physical board, deeply contemplating both chess strategy and the futility of existence. Experience firsthand how seamlessly AI blends with robotics, even if Navrim insists there’s absolutely no point. Not bad for $219 all in. [ Vassar Robotics ] We present a single layer multimodal sensory skin made using only a highly sensitive hydrogel membrane. Using electrical impedance tomography techniques, we access up to 863,040 conductive pathways across the membrane, allowing us to identify at least six distinct types of multimodal stimuli, including human touch, damage, multipoint insulated presses, and local heating. To demonstrate our approach’s versatility, we cast the hydrogel into the shape and size of an adult human hand. 
[ Bio-Inspired Robotics Laboratory ] paper published in [ Science Robotics ] (A rough code sketch of the EIT readout idea follows this post.) This paper introduces a novel robot designed to exhibit two distinct modes of mobility: rotational aerial flight and terrestrial locomotion. This versatile robot comprises a sturdy external frame, two motors, and a single wing embodying its fuselage. The robot is capable of vertical takeoff and landing in mono-wing flight mode, with the unique ability to fly in both clockwise and counterclockwise directions, setting it apart from traditional mono-wings. [ AIR Lab paper ] published in [ The International Journal of Robotics Research ] When TRON 1 goes to work, all he does is steal snacks from hoomans. Apparently. [ LimX Dynamics ] The 100,000th robot has just rolled off the line at Pudu Robotics’ Super Factory! This key milestone highlights our cutting-edge manufacturing strength and marks a global shipment volume of over 100,000 units delivered worldwide. [ Pudu Robotics ] Now that is a big saw. [ Kuka Robotics ] NASA Jet Propulsion Laboratory has developed the Exploration Rover for Navigating Extreme Sloped Terrain, or ERNEST. This rover could lead to a new class of low-cost planetary rovers for exploration of previously inaccessible locations on Mars and the moon. [ NASA Jet Propulsion Laboratory paper ] Brett Adcock, founder and CEO of Figure AI, speaks with Bloomberg Television’s Ed Ludlow about how the company is training humanoid robots for logistics, manufacturing, and future roles in the home, at Bloomberg Tech in San Francisco. [ Figure ] Peggy Johnson, CEO of Agility Robotics, discusses how humanoid robots like Digit are transforming logistics and manufacturing. She speaks with Bloomberg Businessweek’s Brad Stone about the rapid advances in automation and the next era of robots in the workplace at Bloomberg Tech in San Francisco. [ Agility Robotics ] This ICRA 2025 plenary talk, from Allison Okamura, is entitled “Rewired: The Interplay of Robots and Society.” [ ICRA 2025 ]
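For the hydrogel skin above, here is a rough sketch of how an electrical-impedance-tomography-style readout can be turned into stimulus classification: current is driven across pairs of boundary electrodes, the boundary voltages recorded for each drive pattern form a measurement vector, and a classifier maps changes in that vector to stimulus types. The electrode count, the faked conductivity response, and the nearest-centroid classifier are all illustrative assumptions, not details from the Science Robotics paper.

```python
# Minimal sketch of an EIT-style touch classifier (illustrative, not the authors' code).
import numpy as np

N_ELECTRODES = 16  # assumed electrode count around the membrane edge, not the paper's hardware

def measure_frame(conductivity, rng):
    """One EIT-style frame: for each drive electrode, record a vector of boundary
    'voltages'. The physics is faked with a local scaling plus noise, purely so
    the classifier has data of the right shape."""
    frame = []
    for drive in range(N_ELECTRODES):
        local = conductivity[drive % len(conductivity)]  # toy local conductivity near the drive pair
        voltages = local * np.ones(N_ELECTRODES - 2) + rng.normal(0.0, 0.01, N_ELECTRODES - 2)
        frame.append(voltages)
    return np.concatenate(frame)

def nearest_centroid(frame, centroids):
    """Label a frame with the class whose stored mean frame is closest."""
    labels = list(centroids)
    dists = [np.linalg.norm(frame - centroids[k]) for k in labels]
    return labels[int(np.argmin(dists))]

rng = np.random.default_rng(0)

# Hypothetical calibration data: each stimulus type perturbs conductivity differently.
stimuli = {
    "no_contact":  np.full(8, 1.00),
    "light_touch": np.array([1.0, 1.0, 0.8, 0.8, 1.0, 1.0, 1.0, 1.0]),
    "local_heat":  np.array([1.2, 1.2, 1.2, 1.0, 1.0, 1.0, 1.0, 1.0]),
}
centroids = {name: np.mean([measure_frame(c, rng) for _ in range(20)], axis=0)
             for name, c in stimuli.items()}

# "Runtime": classify a fresh frame produced by a light touch.
print(nearest_centroid(measure_frame(stimuli["light_touch"], rng), centroids))
```

The point of the sketch is only the pipeline shape—many drive/measure patterns per frame, one feature vector, one label. Real EIT systems typically reconstruct a conductivity map from these boundary measurements, or learn directly from the raw voltages.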
evanackerman.bsky.social
Video Friday: Hopping On One Robotic Leg https://spectrum.ieee.org/video-friday-one-legged-robot
Video Friday: Hopping On One Robotic Leg
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. 2025 Energy Drone & Robotics Summit : 16–18 June 2025, HOUSTON, TX RSS 2025 : 21–25 June 2025, LOS ANGELES ETH Robotics Summer School : 21–27 June 2025, GENEVA IAS 2025 : 30 June–4 July 2025, GENOA, ITALY ICRES 2025 : 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics : 8–11 July 2025, SUWON, SOUTH KOREA IFAC Symposium on Robotics : 15–18 July 2025, PARIS RoboCup 2025 : 15–21 July 2025, BAHIA, BRAZIL RO-MAN 2025 : 25–29 August 2025, EINDHOVEN, THE NETHERLANDS CLAWAR 2025 : 5–7 September 2025, SHENZHEN CoRL 2025 : 27–30 September 2025, SEOUL IEEE Humanoids : 30 September–2 October 2025, SEOUL World Robot Summit : 10–12 October 2025, OSAKA, JAPAN IROS 2025 : 19–25 October 2025, HANGZHOU, CHINA Enjoy today’s videos! This single-leg robot is designed to “form a foundation for future bipedal robot development,” but personally, I think it’s perfect as is. [ KAIST Dynamic Robot Control and Design Lab ] Selling 17k social robots still amazes me. Aldebaran will be missed. [ Aldebaran ] Nice to see some actual challenging shoves as part of biped testing. [ Under Control Robotics ] Ground Control made multilegged waves at IEEE’s International Conference on Robotics and Automation 2025 in Atlanta! We competed in the Startup Pitch Competition and demoed our robot at our booth, on NIST standard terrain, and around the convention. We were proud to be a finalist for Best Expo Demo and participate in the Robot Parade. [ Ground Control Robotics ] Thanks, Dan! Humanoid is a UK-based robotics innovation company dedicated to building commercially scalable, reliable, and safe robotic solutions for real-world applications. It’s a nifty bootup screen, I’ll give them that. [ Humanoid ] Thanks, Kristina! Quadrupedal robots have demonstrated remarkable agility and robustness in traversing complex terrains. However, they remain limited in performing object interactions that require sustained contact. In this work, we present LocoTouch, a system that equips quadrupedal robots with tactile sensing to address a challenging task in this category: long-distance transport of unsecured cylindrical objects, which typically requires custom mounting mechanisms to maintain stability. [ LocoTouch paper ] Thanks, Changyi! In this video, Digit is performing tasks autonomously using a whole-body controller for mobile manipulation. This new controller was trained in simulation, enabling Digit to execute tasks while navigating new environments and manipulating objects it has never encountered before. Not bad, although it’s worth pointing out that those shelves are not representative of any market I’ve ever been to. [ Agility Robotics ] It’s always cool to see robots presented as an incidental solution to a problem as opposed to, you know, robots. The question that you really want answered, though, is “why is there water on the floor?” [ Boston Dynamics ] Reinforcement learning (RL) has significantly advanced the control of physics-based and robotic characters that track kinematic reference motion. We propose a multi-objective reinforcement learning framework that trains a single policy conditioned on a set of weights, spanning the Pareto front of reward trade-offs. Within this framework, weights can be selected and tuned after training, significantly speeding up iteration time. 
We demonstrate how this improved workflow can be used to perform highly dynamic motions with a robot character. [ Disney Research ] (A toy code sketch of this weight-conditioned setup follows this post.) It’s been a week since ICRA 2025, and TRON 1 already misses all the new friends he made! [ LimX Dynamics ] ROB 450 in Winter 2025 challenged students to synthesize the knowledge acquired through their robotics undergraduate courses at the University of Michigan, applying a systematic and iterative design and analysis process to a real, open-ended robotics problem. [ University of Michigan Robotics ] “What’s The Trick?” is a talk on human vs. current robot learning, given by Chris Atkeson at the Robotics and AI Institute. [ Robotics and AI Institute (RAI) ]
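Since the Disney Research abstract above is dense, here is a toy sketch of what “a single policy conditioned on a set of weights” can look like in practice: the policy sees both the state and a reward-weight vector, training draws random weights so one set of parameters spans the trade-off, and after training the weights become a knob you set at run time. The two objectives, the linear-Gaussian policy, and the REINFORCE-style update are illustrative assumptions, not the paper’s method.

```python
# Toy weight-conditioned multi-objective RL sketch (illustrative, not Disney Research's code).
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(3)   # linear policy over features [state, w_track, w_effort]
sigma = 0.1           # fixed exploration noise
baseline = 0.0        # running average of reward, for variance reduction

def features(state, w):
    return np.array([state, w[0], w[1]])

def rewards(action):
    """Two competing objectives: stay near a reference of 1.0 vs. keep actions small."""
    return np.array([-(action - 1.0) ** 2, -(action ** 2)])

# Training: each episode draws a random trade-off weight w and takes a
# one-step REINFORCE update on the scalarized reward w . r.
for episode in range(10_000):
    w = rng.dirichlet([1.0, 1.0])             # random point on the weight simplex
    state = rng.uniform(-1.0, 1.0)
    feats = features(state, w)
    mean_action = theta @ feats
    action = mean_action + rng.normal(0.0, sigma)
    scalar_r = w @ rewards(action)
    advantage = scalar_r - baseline
    baseline += 0.01 * (scalar_r - baseline)  # slow-moving baseline
    grad_log_pi = (action - mean_action) / sigma**2 * feats
    theta += 0.02 * advantage * grad_log_pi

# "After training": the SAME parameters, different weights chosen at run time.
for w in ([1.0, 0.0], [0.5, 0.5], [0.0, 1.0]):
    a = theta @ features(0.0, np.array(w))
    print(f"w={w} -> mean action {a:+.2f}")
```

Run as-is, the final loop should print mean actions that slide from near 1.0 (pure tracking) toward near 0.0 (pure effort saving) as the weights change, with no retraining in between, which is the workflow speedup the abstract is pointing at.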
evanackerman.bsky.social
This Little Mars Rover Stayed Home https://spectrum.ieee.org/mars-pathfinder-rover
This Little Mars Rover Stayed Home
As a mere earthling, I remember watching in fascination as Sojourner sent back photos of the Martian surface during the summer of 1997. I was not alone. The servers at NASA’s Jet Propulsion Lab slowed to a crawl when they got more than 47 million hits (a record number!) from people attempting to download those early images of the Red Planet. To be fair, it was the late 1990s, the Internet was still young, and most people were using dial-up modems. By the end of the 83-day mission, Sojourner had sent back 550 photos and performed more than 15 chemical analyses of Martian rocks and soil. Sojourner , of course, remains on Mars. Pictured here is Marie Curie, its twin. Functionally identical, either one of the rovers could have made the voyage to Mars, but one of them was bound to become the famous face of the mission, while the other was destined to be left behind in obscurity. Did I write this piece because I feel a little bad for Marie Curie ? Maybe. But it also gave me a chance to revisit this pioneering Mars mission, which established that robots could effectively explore the surface of planets and captivate the public imagination. Sojourner ’s sojourn on Mars On 4 July 1997, the Mars Pathfinder parachuted through the Martian atmosphere and bounced about 15 times on glorified airbags before finally coming to a rest. The lander, renamed the Carl Sagan Memorial Station , carried precious cargo stowed inside. The next day, after the airbags retracted, the solar-powered Sojourner eased its way down the ramp, the first human-made vehicle to roll around on the surface of another planet. (It wasn’t the first extraterrestrial body, though. The Soviet Lunokhod rovers conducted two successful missions on the moon in 1970 and 1973. The Soviets had also landed a rover on Mars back in 1971, but communication was lost before the PROP-M ever deployed.) This giant sandbox at JPL provided Marie Curie with an approximation of Martian terrain. Mike Nelson/AFP/Getty Images The six-wheeled, 10.6-kilogram, microwave-oven-size Sojourner was equipped with three low-resolution cameras (two on the front for black-and-white images and a color camera on the rear), a laser hazard–avoidance system, an alpha-proton X-ray spectrometer, experiments for testing wheel abrasion and material adherence, and several accelerometers. The robot also demonstrated the value of the six-wheeled “rocker-bogie” suspension system that became NASA’s go-to design for all later Mars rovers. Sojourner never roamed more than about 12 meters from the lander due to the limited range of its radio. Pathfinder had landed in Ares Vallis , an assumed ancient floodplain chosen because of the wide variety of rocks present. Scientists hoped to confirm the past existence of water on the surface of Mars. Sojourner did discover rounded pebbles that suggested running water, and later missions confirmed it. A highlight of Sojourner ’s 83-day mission on Mars was its encounter with a rock nicknamed Barnacle Bill [to the rover’s left]. JPL/NASA As its first act of exploration, Sojourner rolled forward 36 centimeters and encountered a rock, dubbed Barnacle Bill due to its rough surface. The rover spent about 10 hours analyzing the rock, using its spectrometer to determine the elemental composition. Over the next few weeks, while the lander collected atmospheric information and took photos, the rover studied rocks in detail and tested the Martian soil. 
Marie Curie ’s sojourn…in a JPL sandbox Meanwhile back on Earth, engineers at JPL used Marie Curie to mimic Sojourner’s movements in a Mars-like setting. During the original design and testing of the rovers, the team had set up giant sandboxes, each holding thousands of kilograms of playground sand, in the Space Flight Operations Facility at JPL. They exhaustively practiced the remote operation of Sojourner , including an 11-minute delay in communications between Mars and Earth. (The actual delay can vary from 7 to 20 minutes.) Even after Sojourner landed, Marie Curie continued to help them strategize. Initially, Sojourner was remotely operated from Earth, which was tricky given the lengthy communication delay. Mike Nelson/AFP/Getty Images During its first few days on Mars, Sojourner was maneuvered by an Earth-based operator wearing 3D goggles and using a funky input device called a Spaceball 2003 . Images pieced together from both the lander and the rover guided the operator. It was like a very, very slow video game—the rover sometimes moved only a few centimeters a day. NASA then turned on Sojourner’s hazard-avoidance system, which allowed the rover some autonomy to explore its world. A human would suggest a path for that day’s exploration, and then the rover had to autonomously avoid any obstacles in its way, such as a big rock, a cliff, or a steep slope. JPL designed Sojourner to operate for a week. But the little rover that could kept chugging along for 83 Martian days before NASA finally lost contact, on 7 October 1997. The lander had conked out on 27 September. In all, the mission collected 1.2 gigabytes of data (which at the time was a lot ) and sent back 10,000 images of the planet’s surface. NASA held on to Marie Curie with the hopes of sending it on another mission to Mars. For a while, it was slated to be part of the Mars 2001 set of missions, but that didn’t happen. In 2015, JPL transferred the rover to the Smithsonian’s National Air and Space Museum . When NASA Embraced Faster, Better, Cheaper The Pathfinder mission was the second one in NASA administrator Daniel S. Goldin ’s Discovery Program, which embodied his “faster, better, cheaper” philosophy of making NASA more nimble and efficient. (The first Discovery mission was to the asteroid Eros.) In the financial climate of the early 1990s, the space agency couldn’t risk a billion-dollar loss if a major mission failed. Goldin opted for smaller projects; the Pathfinder mission’s overall budget, including flight and operations, was capped at US $300 million. RELATED: How NASA Built Its Mars Rovers In his 2014 book Curiosity: An Inside Look at the Mars Rover Mission and the People Who Made It Happen (Prometheus), science writer Rod Pyle interviews Rob Manning , chief engineer for the Pathfinder mission and subsequent Mars rovers. Manning recalled that one of the best things about the mission was its relatively minimal requirements. The team was responsible for landing on Mars, delivering the rover, and transmitting images—technically challenging, to be sure, but beyond that the team had no constraints. Sojourner was succeeded by the rovers Spirit , Opportunity , and Curiosity . Shown here are four mission spares, including Marie Curie [foreground]. JPL-Caltech/NASA The real mission was to prove to Congress and the American public that NASA could do groundbreaking work more efficiently. 
Behind the scenes, there was a little bit of accounting magic happening, with the “faster, better, cheaper” missions often being silently underwritten by larger, older projects. For example, the radioisotope heater units that kept Sojourner ’s electronics warm enough to operate were leftover spares from the Galileo mission to Jupiter, so they were “free.” Not only was the Pathfinder mission successful but it captured the hearts of Americans and reinvigorated an interest in exploring Mars. In the process, it set the foundation for the future missions that allowed the rovers Spirit , Opportunity , and Curiosity (which, incredibly, is still operating nearly 13 years after it landed) to explore even more of the Red Planet. How the rovers Sojourner and Marie Curie got their names To name its first Mars rovers, NASA launched a student contest in March 1994, with the specific guidance of choosing a “heroine.” Entry essays were judged on their quality and creativity, the appropriateness of the name for a rover, and the student’s knowledge of the woman to be honored as well as the mission’s goals. Students from all over the world entered. Twelve-year-old Valerie Ambroise of Bridgeport, Conn., won for her essay on Sojourner Truth , while 18-year-old Deepti Rohatgi of Rockville, Md., came in second for hers on Marie Curie . Truth was a Black woman born into slavery at the end of the 18th century. She escaped with her infant daughter and two years later won freedom for her son through legal action. She became a vocal advocate for civil rights, women’s rights, and alcohol temperance. Curie was a Polish-French physicist and chemist famous for her studies of radioactivity, a term she coined. She was the first woman to win a Nobel Prize, as well as the first person to win a second Nobel. NASA subsequently recognized several other women with named structures. One of the last women to be so honored was Nancy Grace Roman , the space agency’s first chief of astronomy. In May 2020, NASA announced it would name the Wide Field Infrared Survey Telescope after Roman; the space telescope is set to launch as early as October 2026, although the Trump administration has repeatedly said it wants to cancel the project . Related: A Trillion Rogue Planets and Not One Sun to Shine on Them These days, NASA tries to avoid naming its major projects after people. It quietly changed its naming policy in December 2022 after allegations came to light that James Webb, for whom the James Webb Space Telescope is named, had fired LGBTQ+ employees at NASA and, before that, the State Department. A NASA investigation couldn’t substantiate the allegations, and so the telescope retained Webb’s name. But the bar is now much higher for NASA projects to memorialize anyone, deserving or otherwise. (The agency did allow the hopping lunar robot IM-2 Micro Nova Hopper , built by Intuitive Machines, to be named for computer-software pioneer Grace Hopper .) And so Marie Curie and Sojourner will remain part of a rarefied clique. Sojourner , inducted into the Robot Hall of Fame in 2003, will always be the celebrity of the pair. And Marie Curie will always remain on the sidelines. But think about it this way: Marie Curie is now on exhibit at one of the most popular museums in the world, where millions of visitors can see the rover up close. That’s not too shabby a legacy either. Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. 
An abridged version of this article appears in the June 2025 print issue. References Curator Matthew Shindell of the National Air and Space Museum first suggested I feature Marie Curie . I found additional information from the museum’s collections website , an article by David Kindy in Smithsonian magazine , and the book After Sputnik: 50 Years of the Space Age (Smithsonian Books/HarperCollins, 2007) by Smithsonian curator Martin Collins. NASA has numerous resources documenting the Mars Pathfinder mission, such as the mission website , fact sheet , and many lovely photos (including some of Barnacle Bill and a composite of Marie Curie during a prelaunch test). Curiosity: An Inside Look at the Mars Rover Mission and the People Who Made It Happen (Prometheus, 2014) by Rod Pyle and Roving Mars: Spirit, Opportunity, and the Exploration of the Red Planet (Hyperion, 2005) by planetary scientist Steve Squyres are both about later Mars missions and their rovers, but they include foundational information about Sojourner .