IEEE Spectrum
The world's leading engineering magazine

Capacity Limits in 5G Prompt a 6G Focus on Infrastructure
When the head of Nokia Bell Labs core research talks about “lessons learned” from 5G, he’s doing something rare in telecom: admitting a flagship technology didn’t quite work out as planned. That candor matters now because Bell Labs core research president Peter Vetter says 6G’s success depends on getting infrastructure right the first time—something 5G didn’t fully do.

By 2030, he says, 5G will have exhausted its capacity. Not because some 5G killer app will appear tomorrow, suddenly making everyone’s phones demand 10 or 100 times as much data capacity as they require today. Rather, by the turn of the decade, wireless telecom won’t be centered on just cellphones anymore. AI agents, autonomous cars, drones, IoT nodes, and sensors, sensors, sensors: Everything in a 6G world will potentially need a way onto the network. That means that, more than anything else in the remaining years before 6G’s anticipated rollout, the high-capacity connections behind cell towers are the key game to win.

Which brings industry scrutiny to what telecom folks call backhaul—the high-capacity fiber or wireless links that pass data from cell towers toward the internet backbone. It’s the difference between the “local” connection from your phone to a nearby tower and the “trunk” connection that carries millions of signals simultaneously.

But the backhaul crisis ahead isn’t just about capacity. It’s also about architecture. 5G was designed around a world where phones dominated, downloading video at higher and higher resolutions. 6G is now shaping up to be something else entirely. This inversion—from 5G’s anticipated downlink deluge to 6G’s uplink resurgence—requires rethinking everything at the core level, practically from scratch.

Vetter’s career spans the entire arc of the wireless telecom era—from optical interconnections in the 1990s at Alcatel (at a research center pioneering fiber-to-the-home connections) to his roles at Bell Labs and later Nokia Bell Labs, culminating in 2021 in his current position at the industry’s bellwether institution. In this conversation, held in November at the Brooklyn 6G Summit in New York, Vetter explains what 5G got wrong, what 6G must do differently, and whether these innovations can arrive before telecom’s networks start running out of room.

### 5G’s Expensive Miscalculation

**_IEEE Spectrum:_ Where is telecom today, halfway between 5G’s rollout and 6G’s anticipated rollout?**

**Peter Vetter:** Today, we have enough spectrum and capacity. But going forward, there will not be enough. The 5G network by the end of the decade will run out of steam. We have traffic simulations. And it is something that has been consistent generation to generation, from 2G to 3G to 4G. Every decade, capacity goes up by about a factor of 10. So you need to prepare for that. And the challenge for us as researchers is how do you do that in an energy-efficient way? Because the power consumption cannot go up by a factor of 10. The cost cannot go up by a factor of 10.

And then, lesson learned from 5G: The idea was, “Oh, we do that in higher spectrum. There is more bandwidth. Let’s go to millimeter wave.” The lesson learned is, okay, millimeter waves have short reach. You need a small cell [tower] every 300 meters or so. And that doesn’t cut it. It was too expensive to install all these small cells.

**Is this related to the backhaul question?**

**Vetter:** So backhaul is the connection between the base station and what we call the core of the network—the data centers and the servers.
Ideally, you use fiber to your base station. If you have that fiber as a service provider, use it. It gives you the highest capacity. But very often new cell sites don’t have that fiber backhaul, and then there are alternatives: wireless backhaul.

Nokia Bell Labs has pioneered a glass-based chip architecture for telecom’s backhaul signals, communicating between towers and telecom infrastructure. Nokia

### Radios Built on Glass Push Frequencies Higher

**What are the challenges ahead for wireless backhaul?**

**Vetter:** To get up to 100-gigabit-per-second, fiber-like speeds, you need to go to higher frequency bands.

**Higher frequency bands for the signals the backhaul antennas use?**

**Vetter:** Yes. The challenge is the design of the radio front ends and the radio-frequency integrated circuits (RFICs) at those frequencies. You cannot really integrate [present-day] antennas with RFICs at those high speeds.

**And what happens as those signal frequencies get higher?**

**Vetter:** So in a millimeter wave, say 28 gigahertz, you could still do [the electronics and waveguides] for this with a classical printed circuit board. But as the frequencies go up, the attenuation gets too high.

**What happens when you get to, say, 100 GHz?**

**Vetter:** [Conventional materials] are no good anymore. So we need to look at other still low-cost materials. We have done pioneering work at Bell Labs on radio on glass. And we use glass not for its optical transparency, but for its transparency in the [sub-terahertz] radio range.

**Is Nokia Bell Labs making these radio-on-glass backhaul systems for 100 GHz communications?**

**Vetter:** I used an order of magnitude. Above 100 GHz, you need to look into a different material. But the [frequency] range is actually 140 to 170 GHz, what is called the D-band. We collaborate with our internal customers to get these kinds of concepts on the long-term roadmap. As an example, that D-band radio system, we actually integrated it in a prototype with our mobile business group. And we tested it last year at the Olympics in Paris. But this is, as I said, a prototype. We need to mature the technology between a research prototype and qualifying it to go into production. The researcher on that is Shahriar Shahramian. He’s well-known in the field for this.

### Why 6G’s Bandwidth Crisis Isn’t About Phones

**What will be the applications that’ll drive the big 6G demands for bandwidth?**

**Vetter:** We’re installing more and more cameras and other types of sensors. I mean, we’re going into a world where we want to create large world models that are synchronous copies of the physical world. So what we will see going forward in 6G is a massive-scale deployment of sensors which will feed the AI models. So a lot of uplink capacity. That’s where a lot of that increase will come from.

**Any others?**

**Vetter:** Autonomous cars could be an example. It can also be in industry—like a digital twin of a harbor, and how you manage that. It can be a digital twin of a warehouse, and you query the digital twin, “Where is my product X?” Then a robot will automatically know, thanks to the updated digital twin, where it is in the warehouse and which route to take. Because it knows where the obstacles are in real time, thanks to that massive-scale sensing of the physical world and then the interpretation with the AI models. You will have your agents that act on your behalf to do your groceries, or order a driverless car.
They will actively record where you are, and we’ll make sure that there are also the proper privacy measures in place, so that your agent has an understanding of the state you’re in and can serve you in the most optimal way.

### How 6G Networks Will Help Detect Drones, Earthquakes, and Tsunamis

**You’ve described before how 6G signals can not only transmit data but also provide sensing. How will that work?**

**Vetter:** The augmentation now is that the network can also be turned into a sensing modality. If you turn around the corner, a camera doesn’t see you anymore. But the radio can still detect people that are coming, for instance, at a traffic crossing. And you can anticipate that. Yeah, warn a car that, “There’s a pedestrian coming. Slow down.” We also have fiber sensing. For instance, using fibers at the bottom of the ocean, we can detect movements of waves, detect tsunamis, and do an early tsunami warning.

**What are your teams’ findings?**

**Vetter:** The present-day tsunami warning buoys are a few hundred kilometers offshore. These tsunami waves travel at 300 and more meters per second, and so you only have 15 minutes to warn the people and evacuate. If you now have a fiber-sensing network across the ocean, so that you can detect it much deeper in the ocean, you can do meaningful early tsunami warning. We recently detected a major earthquake in East Russia. That was last July. And we had a fiber-sensing system between Hawaii and California. And we were able to see that earthquake on the fiber. And we also saw the development of the tsunami wave.

### 6G’s Thousands of Antennas and Smarter Waveforms

**Bell Labs was an early pioneer in multiple-input, multiple-output (MIMO) antennas starting in the 1990s, in which multiple transmit and receive antennas carry many data streams at once. What is Bell Labs doing with MIMO now to help solve these bandwidth problems you’ve described?**

**Vetter:** So, as I said earlier, you want to provide capacity from existing cell sites. And the way MIMO can do that is by a technology called beamforming. Simplified: If you want better coverage at a higher frequency, you need to focus your electromagnetic energy, your radio energy, even more. So in order to do that, you need a larger number of antennas. So we double the frequency: We go from 3.5 gigahertz, which is the C-band in 5G, to about 7 gigahertz in 6G. So it’s about double. That means the wavelength is half. So you can fit four times more antenna elements in the same form factor. So physics helps us in that sense.

**What’s the catch?**

**Vetter:** Where physics doesn’t help us is more antenna elements means more signal processing, and the power consumption goes up. So here is where the research then comes in. Can we creatively get to these larger antenna arrays without the power consumption going up? The use of AI is important in this. How can we leverage AI to do channel estimation, to do such things as equalization, to do smart beamforming, to learn the waveform, for instance? We’ve shown that with these kinds of AI techniques, we can actually get up to 30 percent more capacity on the same spectrum.

**And that allows many gigabits per second to go out to each phone or device?**

**Vetter:** So gigabits per second is already possible in 5G. We’ve demonstrated that. You can imagine that this could go up, but that’s not really the need. The need is really how many more can you support from a base station?
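
Vetter’s antenna arithmetic is easy to check. Below is a minimal back-of-the-envelope sketch in Python, assuming the standard half-wavelength element spacing and a fixed panel size (the 0.5-meter panel is an illustrative figure, not a Nokia specification):

```python
# Rough check of the scaling Vetter describes: at half-wavelength spacing,
# a fixed-size square panel holds about (2 * side * f / c)^2 elements,
# so doubling the carrier frequency roughly quadruples the element count
# (and, with it, the signal-processing load he warns about).
C = 3e8  # speed of light, m/s

def elements_in_panel(freq_hz: float, panel_side_m: float = 0.5) -> int:
    """Antenna elements that fit in a square panel at half-wavelength spacing."""
    wavelength = C / freq_hz
    per_side = int(panel_side_m // (wavelength / 2))
    return per_side ** 2

for f in (3.5e9, 7e9):  # 5G C-band vs. the 7-GHz range discussed for 6G
    print(f"{f / 1e9:.1f} GHz: ~{elements_in_panel(f)} elements")
# Prints roughly 121 elements at 3.5 GHz and 529 at 7 GHz: about the
# factor-of-4 increase in the same form factor that Vetter cites.
```
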
December 3, 2025 at 5:41 AM
Why We Keep Making the Same Software Mistakes
Talking to Robert N. Charette can be pretty depressing. Charette, who has been writing about software failures for this magazine for the past 20 years, is a renowned risk analyst and systems expert who over the course of a 50-year career has seen more than his share of delusional thinking among IT professionals, government officials, and corporate executives, before, during, and after massive software failures.

In 2005’s “Why Software Fails,” a seminal __IEEE Spectrum__ article documenting the causes behind large-scale software failures, Charette noted, “The biggest tragedy is that software failure is for the most part predictable and avoidable. Unfortunately, most organizations don’t see preventing failure as an urgent matter, even though that view risks harming the organization and maybe even destroying it. Understanding why this attitude persists is not just an academic exercise; it has tremendous implications for business and society.”

Two decades and several trillion wasted dollars later, he finds that people are making the same mistakes. They claim their project is unique, so past lessons don’t apply. They underestimate complexity. Managers come out of the gate with unrealistic budgets and timelines. Testing is inadequate or skipped entirely. Vendor promises that are too good to be true are taken at face value. Newer development approaches like DevOps or AI copilots are implemented without proper training or the organizational change necessary to make the most of them.

What’s worse, the huge impacts of these missteps on end users aren’t fully accounted for. When the Canadian government’s Phoenix paycheck system initially failed, for instance, the developers glossed over the protracted financial and emotional distress inflicted on tens of thousands of employees receiving erroneous paychecks; problems persist today, nine years later. Perhaps that’s because, as Charette told me recently, IT project managers don’t have professional licensing requirements and are rarely, if ever, held legally liable for software debacles.

While medical devices may seem a far cry from giant IT projects, they have a few things in common. As Special Projects Editor Stephen Cass uncovered in this month’s The Data, the U.S. Food and Drug Administration records an average of 20 medical device recalls per month due to software issues.

“Software is as significant as electricity. We would never put up with electricity going out every other day, but we sure as hell have no problem having AWS go down.” **—Robert N. Charette**

Like IT projects, medical devices face fundamental challenges posed by software complexity. That means testing, though rigorous and regulated in the medical domain, can’t possibly cover every scenario or every line of code. The major difference between failed medical devices and failed IT projects is that a huge amount of liability attaches to the former. “When you’re building software for medical devices, there are a lot more standards that have to be met and a lot more concern about the consequences of failure,” Charette observes. “Because when those things don’t work, there’s tort law available, which means manufacturers are on the hook.
It’s much harder to bring a case and win when you’re talking about an electronic payroll system.” Whether a software failure is hyperlocal, as when a medical device fails inside your body, or spread across an entire region, like when an airline’s ticketing system crashes, organizations need to dig into the root causes and apply those lessons to the next device or IT project if they hope to stop history from repeating itself. “Software is as significant as electricity,” Charette says. “We would never put up with electricity going out every other day, but we sure as hell have no problem accepting AWS going down or telcos or banks going out.” He lets out a heavy sigh worthy of A.A. Milne’s Eeyore. “People just kind of shrug their shoulders.”
December 2, 2025 at 6:41 AM
IEEE President’s Note: Engineering With Purpose
Innovation, expertise, and efficiency often take center stage in the engineering world. Yet engineering’s impact lies not only in technical advancement but also in its ability to serve the greater good. This foundational principle is behind IEEE’s public imperative initiatives, which apply our efforts and expertise to support our mission to advance technology for humanity with a direct benefit to society.

## Serving society

Public imperative activities and initiatives serve society by promoting understanding, impact for humans and our environment, and responsible use of science and technology. These initiatives encompass a wide range of efforts, including STEM outreach, humanitarian technology deployments, public education on emerging technologies, and sustainability. Unlike many efforts advancing technology, these initiatives are not designed with financial opportunity in mind. Instead, they fulfill IEEE’s designation as a 501(c)(3) public charity engaged in scientific and educational activities for the benefit of the engineering community and the public.

### Building a Better World

Across the globe, IEEE members and volunteers dedicate their time and use their talents, experiences, and expertise to lead, organize, and drive activities to advance technology for humanity. The IEEE Social Impact report showcases a selection of recent projects and initiatives that support that mission. In my March column, I described my vision for One IEEE, which is aimed at empowering IEEE’s diverse units to work together in ways that magnify their individual and collective impact. Within the framework of One IEEE, public imperative activities are not peripheral; they are central to unifying the organization and amplifying our global relevance. Across IEEE’s varied regions, societies, and technical communities, these activities align efforts around a shared mission. They provide our members from different disciplines and geographies the opportunity to collaborate on projects that transcend boundaries, fostering interdisciplinary innovation and global stewardship.

Such activities also offer members opportunities to apply their technical expertise in service of societal needs. Whether finding innovative solutions to connect the unconnected or developing open-source educational tools for students, we are solving real-world problems. The initiatives transform abstract technical knowledge into actionable solutions, reinforcing the idea that technology is not just about building systems—it’s about building futures. For our young professionals and students, these activities offer hands-on experiences that connect technical skills with real-world applications, inspiring the next generation to pursue careers in engineering with purpose and passion. These activities also create mentorship opportunities, leadership pathways, and a sense of belonging within the wider IEEE community.

## Principled tech leader

In an age when technology influences practically every aspect of life—from health care and energy to communication and transportation—IEEE must, as a leading technical authority, also serve as a socially responsible leader. Public imperative activities include IEEE’s commitment to ethical development, university and pre-university education, and accessible innovation. They help bridge the gap between technical communities and the public, working to ensure that engineering solutions are accessible, equitable, and aligned with societal values.
From a strategic standpoint, public imperatives also support IEEE’s long-term sustainability. The organization is redesigning its budget process to emphasize aligning financial resources with mission-driven goals. One of the guiding principles is to publicize IEEE’s public charity status and invest accordingly. That means promoting our public imperatives in funding decisions, integrating them into operational planning, and measuring their outcomes with engineering rigor. By treating these activities as core infrastructure, IEEE ensures that its resources are deployed in ways that maximize public benefit and organizational impact.

Public imperatives are vital to the success of One IEEE. They embody the organization’s mission, unify its global membership, and demonstrate the societal relevance of engineering and technology. They offer our members the opportunity to apply their skills in meaningful ways, contribute to public good, and shape the future of technology with integrity. Through our public imperative activities, IEEE is a force for innovation and a driver of meaningful impact.

_This article appears in the December 2025 print issue as “Engineering With Purpose.”_
December 2, 2025 at 6:41 AM
The Next Frontier in AI Isn’t More Data
For the past decade, progress in artificial intelligence has been measured by scale: bigger models, larger datasets, and more compute. That approach delivered astonishing breakthroughs in large language models (LLMs); in just five years, AI has leapt from models like GPT-2, which could hardly mimic coherence, to systems like GPT-5 that can reason and engage in substantive dialogue. And now early prototypes of AI agents that can navigate codebases or browse the web point towards an entirely new frontier.

But size alone can only take AI so far. The next leap won’t come from bigger models alone. It will come from combining ever-better data with worlds we build for models to learn in. And the most important question becomes: What do classrooms for AI look like? In the past few months Silicon Valley has placed its bets, with labs investing billions in constructing such classrooms, which are called reinforcement learning (RL) environments. These environments let machines experiment, fail, and improve in realistic digital spaces.

## AI Training: From Data to Experience

The history of modern AI has unfolded in eras, each defined by the kind of data that the models consumed. First came the age of pretraining on internet-scale datasets. This commodity data allowed machines to mimic human language by recognizing statistical patterns. Then came data combined with reinforcement learning from human feedback—a technique that uses crowd workers to grade responses from LLMs—which made AI more useful, responsive, and aligned with human preferences.

We have experienced both eras firsthand. Working in the trenches of model data at Scale AI exposed us to what many consider the fundamental problem in AI: ensuring that the training data fueling these models is diverse, accurate, and effective in driving performance gains. Systems trained on clean, structured, expert-labeled data made leaps. Cracking the data problem allowed us to pioneer some of the most critical advancements in LLMs over the past few years.

Today, data is still a foundation. It is the raw material from which intelligence is built. But we are entering a new phase where data alone is no longer enough. To unlock the next frontier, we must pair high-quality data with environments that allow limitless interaction, continuous feedback, and learning through action. RL environments don’t replace data; they amplify what data can do by enabling models to apply knowledge, test hypotheses, and refine behaviors in realistic settings.

## How an RL Environment Works

In an RL environment, the model learns through a simple loop: it observes the state of the world, takes an action, and receives a reward that indicates whether that action helped accomplish a goal. Over many iterations, the model gradually discovers strategies that lead to better outcomes. The crucial shift is that training becomes interactive—models aren’t just predicting the next token but improving through trial, error, and feedback.

For example, language models can already generate code in a simple chat setting. Place them in a live coding environment—where they can ingest context, run their code, debug errors, and refine their solution—and something changes. They shift from advising to autonomously problem-solving. This distinction matters. In a software-driven world, the ability for AI to generate and test production-level code in vast repositories will mark a major change in capability.
That leap won’t come solely from larger datasets; it will come from immersive environments where agents can experiment, stumble, and learn through iteration—much like human programmers do. The real world of development is messy: Coders have to deal with underspecified bugs, tangled codebases, vague requirements. Teaching AI to handle that mess is the only way it will ever graduate from producing error-prone attempts to generating consistent and reliable solutions.

## Can AI Handle the Messy Real World?

Navigating the internet is also messy. Pop-ups, login walls, broken links, and outdated information are woven throughout day-to-day browsing workflows. Humans handle these disruptions almost instinctively, but AI can only develop that capability by training in environments that simulate the web’s unpredictability. Agents must learn how to recover from errors, recognize and persist through user-interface obstacles, and complete multi-step workflows across widely used applications.

Some of the most important environments aren’t public at all. Governments and enterprises are actively building secure simulations where AI can practice high-stakes decision-making without real-world consequences. Consider disaster relief: It would be unthinkable to deploy an untested agent in a live hurricane response. But in a simulated world of ports, roads, and supply chains, an agent can fail a thousand times and gradually get better at crafting the optimal plan.

Every major leap in AI has relied on unseen infrastructure, such as annotators labeling datasets, researchers training reward models, and engineers building scaffolding for LLMs to use tools and take action. Finding large-volume and high-quality datasets was once the bottleneck in AI, and solving that problem sparked the previous wave of progress. Today, the bottleneck is not data—it’s building RL environments that are rich, realistic, and truly useful.

The next phase of AI progress won’t be an accident of scale. It will be the result of combining strong data foundations with interactive environments that teach machines how to act, adapt, and reason across messy real-world scenarios. Coding sandboxes, OS and browser playgrounds, and secure simulations will turn prediction into competence.
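
The observe-act-reward loop described above maps directly onto a few lines of code. Here is a minimal, self-contained sketch in Python; the toy environment, reward, and random placeholder policy are illustrative assumptions, not any lab’s actual training stack:

```python
import random

# Minimal sketch of an RL environment's observe-act-reward loop.
# The "environment" here is a toy stand-in; real RL environments wrap
# things like code repositories, browser sessions, or logistics simulators.
class ToyEnvironment:
    def __init__(self) -> None:
        self.state = 0

    def reset(self) -> int:
        self.state = 0
        return self.state

    def step(self, action: int) -> tuple[int, float, bool]:
        """Apply an action and return (next_state, reward, done)."""
        self.state += action
        done = self.state == 5            # goal: reach state 5
        reward = 1.0 if done else -0.1    # small penalty per step, bonus at the goal
        return self.state, reward, done

def policy(state: int) -> int:
    # Placeholder policy: in a real system this is the model being trained.
    return random.choice([0, 1])

env = ToyEnvironment()
state = env.reset()
for _ in range(100):                      # the trial-and-error loop
    action = policy(state)
    state, reward, done = env.step(action)
    # A learner would update the policy here using (state, action, reward).
    if done:
        state = env.reset()
```

Swapping the toy `step` function for a code repository, a browser session, or a supply-chain simulator, and the random policy for the model being trained, gives the kind of environment the essay describes.
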
December 1, 2025 at 5:49 PM
This Toy Electric Stove Was Dangerously Realistic
Introduced in 1930 by Lionel Corp.—better known for its electric model trains—the fully functional toy stove shown at top had two electric burners and an oven that heated to 260 °C. It came with a set of cookware, including a frying pan, a pot with lid, a muffin tin, a tea kettle, and a wooden potato masher. I would have also expected a spoon, whisk, or spatula, but maybe most girls already had those. Just plug in the toy, and housewives-in-training could mimic their mothers frying eggs, baking muffins, or boiling water for tea.

## A brief history of toy stoves

Even before electrification, cast-iron toy stoves had become popular in the mid-19th century. At first fueled by coal or alcohol and later by oil or gas, these toy stoves were scaled-down working equivalents of the real thing. Girls could use their stoves along with a toy waffle iron or small skillet to whip up breakfast. If that wasn’t enough fun, they could heat up a miniature flatiron and iron their dolls’ clothes. Designed to help girls understand their domestic duties, these toys were the gendered equivalent of their brothers’ toy steam engines.

If you’re thinking fossil-fuel-powered “educational toys” are a recipe for disaster, you are correct. Many children suffered serious burns and sometimes death by literally playing with fire. Then again, people in the 1950s thought playing with uranium was safe. When electric toy stoves came on the scene in the 1910s, things didn’t get much safer, as the new entrants also lacked basic safety features. The burners on the 1930 Lionel range, for example, could only be turned off or on, but at least kids weren’t cooking over an open flame. At 86 centimeters tall, the Lionel range was also significantly larger than its more diminutive predecessors. Just the right height for young children to cook standing up.

Western Electric’s Junior Electric Range was demonstrated at an expo in 1915 in New York City. The Strong

Well before the Lionel stove, the Western Electric Co. had a cohort of girls demonstrating its Junior Electric Range at the Electrical Exposition held in New York City in 1915. The Junior Electric held its own in a display of regular sewing-machine motors, vacuum cleaners, and electric washing machines. The Junior Electric stood about 30 cm tall with six burners and an oven. The electric cord plugged into a light fixture socket. Children played with it while sitting on the floor or as it sat on a table. A visitor to the Expo declared the miniature range “the greatest electrical novelty in years.” Cooking by electricity in any form was still innovative—George A. Hughes had introduced his eponymous electric range just five years earlier. When the Junior Electric came along, less than a third of U.S. households had been wired for electric lights.

## How electricity turned cooking into a science

One reason to give little girls working toy stoves was so they could learn how to differentiate between a hot flame and low heat and get a feel for cooking without burning the food. These are skills that come with experience. Directions like “bake until done in a moderate oven,” a common line in 19th-century recipes, require a lot more tacit knowledge than is needed to, say, throw together a modern boxed brownie mix. The latter comes with detailed instructions and assumes you can control your oven temperature to within a few degrees. That type of precision simply didn’t exist in the 19th century, in large part because it was so difficult to calibrate wood- or coal-burning appliances.
Girls needed to start young to master these skills by the time they married and were expected to handle the household cooking on their own.

Electricity changed the game. In his comparison of “fireless cookers,” an engineer named Percy Wilcox Gumaer exhaustively tested four different electric ovens and then presented his findings at the 32nd Annual Convention of the American Institute of Electrical Engineers (a forerunner of today’s IEEE) on 2 July 1915. At the time, metered electricity was more expensive than gas or coal, so Gumaer investigated the most economical form of cooking with electricity, comparing different approaches such as longer cooking at low heat versus faster cooking in a hotter oven, the effect of heat loss when opening the oven door, and the benefits of searing meat on the stovetop versus in the oven before making a roast.

Gumaer wasn’t starting from scratch. Similar to how Yoshitada Minami needed to learn the ideal rice recipe before he could design an automatic rice cooker, Gumaer decided that he needed to understand the principles of roasting beef. Minami had turned to his wife, Fumiko, who spent five years researching and testing variations of rice cooking. Gumaer turned to the work of Elizabeth C. Sprague, a research assistant in nutrition investigations at the University of Illinois, and H.S. Grindley, a professor of general chemistry there. In their 1907 publication “A Precise Method of Roasting Beef,” Sprague and Grindley had defined qualitative terms like medium rare and well done by precisely measuring the internal temperature in the center of the roast. They concluded that beef could be roasted at an oven temperature between 100 and 200 °C.

Continuing that investigation, Gumaer tested 22 roasts at 100, 120, 140, 160, and 180 °C, measuring the time they took to reach rare, medium rare, and well done, and calculating the cost per kilowatt-hour. He repeated his tests for biscuits, bread, and sponge cake. In case you’re wondering, Gumaer determined that cooking with electricity could be a few cents cheaper than other methods if you roasted the beef at 120 °C instead of 180 °C. It’s also more cost-effective to sear beef on the stovetop rather than in the oven. Biscuits tasted best when baked at 200 to 240 °C, while sponge cake was best between 170 and 200 °C. Bread was better at 180 to 240 °C, but too many other factors affected its quality. In true electrical engineering fashion, Gumaer concluded that “it is possible to reduce the art of cooking with electricity to an exact science.”

## Electric toy stoves as educational tools

This semester, I’m teaching an introductory class on women’s and gender studies, and I told my students about the Lionel toy oven. They were horrified by the inherent danger. One incredulous student kept asking, “This is real? This is not a joke?” Instead of learning to cook with a toy that could heat to 260 °C, many of us grew up with the Easy-Bake Oven. The 1969 model could reach about 177 °C with its two 100-watt incandescent light bulbs. That was still hot enough to cause burns, but somehow it seemed safer. (Since 2011, Easy-Bakes have used a heating element instead of lightbulbs.)

The Queasy Bake Cookerator, designed to whip up “gross-looking, great-tasting snacks,” was marketed to boys. The Strong

The Easy-Bake I had wasn’t particularly gendered. It was orange and brown and meant to look like a different new-fangled appliance of the day, the microwave oven.
But by the time my students were playing with Easy-Bake Ovens, the models were in the girly hues of pink and purple. In 2002, Hasbro briefly tried to lure boys by releasing the Queasy Bake Cookerator, which the company marketed with disgusting-sounding foods like Chocolate Crud Cake and Mucky Mud. The campaign didn’t work, and the toy was soon withdrawn.

Similarly, Lionel’s electric toy range didn’t last long on the market. Launched in 1930, it had been discontinued by 1932, but that may have had more to do with timing. The toy cost US $29.50, the equivalent of a men’s suit, a new bed, or a month’s rent. In the midst of a global depression, the toy stove was an extravagance. Lionel reverted to selling electric trains to boys.

My students discussed whether cooking is still a gendered activity. Although they agreed that meal prep disproportionately falls on women even now, they acknowledged the rise of the male chef and credited televised cooking shows with closing the gender gap. To our surprise, we discovered that one of the students in the class, Haley Mattes, competed in and won __Chopped Junior__ as a 12-year-old. Haley had a play kitchen as a kid that was entirely fake: fake food, fake pans, fake utensils. She graduated to the Easy-Bake Oven, but really got into cooking the same way girls have done for centuries, by learning beside her grandmas.

_Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology._

_An abridged version of this article appears in the December 2025 print issue as “Too Hot to Handle.”_

### References

I first came across a description of Western Electric’s Junior Electric Range in “The Latest in Current Consuming Devices,” in the November 1915 issue of _Electrical Age._ The Strong National Museum of Play, in Rochester, N.Y., has a large collection of both cast-iron and electric stoves. The Strong also published two blog posts that highlighted Lionel’s toy: “Kids and Cooking” and “Lionel for Ladies?” Although Ron Hollander’s _All Aboard! The Story of Joshua Lionel Cowen & His Lionel Train Company_ (Workman Publishing, 1981) is primarily about toy trains, it includes a few details about how Lionel marketed its electric toy stove to girls.
December 1, 2025 at 7:16 AM
Video Friday: Disney’s Robotic Olaf Makes His Debut
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at _IEEE Spectrum_ robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

##### SOSV Robotics Matchup: 1–5 December 2025, ONLINE

##### ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

> _Step behind the scenes with Walt Disney Imagineering Research & Development and discover how Disney uses robotics, AI, and immersive technology to bring stories to life! From the brand new self-walking Olaf in World of Frozen and BDX Droids to cutting-edge attractions like Millennium Falcon: Smugglers Run, see how magic meets innovation._

[Disney Experiences]

> _We just released a new demonstration of Mentee’s V3 humanoid robots completing a real-world logistics task together. Over an uninterrupted 18-minute run, the robots autonomously move 32 boxes from eight piles to storage racks of different heights. The video shows steady locomotion, dexterous manipulation, and reliable coordination throughout the entire task._

And there’s an uncut 18-minute version of this at the link.

[MenteeBot]

Thanks, Yovav!

> _This video contains graphic depictions of simulated injuries. Viewer discretion is advised._
>
> _In this immersive overview, guided by the DARPA Triage Challenge program manager, retired Army Col. Jeremy C. Pamplin, M.D., you’ll experience how teams of innovators, engineers, and DARPA are redefining the future of combat casualty care. Be sure to look all around! Check out competition runs, behind-the-scenes of what it takes to put on a DARPA Challenge, and glimpses into the future of lifesaving care._

Those couple of minutes starting at 6:50 with the human medic and robotic teaming were particularly cool.

[DARPA]

You don’t need to build a humanoid robot if you can just make existing humanoids a lot better. I especially love 0:45 because you know what? Humanoids should spend more time sitting down, for all kinds of reasons. And of course, thank you for falling and getting up again, albeit on some of the squishiest grass on the planet.

[Flexion]

“Human-in-the-Loop Gaussian Splatting” wins best paper title of the week.

[Paper] via _IEEE Robotics and Automation Letters_ in [IEEE Xplore]

Scratch that, “Extremum Seeking Controlled Wiggling for Tactile Insertion” wins best paper title of the week.

[University of Maryland PRG]

The battery swapping on this thing is... Unfortunate.

[LimX Dynamics]

> _To push the boundaries of robotic capability, researchers in the Department of Mechanical Engineering at Carnegie Mellon University, in collaboration with the University of Washington and Google DeepMind, have developed a new tactile sensing system that enables four-legged robots to carry unsecured, cylindrical objects on their backs. This system, known as LocoTouch, features a network of tactile sensors that spans the robot’s entire back. As an object shifts, the sensors provide real-time feedback on its position, allowing the robot to continuously adjust its posture and movement to keep the object balanced._

[Carnegie Mellon University]

This robot is in more need of googly eyes than any other robot I’ve ever seen.

[Zarrouk Lab]

> _DPR Construction has deployed Field AI’s autonomy software on a quadruped robot at the company’s job site in Santa Clara, CA, to greatly improve its daily surveying and data collection processes.
By automating what has traditionally been a very labor-intensive and time-consuming process, Field AI is helping the DPR team operate more efficiently and effectively, while increasing project quality._

[Field AI]

> _In our second episode of AI in Motion, our host, Waymo AI researcher Vincent Vanhoucke, talks with robotics startup founder Sergey Levine, who left a career in academic research to build better robots for the home and workplace._

[Waymo]
November 29, 2025 at 8:19 PM
The Biggest Causes of Medical Device Recalls
According to U.S. Food and Drug Administration records, in an average year over 2,500 medical device recalls are issued in the United States. Some of these recalls simply require checking the device for problems, but others require the return or destruction of the device. Once the root cause of a recall is identified, the FDA sorts it into one of 40 categories, plus a catchall of “other”: situations that include labeling mix-ups, problems with expiration dates, and counterfeiting.

What’s shown here is the breakdown of the five biggest problem categories found among the 56,000 entries in the FDA medical-recall database, which stretches back to 2002: device design, process control (meaning an error in the device’s manufacturing process), nonconforming material/component (meaning something does not meet required specifications), software issues, and packaging. Software issues are broken down into six root causes, with software design far and away the biggest problem. The other five are, in order: change control; software design changes; software manufacturing or deployment problems; software design issues in the manufacturing process; and software in the “use environment.” That last one includes cybersecurity issues, or problems with supporting software, such as a smartphone app.

_This article appears in the December 2025 print issue as “Medical Device Recalls.”_
November 29, 2025 at 8:19 PM
EPICS in IEEE Funds Record-Breaking Number of Student Projects
The EPICS (Engineering Projects in Community Service) in IEEE initiative had a record year in 2025, funding 48 projects involving nearly 1,000 students from 17 countries. The IEEE Educational Activities program approved more projects this year than ever before, distributing US $290,000 in funding and engaging more students than ever in innovative, hands-on engineering work.

The program offers students opportunities to engage in service learning and collaborate with engineering professionals and community organizations to develop solutions that address local community challenges. The IEEE groups that undertake the projects encompass student branches, sections, society chapters, and affinity groups including Women in Engineering and Young Professionals. EPICS in IEEE provides funding up to $10,000, along with resources and mentorship, for projects focused on four key areas of community improvement: education and outreach, environment, access and abilities, and human services.

This year, EPICS partnered with five IEEE societies and the IEEE Standards Association on 23 of the 48 approved projects. The Antennas and Propagation Society supported three, the Industry Applications Society (IAS) funded nine, the Instrumentation and Measurement Society (IMS) sponsored five, the Robotics and Automation Society supported two, the Solid State Circuits Society (SSCS) provided funding for three, and the IEEE Standards Association sponsored one. The stories of the partner-funded projects demonstrate the impact the projects have on the students and their communities.

## Matoruco agroecological garden

The IAS student branch at the Universidad Pontificia Bolivariana in Colombia worked on a project that involved water storage, automated irrigation, and waste management. The goal was to transform the Matoruco agroecological garden at the Institución Educativa Los Garzones into a more lively, sustainable, and accessible space.

These EPICS in IEEE team members from the Universidad Pontificia Bolivariana in Colombia are configuring a radio communications network that will send data to an online dashboard showing the solar power usage, pump status, and soil moisture for the Matoruco agroecological garden at the Institución Educativa Los Garzones. EPICS in IEEE

By using an irrigation automation system, electric pump control, and soil moisture monitoring, the team aimed to show how engineering concepts combine academic knowledge and practical application. The initiative uses monocrystalline solar panels for power, a programmable logic controller to automatically manage pumps and valves, soil moisture sensors for real-time data, and a LoRa One network (a proprietary radio communication system based on spread-spectrum modulation) to send data to an online dashboard showing solar power usage, pump status, and soil moisture.

Los Garzones preuniversity students were taught about the irrigation system through hands-on projects, received training on organic waste management from university students, and participated in installation activities. The university team also organizes garden cleanup events to engage younger students with the community garden.

“We seek to generate a true sense of belonging by offering students and faculty a gathering place for hands-on learning and shared responsibility,” says Rafael Gustavo Ramos Noriega, the team lead and fourth-year electronics engineering student.
“By integrating technical knowledge with fun activities and training sessions, we empower the community to keep the garden alive and continue improving it.

“This project has been an unmatched platform for preparing me for a professional career,” he added. “By leading everything from budget planning to the final installation, I have experienced firsthand all the stages of a real engineering project: scope definition, resource management, team coordination, troubleshooting, and delivering tangible results. All of this reinforces my goal of dedicating myself to research and development in automation and embedded systems and contributing innovation in the agricultural and environmental sectors to help more communities and make my mark.”

The project received $7,950 from IAS.

Students give a tour of the systems they installed at the Matoruco agroecological garden.

## A smart braille system

More than 1.5 million individuals in Pakistan are blind, including thousands of children who face barriers to accessing essential learning resources, according to the International Agency for the Prevention of Blindness. To address the need for accessible learning tools, a student team from the Mehran University of Engineering and Technology (MUET) and the IEEE Karachi Section created BrailleGenAI: Empowering Braille Learning With Edge AI and Voice Interaction.

The interactive system for blind children combines edge artificial intelligence, generative AI, and embedded systems, says Kainat Fizzah Muhammad, a project leader and electrical engineering student at MUET. The system uses a camera to recognize tactile braille blocks and provide real-time audio feedback via text-to-speech technology. It includes gamified modules designed to support literacy, numeracy, logical reasoning, and voice recognition.

The team partnered with the Hands Welfare Foundation, a nonprofit in Pakistan that focuses on inclusive education, disability empowerment, and community development. The team also collaborated with the Ida Rieu School, part of the Ida Rieu Welfare Association, which serves the visually and hearing impaired. “These partnerships have been instrumental in helping us plan outreach activities, gather input from experts and caregivers, and prepare for usability testing across diverse environments,” says Attiya Baqai, a professor in the MUET electronic engineering department. Support from the Hands foundation ensured the solution was shaped by the real-world needs of the visually impaired community.

SSCS provided $9,155 in funding.

The student team shows how the smart braille system they developed works.

## Tackling air pollution

Macedonia’s capital, Skopje, is among Europe’s most polluted cities, particularly in winter, due to thick smog caused by temperature changes, according to the World Health Organization. The WHO reports that the city’s air contains particles that can cause health issues without early warning signs—known as silent killers.

A team at Sts. Cyril and Methodius University created a system to measure and publicize local air pollution levels through its What We Breathe project. It aims to raise awareness and improve health outcomes, particularly among the city’s children. “Our goal is to provide people with information on current pollution levels so they can make informed decisions regarding their exposure and take protective measures,” says Andrej Ilievski, an IEEE student member majoring in computer hardware engineering and electronics.
“We chose to focus on schools first because children’s lungs and immune systems are still developing, making them one of our population’s most vulnerable demographics.”

The project involved 10 university students working with high schools, faculty, and the Society of Environmental Engineers of Macedonia to design and build a sensing and display tool that communicates via the Internet.

“By leading everything from budget planning to the final installation, I have experienced firsthand all the stages of a real engineering project: scope definition, resource management, team coordination, troubleshooting, and delivering tangible results.” **—Rafael Gustavo Ramos Noriega**

“Our sensing unit detects particulate matter, temperature, and humidity,” says project leader Josif Kjosev, an electronics professor at the university. “It then transmits that data through a Wi-Fi connection to a public server every 5 minutes, while our display unit retrieves the data from the server.”

“Since deploying the system,” Ilievski says, “everyone on the team has been enthusiastic about how well the project connects with their high school audience.” The team says it hopes students will continue to work on new versions of the devices and provide them to other interested schools in the area.

“For most of my life, my academic success has been on paper,” Ilievski says. “But thanks to our EPICS in IEEE project, I finally have a real, physical object that I helped create.

“We’re grateful for the opportunity to make this project a reality and be part of something bigger.”

The project received $8,645 from the IMS.

## Society partnerships count

Thanks to partnerships with IEEE societies, EPICS can provide more opportunities to students around the world. The program also includes mentors from societies and travel grants for conferences, enhancing the student experience. The collaborations motivate students to apply technologies in the IEEE societies’ areas of interest to real-world problems, helping them improve their communities and fostering continued engagement with the society and IEEE. You can learn how to get involved with EPICS by visiting its website.
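
As a rough sketch of the report-every-five-minutes pattern Kjosev describes for the What We Breathe sensing unit, here is a minimal Python loop. The server URL, field names, and sensor readings are placeholders invented for illustration; they are not the project’s actual firmware or API:

```python
import time
import requests  # assumes the third-party 'requests' package is installed

SERVER_URL = "https://example.org/api/readings"  # placeholder, not the project's endpoint

def read_sensors() -> dict:
    # Placeholder values: a real unit would query its particulate-matter,
    # temperature, and humidity sensors here.
    return {"pm2_5": 12.0, "temperature_c": 4.5, "humidity_pct": 71.0}

while True:
    reading = read_sensors()
    try:
        requests.post(SERVER_URL, json=reading, timeout=10)  # sensing unit uploads
    except requests.RequestException:
        pass  # keep sampling even if the network drops
    time.sleep(300)  # every 5 minutes, as in the deployment described above
```
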
November 29, 2025 at 9:39 AM
Citizens of Smart Cities Need a Way to Opt Out
For years, Gwen Shaffer has been leading Long Beach, Calif., residents on “data walks,” pointing out public Wi-Fi routers, security cameras, smart water meters, and parking kiosks. The goal, according to the professor of journalism and public relations at California State University, Long Beach, was to learn how residents felt about the ways in which their city collected data on them.

### Gwen Shaffer

Gwen Shaffer is a professor of journalism and public relations at California State University, Long Beach. She is the principal investigator on a National Science Foundation–funded project aimed at providing Long Beach residents with greater agency over the personal data their city collects.

She also identified a critical gap in smart city design today: While cities may disclose how they collect data, they rarely offer ways to opt out. Shaffer spoke with __IEEE Spectrum__ about the experience of leading data walks, and about her research team’s efforts to give citizens more control over the data collected by public technologies.

**What was the inspiration for your data walks?**

**Gwen Shaffer:** I began facilitating data walks in 2021. I was studying residents’ comfort levels with city-deployed technologies that collect personally identifiable information. My first career as a political reporter has influenced my research approach. I feel strongly about conducting applied rather than theoretical research. And I always go into a study with the goal of helping to solve a real-world challenge and inform policy.

**How did you organize the walks?**

**Shaffer:** We posted data privacy labels with a QR code that residents can scan and find out how their data are being used. Downtown, they’re in Spanish and English. In Cambodia Town, we did them in Khmer and English.

**What happened during the walks?**

**Shaffer:** I’ll give you one example. In a couple of the city-owned parking garages, there are automated license-plate readers at the entrance. So when I did the data walks, I talked to our participants about how they feel about those scanners. Because once they have your license plate, if you’ve parked for fewer than two hours, you can breeze right through. You don’t owe money. Responses were contextual and sometimes contradictory. There were residents who said, “Oh, yeah. That’s so convenient. It’s a time saver.” So I think that shows how residents are willing to make trade-offs. Intellectually, they hate the idea of the privacy violation, but they also love convenience.

**What surprised you most?**

**Shaffer:** One of the participants said, “When I go to the airport, I can opt out of the facial scan and still be able to get on the airplane. But if I want to participate in so many activities in the city and not have my data collected, there’s no option.” There was a cyberattack against the city in November 2023. Even though we didn’t have a prompt asking about it, people brought it up on their own in almost every focus group. One said, “I would never connect to public Wi-Fi, especially after the city of Long Beach’s site was hacked.”

**What is the app your team is developing?**

**Shaffer:** Residents want agency. So that’s what led my research team to connect with privacy engineers at Carnegie Mellon University, in Pittsburgh. Norman Sadeh and his team had developed what they called the IoT Assistant. So I told them about our project, and proposed adapting their app for city-deployed technologies.
Our plan is to give residents the opportunity to exercise their rights under the California Consumer Privacy Act with this app. So they could say, “Passport Parking app, delete all the data you’ve already collected on me. And don’t collect any more in the future.”

_This article appears in the December 2025 print issue as “Gwen Shaffer.”_
November 29, 2025 at 9:40 AM
3 Weird Things You Can Turn Into a Memristor
From the honey in your tea to the blood in your veins, materials all around you have a hidden talent. Some of these substances, when engineered in specific ways, can act as memristors—electrical components that can “remember” past states.

Memristors are often used in chips that both perform computations and store data. They are devices that store data as particular levels of resistance. Today, they are constructed as a thin layer of titanium dioxide or similar dielectric material sandwiched between two metal electrodes. Applying enough voltage to the device causes tiny regions in the dielectric layer—where oxygen atoms are missing—to form filaments that bridge the electrodes or otherwise move in a way that makes the layer more conductive. Reversing the voltage undoes the process. Thus, the process essentially gives the memristor a memory of past electrical activity.

Last month, while exploring the electrical properties of fungi, a group at The Ohio State University found firsthand that some organic memristors have benefits beyond those made with conventional materials. Not only can shiitake act as a memristor, for example, but it may be useful in aerospace or medical applications because the fungus demonstrates high levels of radiation resistance. The project “really mushroomed into something cool,” lead researcher John LaRocco says with a smirk.

Researchers have learned that other unexpected materials may give memristors an edge. They may be more flexible than typical memristors or even biodegradable. Here’s how they’ve made memristors from strange materials, and the potential benefits these odd devices could bring:

## Mushrooms

LaRocco and his colleagues were searching for a proxy for brain circuitry to use in electrical stimulation research when they stumbled upon something interesting—shiitake mushrooms are capable of learning in a way that’s similar to memristors. The group set out to evaluate just how well shiitake can remember electrical states by first cultivating nine samples and curating optimal growing conditions, including feeding them a mix of farro, wheat, and hay. Once fully matured, the mushrooms were dried and rehydrated to a level that made them moderately conductive. In this state, the fungi’s structure includes conductive pathways that emulate the oxygen vacancies in commercial memristors. The scientists plugged them into circuits and put them through voltage, frequency, and memory tests. The result? Mushroom memristors.

It may smell “kind of funny,” LaRocco says, but shiitake performs surprisingly well when compared to conventional memristors. Around 90 percent of the time, the fungus maintains ideal memristor-like behavior for signals up to 5.85 kilohertz. While traditional materials can function at frequencies orders of magnitude faster, these numbers are notable for biological materials, he says.

What fungi lack in performance, they may make up for in other properties. For one, many mushrooms—including shiitake—are highly resistant to radiation and other environmental dangers. “They’re growing in logs in Fukushima and a lot of very rough parts of the world, so that’s one of the appeals,” LaRocco says. Shiitake are also an environmentally friendly option that’s already commercialized. “They’re already cultured in large quantities,” LaRocco explains. “One could simply leverage existing logistics chains” if the industry wanted to commercialize mushroom memristors.
The use cases for this product would be niche, he thinks, and would center around the radiation resistance that shiitake boasts. Mushroom GPUs are unlikely, LaRocco says, but he sees potential for aerospace and medical applications.

## Honey

In 2022, engineers at Washington State University interested in green electronics set out to study whether honey could serve as a good memristor. “Modern electronics generate 50 million tons of e-waste annually, with only about 20 percent recycled,” says Feng Zhao, who led the work and is now at Missouri University of Science and Technology. “Honey offers a biodegradable alternative.”

The researchers first blended commercial honey with water and stored it in a vacuum to remove air bubbles. They then spread the mixture on a piece of copper, baked the whole stack at 90 °C for nine hours to stabilize it, and, finally, capped it with circular copper electrodes on top—completing the honey-based memristor sandwich. The resulting 2.5-micrometer-thick honey layer acted like the oxide dielectric in conventional memristors: a place for conductive pathways to form and dissolve, changing resistance with voltage. In this setup, when voltage is applied, copper filaments extend through the honey. The honey-based memristor was able to switch from low to high resistance in 500 nanoseconds and back to low in 100 nanoseconds, which is comparable to speeds in some non-food-based memristive materials.

One advantage of honey is that it’s “cheap and widely available, making it an attractive candidate for scalable fabrication,” Zhao says. It’s also “fully biodegradable and dissolves in water, showing zero toxic waste.” In the 2022 paper, though, the researchers note that for a honey-based device to be truly biodegradable, the copper components would need to be replaced with dissolvable metals. They suggest options like magnesium and tungsten, but also write that the performance of memristors made from these metals is still “under investigation.”

## Blood

Considering it a potential means of delivering healthcare, a group in India wondered in 2011, just three years after the first memristor was built, whether blood would make a good memristor. The experiments were pretty simple. The researchers filled a test tube with fresh, type O+ human blood and inserted two conducting wire probes. The wires were connected to a power supply, creating a complete circuit, and voltages of one, two, and three volts were applied in repeated steps. Then, to test the memristor qualities of blood as it exists in the human body, the researchers set up a “flow mode” that applied voltage to the blood as it flowed from a tube at up to one drop per second.

The experiments were preliminary and only measured current passing through the blood, but resistance could be set by applying voltage. Crucially, resistance changed by less than 10 percent in the 30-minute period after voltage was applied. In the __International Journal of Medical Engineering and Informatics__, the scientists wrote that, because of these observations, their contraption “looks like a human blood memristor.” They suggested that this knowledge could be useful in treating illness. Sick people may have ion imbalances in certain parts of their bodies—instead of prescribing medication, why not employ a circuit component made of human tissue to solve the problem? In recent years, blood-based memristors have been tested by other scientists as a means to treat conditions ranging from high blood sugar to nearsightedness.
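
The common thread in all three devices is a resistance that depends on the history of the applied voltage. The sketch below illustrates that behavior with the classic linear ion-drift memristor model rather than with any of the mushroom, honey, or blood devices above; all parameter values are illustrative assumptions:

```python
# Linear ion-drift memristor model (after the 2008 HP Labs formulation).
# An internal state variable w sets the resistance; driving current moves w,
# and w (hence the resistance) stays put when the voltage is removed.
R_ON, R_OFF = 100.0, 16_000.0    # ohms: fully "on" vs. fully "off"
D = 10e-9                        # device thickness, meters
MU_V = 1e-14                     # ion mobility, m^2 / (V*s)
DT = 1e-5                        # simulation time step, seconds

def memristance(w: float) -> float:
    return R_ON * (w / D) + R_OFF * (1 - w / D)

def apply_voltage(w: float, volts: float, seconds: float) -> float:
    for _ in range(int(seconds / DT)):
        i = volts / memristance(w)       # current through the device
        w += MU_V * (R_ON / D) * i * DT  # state drifts with the current
        w = min(max(w, 0.0), D)          # clamp to physical bounds
    return w

w = 0.1 * D  # start mostly "off" (high resistance)
print(f"Before writing:       {memristance(w):,.0f} ohms")
w = apply_voltage(w, 5.0, 0.2)   # a "write" pulse pushes the state toward R_ON
print(f"After +5 V pulse:     {memristance(w):,.0f} ohms")
w = apply_voltage(w, 0.0, 0.2)   # with no voltage, the state and resistance hold
print(f"After resting at 0 V: {memristance(w):,.0f} ohms")
```

In a real device the write pulse grows or dissolves conductive filaments; here the state variable `w` plays that role, which is what lets the component store a value as a resistance level.
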
spectrum.ieee.org
November 27, 2025 at 8:12 PM
For This Engineer, Taking Deep Dives Is Part of the Job
Early in Levi Unema’s career as an electrical engineer, he was presented with an unusual opportunity. While working on assembly lines at an automotive parts supplier in 2015, he got a surprise call from his high-school science teacher that set him off on an entirely new path: piloting underwater robots to explore the ocean’s deepest abysses. That call came from Harlan Kredit, a nationally renowned science teacher and board member of a Rhode Island-based nonprofit called the Global Foundation for Ocean Exploration (GFOE). The organization was looking for an electrical engineer to help design, build, and pilot remotely operated vehicles (ROVs) for the U.S. National Oceanic and Atmospheric Administration. ### Levi Unema **Employer** Deep Exploration Solutions **Occupation** ROV engineer **Education** Bachelor’s degree in electrical engineering, Michigan Technological University This was an exciting break for Unema, a Washington state native who had grown up tinkering with electronics and exploring the outdoors. Unema joined the team in early 2016 and has since helped develop and operate deep-sea robots for scientific expeditions around the globe. The GFOE’s contract with NOAA expired in July, forcing the engineering team to disband. But soon after, Unema teamed up with four former colleagues to start their own ROV consultancy, called Deep Exploration Solutions, to continue the work he’s so passionate about. “I love the exploration and just seeing new things every day,” he says. “And the engineering challenges that go along with it are really exciting, because there’s a lot of pressure down there and a lot of technical problems to solve.” ## Nature and Technology Unema’s fascination with electronics started early. Growing up in Lynden, Wash., he took apart radios, modified headphones, and hacked together USB chargers from AA batteries. “I’ve always had to know how things work,” he says. He was also a Boy Scout, and much of his youth was spent hiking, camping, and snowboarding. That love of both technology and nature can be traced back, at least in part, to his parents—his father was a civil engineer, and his mother was a high-school biology teacher. But another major influence growing up was Kredit, the science teacher who went on to recruit him. (Kredit was also a colleague of Unema’s mother.) Kredit has won numerous awards for his work as an educator, including the Presidential Award for Excellence in Science Teaching in 2004. Like Unema, he loves the outdoors: He is Yellowstone National Park’s longest-serving park ranger. “He was an excellent science teacher, very inspiring,” says Unema. When Unema graduated high school in 2010, he decided to enroll at his father’s alma mater, Michigan Technological University, to study engineering. He was initially unsure which discipline to follow and signed up for the general engineering course, but he quickly settled on electrical engineering. A summer internship at a steel mill run by the multinational corporation ArcelorMittal introduced Unema to factory automation and assembly lines. After graduating in 2014, he took a job at Gentex Corp. in Zeeland, Mich., where he worked on manufacturing systems and industrial robotics. ## Diving Into Underwater Robotics In late 2015, he got the call from Kredit asking if he’d be interested in working on underwater robots for GFOE. The role involved not just engineering these systems, but also piloting them.
Taking the plunge was a difficult choice, says Unema, as he’d just been promoted at Gentex. But the promise of travel combined with the novel engineering challenges made it too good an opportunity to turn down. Building technology that can withstand the crushing pressure at the bottom of the ocean is tough, he says, and you have to make trade-offs between weight, size, and cost. Everything has to be waterproof, and electronics have to be carefully isolated to prevent them from grounding on the ocean floor. Some components are pressure-tolerant, but most must be stored in pressurized titanium flasks, so the components must be extremely small to minimize the size of the metallic housing. Unema conducts predive checks from the Okeanos Explorer’s control room. Once the ROV is launched, scientists will watch the camera feeds and advise his team where to direct the vehicle.Art Howard “You’re working very closely with the mechanical engineer to fit the electronics in a really small space,” he says. “The smaller the cylinder is, the cheaper it is, but also the less mass on the vehicle. Every bit of mass means more buoyancy is required, so you want to keep things small, keep things light.” Communications are another challenge. The ROVs rely on several kilometers of cable containing just three single-mode optical fibers. “All the communication needs to come together and then go up one cable,” Unema says. “And every year new instruments consume more data.” He works exclusively on ROVs that are custom made for scientific research, which require smoother control and considerably more electronics and instrumentation than the heavier-duty vehicles used by the oil and gas industry. “The science ones are all hand-built, they’re all quirky,” he says. Unema’s role spans the full life cycle of an ROV’s design, construction, and operation. He primarily spends winters upgrading and maintaining vehicles and summers piloting them on expeditions. At GFOE, he mainly worked on two ROVs for NOAA called __Deep Discoverer__ and __Seirios__ , which operate from the ship __Okeanos Explorer__. But he has also piloted ROVs for other organizations over the years, including the Schmidt Ocean Institute and the Ocean Exploration Trust. Unema’s new consultancy, Deep Exploration Solutions, has been given a contract to do the winter maintenance on the NOAA ROVs, and the firm is now on the lookout for more ROV design and upgrade work, as well as piloting jobs. ## An Engineer’s Life at Sea On expeditions, Unema is responsible for driving the robot. He follows instructions from a science team that watches the ROV’s video feed to identify things like corals, sponges, or deepwater creatures that they’d like to investigate in more detail. Sometimes he will also operate hydraulic arms to sample particularly interesting finds. In general, the missions are aimed at discovering new species and mapping the range of known ones, says Unema. “There’s a lot of the bottom of the ocean where we don’t know anything about it,” he says. “Basically every expedition there’s some new species.” This involves being at sea for weeks at a time. Unema says that life aboard ships can be challenging—many new crew members get seasick, and you spend almost a month living in close quarters with people you’ve often never met before. But he enjoys the opportunity to meet colleagues from a wide variety of backgrounds who are all deeply enthusiastic about the mission. “It’s like when you go to scout camp or summer camp,” he says. “You’re all meeting new people. 
Everyone’s really excited to be there. We don’t know what we’re going to find.” Unema also relishes the challenge of solving engineering problems with the limited resources available on the ship. “We’re going out to the middle of the Pacific,” he says. “Things break, and you’ve got to fix them with what you have out there.” If that sounds more exciting than daunting, and you’re interested in working with ROVs, Unema’s main advice is to talk to engineers in the field. It’s a small but friendly community, he says, so just do your research to see what opportunities are available. Some groups, such as the Ocean Exploration Trust, also operate internships for college students to help them get experience in the field. And Unema says there are very few careers quite like it. “I love it because I get to do all aspects of engineering—from idea to operations,” he says. “To be able to take something I worked on and use it in the field is really rewarding.” _This article appears in the December 2025 print issue as “Levi Unema.”_
spectrum.ieee.org
November 27, 2025 at 8:12 PM
IEEE and Girl Scouts Are Working on Getting Girls Into STEM
The percentage of women working in science, technology, engineering, and math fields remains stubbornly low. Women made up 28 percent of the global STEM workforce last year, according to the World Economic Forum. IEEE and many other organizations conduct outreach programs targeting preuniversity girls and college-age women, and studies show that one of the most powerful ways to encourage girls to consider a STEM career is by introducing them to female role models in such fields. The exposure can provide the girls with insights, guidance, and advice on how to succeed in STEM. To provide a venue to connect young girls with members working in STEM, IEEE partnered with the Girl Scouts of the United States of America’s Heart of New Jersey (GSHNJ) council and its See Her, Be Her career exploration program. Now in its eighth year, the annual event—which used to be called What a G.I.R.L. Can Be—provides an opportunity for girls to learn about STEM careers by participating in hands-on activities, playing games, and asking questions of professionals at the exhibits. This year’s event was held in May at Stevens Institute of Technology, in Hoboken, N.J. Volunteers from the IEEE North Jersey Section and the IEEE Technical Activities Future Networks technical community were among the 30 exhibitors. More than 100 girls attended. “IEEE and the Girl Scouts share a view that STEM fields require a diversity of thought, experience, and backgrounds to be able to use technology to better the world,” says IEEE Member Craig Polk, senior program manager for the technical community. He helped coordinate the See Her, Be Her event. “We know that there’s a shortage of girls and women in STEM careers,” adds Johanna Nurjahan, girl experience manager for the Heart of New Jersey council. “We are really trying to create that pipeline, which is needed to ensure that the number of women in STEM tracks upward.” ## STEM is one of four pillars The Girl Scouts organization focuses on helping girls build courage, confidence, and character. The program is based on four pillars: life skills, outdoor skills, entrepreneurship, and STEM. “We offer girls a wide range of experiences that empower them to take charge of their future, explore their interests, and discover the joy of learning new skills,” Nurjahan says. “As they grow and progress through the program, they continue developing and refining skills that build courage, confidence, and character—qualities that prepare them to make the world a better place. Everything we do helps lay a strong foundation for leadership.” ## A fruitful collaboration The partnership between IEEE and the Girl Scouts began shortly before the COVID-19 pandemic hit the United States in 2020. Volunteers from IEEE sections worked with IEEE TryEngineering to bring resources to areas that had not historically been represented in STEM, Polk says. Trinity Zang, a laboratory manager at Howard Hughes Medical Institute in Essex County, N.J., shows a Girl Scout Brownie how to transfer liquid samples using pipettes. GSHNJ During that same period, the Girl Scouts were increasing their involvement in STEM-related programs. They worked with U.S. IEEE sections to conduct hands-on activities at schools. They also held career fairs and created STEM badges. The collaboration has grown since then. “IEEE has always been a fantastic partner,” Nurjahan says.
“They’re always willing to aid us as we work to get more girls engaged in STEM.” IEEE first got involved with the See Her, Be Her career fair in May 2024, which was also held at Stevens Tech. “Being able to introduce engineering and STEM to possible future innovators and leaders helps grow the understanding of how societal problems can be solved,” Polk says. “IEEE also benefits by having a new generation knowing who we are and what our charitable organization is doing to improve humanity through technology.” “See Her, Be Her gives girls the chance to see women leading in nontraditional careers and inspires them to dream bigger, challenge limits, and believe they can do anything they set their minds to,” Nurjahan says. “It’s about showing them that every path is open to them. They just have to go for it.” ## Making cloud computing fun One of the volunteers who participated in this year’s career fair was IEEE Senior Member Gautami Nadkarni. A cloud architect, she’s a senior customer engineer with Google in New York City. “I’m very passionate about diversity, equity, and inclusion and other such initiatives because I believe that was something I personally benefited from in my career,” Nadkarni says. “I had a lot of strong supporters and champions.” She says she was inspired to pursue a STEM career after attending a lecture given by a female professor from the Indian Institute of Technology, Bombay. “I remember being just so empowered and really inspired by her and thinking, Wow, there is someone who looks like me and is going places,” Nadkarni says. “When I look back, that was one of the moments that helped me shape who I am from a career standpoint.” IEEE Senior Member Gautami Nadkarni decorated her career fair booth with a cloud motif. Gautami Nadkarni She holds a master’s degree in management information systems from the State University of New York, Buffalo, and a bachelor’s degree in engineering from the Dwarkadas Jivanlal Sanghvi College of Engineering, in Mumbai. Her exhibit at the career fair was on cloud computing. She decorated her booth with a cloud motif and introduced herself to the youngsters as a “superhero for big companies” because she helps them keep their information safe and organized. She used child-friendly examples, explaining to the Girl Scouts that she teaches customers how to use supercomputers to better understand information and help them determine what kind of toys children want. “IEEE and the Girl Scouts share a view that STEM fields require a diversity of thought, experience, and backgrounds to be able to use technology to better the world.” **— Craig Polk** “I think cloud computing is still an untapped area,” she says. “There are a lot of people who probably don’t know a lot about cloud engineering. I wanted to create an awareness and an experience to show that it’s not boring, and show how they can use it in their day-to-day lives.” Her exhibit showcased the tasks cloud engineers handle. To describe the fundamentals of how data is stored, managed, and processed, she created a data-sorting exercise by having participants separate toy dinosaurs by color. As a way to explain the importance of data security, she made a puzzle that showed students how to protect valuable information. To demonstrate how AI can bring someone’s wild ideas to life, she taught them to use Google Cloud’s text-to-image model Imagen 3. The girls used their imaginations—which translated into AI-generated images including one of a dog riding a unicycle on a boat.
The girls also made audio messages using different voices. “The exhibitors who participate in the See Her, Be Her program provide inspiration,” Nurjahan says. “It’s inspiring to see the enthusiasm in the girls after meeting with exhibitors. Just a few minutes of engagement gives them a glimpse of their potential and sparks hope for the future, no matter what career they choose.”
spectrum.ieee.org
November 27, 2025 at 6:40 AM
TraffickCam Uses Computer Vision to Counter Human Trafficking
Abby Stylianou built an app that asks its users to upload photos of hotel rooms they stay in when they travel. It may seem like a simple act, but the resulting database of hotel room images helps Stylianou and her colleagues assist victims of human trafficking. Traffickers often post photos of their victims in hotel rooms as online advertisements, evidence that can be used to find the victims and prosecute the perpetrators of these crimes. But to use this evidence, analysts must be able to determine where the photos were taken. That’s where TraffickCam comes in. The app uses the submitted images to train an image search system currently in use by the U.S.-based National Center for Missing & Exploited Children (NCMEC), aiding in its efforts to geolocate posted images—a deceptively hard task. Stylianou, a professor at Saint Louis University, is currently working with Nathan Jacobs’ group at Washington University in St. Louis to push the model even further, developing multimodal search capabilities that allow for video and text queries. Stylianou on: * Her desire to help victims of abuse * How TraffickCam’s algorithm works * Why hotel rooms are tricky for recognition algorithms * The difference between image recognition and object recognition * How she evaluates TraffickCam’s success **Which came first, your interest in computers or your desire to help provide justice to victims of abuse, and how did they coincide?** **Abby Stylianou:** It’s a crazy story. I’ll go back to my undergraduate degree. I didn’t really know what I wanted to do, but I took a remote sensing class my second semester of senior year that I just loved. When I graduated, George Washington University professor Robert Pless (then at Washington University in St. Louis) hired me to work on a program called Finder. The goal of Finder was to say, if you have a picture and nothing else, how can you figure out where that picture was taken? My family knew about the work that I was doing, and [in 2013] my uncle shared an article in the St. Louis Post-Dispatch with me about a young murder victim from the 1980s whose case had run cold. [The St. Louis Police Department] never figured out who she was. What they had was pictures from the burial in 1983. They were wanting to do an exhumation of her remains to do modern forensic analysis, figure out what part of the country she was from. But they had exhumed the remains underneath her headstone at the cemetery and it wasn’t her. And they [dug up the wrong remains] two more times, at which point the medical examiner for St. Louis said, “You can’t keep digging until you have evidence of where the remains actually are.” My uncle sends this to me, and he’s like, “Hey, could you figure out where this picture was taken?” And so we actually ended up consulting for the St. Louis Police Department to take this tool we were building for geolocalization to see if we could find the location of this lost grave. We submitted a report to the medical examiner for St. Louis that said, “Here is where we believe the remains are.” And we were right. We were able to exhume her remains. They were able to do modern forensic analysis and figure out she was from the Southeast. We’ve still not figured out her identity, but we have a lot better genetic information at this point. For me, that moment was like, “This is what I want to do with my life. I want to use computer vision to do some good.” That was a tipping point for me. Back to top **So how does your algorithm work?
Can you walk me through how a user-uploaded photo becomes usable data for law enforcement?** **Stylianou:** There are two really key pieces when we think about AI systems today. One is the data, and one is the model you’re using to operate. For us, both of those are equally important. First is the data. We’re really lucky that there’s tons of imagery of hotels on the Internet, and so we’re able to scrape publicly available data in large volume. We have millions of these images that are available online. The problem with a lot of those images, though, is that they’re like advertising images. They’re perfect images of the nicest room in the hotel—they’re really clean, and that isn’t what the victim images look like. A victim image is often a selfie that the victim has taken themselves. They’re in a messy room. The lighting is imperfect. This is a problem for machine learning algorithms. We call it the domain gap. When there is a gap between the data that you trained your model on and the data that you’re running through at inference time, your model won’t perform very well. The idea behind building the TraffickCam mobile application was, in large part, to supplement that Internet data with data that actually looks more like the victim imagery. We built this app so that people, when they travel, can submit pictures of their hotel rooms specifically for this purpose. Those pictures, combined with the pictures that we have off the Internet, are what we use to train our model. **Then what?** **Stylianou:** Once we have a big pile of data, we train neural networks to learn to embed it. If you take an image and run it through your neural network, what comes out on the other end isn’t explicitly a prediction of what hotel the image came from. Rather, it’s a numerical representation [of image features]. What we have is a neural network that takes in images and spits out vectors—small numerical representations of those images—where images that come from the same place hopefully have similar representations. That’s what we then use in this investigative platform that we have deployed at [NCMEC]. We have a search interface that uses that deep learning model, where an analyst can put in their image, run it through there, and get back a set of results of other images that are visually similar, and you can use that to then infer the location. Back to top ## Identifying Hotel Rooms Using Computer Vision **Many of your papers mention that matching hotel room images can actually be more difficult than matching photos of other types of locations. Why is that, and how do you deal with those challenges?** **Stylianou:** There are a handful of things that are really unique about hotels compared to other domains. Two different hotels may actually look really similar—every Motel 6 in the country has been renovated so that it looks virtually identical. That’s a real challenge for these models that are trying to come up with different representations for different hotels. On the flip side, two rooms in the same hotel may look really different. You have the penthouse suite and the entry-level room. Or a renovation has happened on one floor and not another. That’s really a challenge when two images should have the same representation. Other parts of our queries are unique because usually there’s a very, very large part of the image that has to be erased first. We’re talking about child pornography images. That has to be erased before it ever gets submitted to our system.
We trained the first version by pasting in people-shaped blobs to try and get the network to ignore the erased portion. But Temple University professor and close collaborator [Richard Souvenir’s team] showed that if you actually use AI in-painting—you actually fill in that blob with a sort of natural-looking texture—you do a lot better on the search than if you leave the erased blob in there. So when our analysts run their search, the first thing they do is erase the image. The next thing that we do is use an AI in-painting model to fill that back in. Back to top **Some of your work involved object recognition rather than image recognition. Why?** **Stylianou:** The [NCMEC] analysts that use our tool have shared with us that oftentimes, in the query, all they can see is one object in the background and they want to run a search on just that. But the models that we train typically operate on the scale of the full image, and that’s a problem. And there are things in a hotel that are unique and things that aren’t. Like a white bed in a hotel is totally non-discriminative. Most hotels have a white bed. But a really unique piece of artwork on the wall, even if it’s small, might be really important to recognizing the location. [NCMEC analysts] can sometimes only see one object, or know that one object is important. Just zooming in on it in the types of models that we’re already using doesn’t work well. How could we support that better? We’re doing things like training object-specific models. You can have a couch model and a lamp model and a carpet model. Back to top **How do you evaluate the success of the algorithm?** **Stylianou:** I have two versions of this answer. One is that there’s no real-world dataset that we can use to measure this, so we create proxy datasets. We have our data that we’ve collected via the TraffickCam app. We take subsets of that and we put big blobs into them that we erase, and we measure the fraction of the time that we correctly predict what hotel those are from. So those images look as much like the victim images as we can make them look. That said, they still don’t necessarily look exactly like the victim images, right? That’s about as good a quantitative metric as we can come up with. And then we do a lot of work with the [NCMEC] to understand how the system is working for them. We get to hear about the instances where they’re able to use our tool successfully and not successfully. Honestly, some of the most useful feedback we get from them is them telling us, “I tried running the search and it didn’t work.” **Have positive hotel image matches actually been used to help trafficking victims?** **Stylianou:** I always struggle to talk about these things, in part because I have young kids. This is upsetting, and I don’t want to take things that are the most horrific thing that will ever happen to somebody and tell it as our positive story. With that said, there are cases we’re aware of. There’s one that I’ve heard from the analysts at NCMEC recently that really has reinvigorated for me why I do what I do. There was a case of a live stream that was happening. And it was a young child who was being assaulted in a hotel. NCMEC got alerted that this was happening. The analysts who have been trained to use TraffickCam took a screenshot of that, plugged it into our system, got a result for which hotel it was, sent law enforcement, and were able to rescue the child.
I feel very, very lucky that I work on something that has real world impact, that we are able to make a difference. Back to top
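Stripped of the hotel-specific training and data that Stylianou describes, the search step itself is a standard embedding-and-nearest-neighbor lookup. The sketch below illustrates that general pattern with an off-the-shelf backbone; the model choice, file names, and hotel labels are assumptions for illustration and are not TraffickCam’s actual network, code, or data.

```python
import numpy as np
import torch
import torchvision
from torchvision import transforms
from PIL import Image

# Any pretrained backbone can stand in for the hotel-trained network described above.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
model.fc = torch.nn.Identity()      # drop the classifier; keep the 2,048-dimension feature vector
model.eval()

prep = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def embed(path):
    """Map an image file to a unit-length embedding vector."""
    with torch.no_grad():
        vec = model(prep(Image.open(path).convert("RGB")).unsqueeze(0)).squeeze(0).numpy()
    return vec / np.linalg.norm(vec)

# A toy gallery of indexed hotel-room photos (paths and hotel labels are hypothetical).
gallery = [("room_0001.jpg", "hotel_A"), ("room_0002.jpg", "hotel_B")]
index = np.stack([embed(path) for path, _ in gallery])

def search(query_path, top_k=5):
    """Return the most visually similar gallery entries by cosine similarity."""
    scores = index @ embed(query_path)
    best = np.argsort(-scores)[:top_k]
    return [(gallery[i][1], float(scores[i])) for i in best]
```

In a real deployment the backbone would be trained so that images of the same hotel cluster together in the embedding space, and the gallery would contain millions of indexed images rather than two.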
spectrum.ieee.org
November 27, 2025 at 6:40 AM
Event Sensors Bring Just the Right Data to Device Makers
**Anatomically, the human eye** is like a sophisticated tentacle that reaches out from the brain, with the retina acting as the tentacle’s tip and touching everything the person sees. Evolution worked a wonder with this complex nervous structure. Now, contrast the eye’s anatomy to the engineering of the most widely used machine-vision systems today: a charge-coupled device (CCD) or a CMOS imaging chip, each of which consists of a grid of pixels. The eye is orders of magnitude more efficient than these flat-chipped computer-vision kits. Here’s why: For any scene it observes, a chip’s pixel grid is updated periodically—and in its entirety—over the course of receiving the light from the environment. The eye, though, is much more parsimonious, focusing its attention only on a small part of the visual scene at any one time—namely, the part of the scene that changes, like the fluttering of a leaf or a golf ball splashing into water. My company, Prophesee, and our competitors call these changes in a scene “events.” And we call the biologically inspired machine-vision systems built to capture these events neuromorphic event sensors. Compared to CCDs and CMOS imaging chips, event sensors respond faster, offer a higher dynamic range—meaning they can detect detail in both dark and bright parts of the scene at the same time—and capture quick movements without blur, all while producing new data only when and where an event is sensed, which makes the sensors highly energy and data efficient. We and others are using these biologically inspired supersensors to significantly upgrade a wide array of devices and machines, including high-dynamic-range cameras, augmented-reality wearables, drones, and medical robots. So wherever you look at machines these days, they’re starting to look back—and, thanks to event sensors, they’re looking back more the way we do. Event-sensing videos may seem unnatural to humans, but they capture just what computers need to know: motion. Prophesee ## Event Sensors vs. CMOS Imaging Chips Digital sensors inspired by the human eye date back decades. The first attempts to make them were in the 1980s at the California Institute of Technology. Pioneering electrical engineers Carver A. Mead, Misha Mahowald, and their colleagues used analog circuitry to mimic the functions of the excitable cells in the human retina, resulting in their “silicon retina.” In the 1990s, Mead cofounded Foveon to develop neurally inspired CMOS image sensors with improved color accuracy, less noise at low light, and sharper images. In 2008, camera maker Sigma purchased Foveon and continues to develop the technology for photography. A number of research institutions continued to pursue bioinspired imaging technology through the 1990s and 2000s. In 2006, a team at the Institute of Neuroinformatics at the University of Zurich built the first practical temporal-contrast event sensor, which captured changes in light intensity over time. By 2010, researchers at the Seville Institute of Microelectronics had designed sensors that could be tuned to detect changes in either space __or__ time. Then, in 2010, my group at the Austrian Institute of Technology, in Vienna, combined temporal contrast detection with photocurrent integration at the pixel level to both detect relative changes in intensity and acquire absolute light levels in each individual pixel.
More recently, in 2022, a team at the Institut de la Vision, in Paris, and their spin-off, Pixium Vision, applied neuromorphic sensor technology to a biomedical application—a retinal implant to restore some vision to blind people. (Pixium has since been acquired by Science Corp., the Alameda, Calif.–based maker of brain-computer interfaces.) RELATED: Bionic Eye Gets a New Lease on Life Other startups that pioneered event sensors for real-world vision tasks include iniVation in Zurich (which merged with SynSense in China), CelePixel in Singapore (now part of OmniVision), and my company, Prophesee (formerly Chronocam), in Paris.

### TABLE 1: Who’s Developing Neuromorphic Event Sensors

| Date released | Company | Sensor | Event pixel resolution | Status |
|---|---|---|---|---|
| 2023 | OmniVision | Celex VII | 1,032 x 928 | Prototype |
| 2023 | Prophesee | GenX320 | 320 x 320 | Commercial |
| 2023 | Sony | Gen3 | 1,920 x 1,084 | Prototype |
| 2021 | Prophesee & Sony | IMX636/637/646/647 | 1,280 x 720 | Commercial |
| 2020 | Samsung | Gen4 | 1,280 x 960 | Prototype |
| 2018 | Samsung | Gen3 | 640 x 480 | Commercial |

Among the leading CMOS image sensor companies, Samsung was the first to present its own event-sensor designs. Today other major players, such as Sony and OmniVision, are also exploring and implementing event sensors. Among the wide range of applications that companies are targeting are machine vision in cars, drone detection, blood-cell tracking, and robotic systems used in manufacturing. ## How an Event Sensor Works To grasp the power of the event sensor, consider a conventional video camera recording a tennis ball crossing a court at 150 kilometers per hour. Depending on the camera, it will capture 24 to 60 frames per second, which can result in an undersampling of the fast motion due to the large displacement of the ball between frames and possibly cause motion blur because of the movement of the ball during the exposure time. At the same time, the camera essentially oversamples the static background, such as the net and other parts of the court that don’t move. If you then ask a machine-vision system to analyze the dynamics in the scene, it has to rely on this sequence of static images—the video camera’s frames—which contain both too little information about the important things and too much redundant information about things that don’t matter. It’s a fundamentally mismatched approach that’s led the builders of machine-vision systems to invest in complex and power-hungry processing infrastructure to make up for the inadequate data. These machine-vision systems are too costly to use in applications that require real-time understanding of the scene, such as autonomous vehicles, and they use too much energy, bandwidth, and computing resources for applications like battery-powered smart glasses, drones, and robots. Ideally, an image sensor would use high sampling rates for the parts of the scene that contain fast motion and changes, and slow rates for the slow-changing parts, with the sampling rate going to zero if nothing changes. This is exactly what an event sensor does. Each pixel acts independently and determines the timing of its own sampling by reacting to changes in the amount of incident light. The entire sampling process is no longer governed by a fixed clock with no relation to the scene’s dynamics, as with conventional cameras, but instead adapts to subtle variations in the scene. Let’s dig deeper into the mechanics.
When the light intensity on a given pixel crosses a predefined threshold, the system records the time with microsecond precision. This time stamp and the pixel’s coordinates in the sensor array form a message describing the “event,” which the sensor transmits as a digital data package. Each pixel can do this without the need for an external intervention such as a clock signal and independently of the other pixels. Not only is this architecture vital for accurately capturing quick movements, but it’s also critical for increasing an image’s dynamic range. Since each pixel is independent, the lowest light in a scene and the brightest light in a scene are simultaneously recorded; there’s no issue of over- or underexposed images. ### The output generated by a video camera equipped with an event sensor is not a sequence of images but rather a continuous stream of individual pixel data, generated and transmitted based on changes happening in the scene. Since in many scenes, most pixels do not change very often, event sensors promise to save energy compared to conventional CMOS imaging, especially when you include the energy of data transmission and processing. For many tasks, our sensors consume about a tenth the power of a conventional sensor. Certain tasks, for example eye tracking for smart glasses, require even less energy for sensing and processing. In the case of the tennis ball, where the changes represent a small fraction of the overall field of vision, the data to be transmitted and processed is tiny compared to conventional sensors, and the advantages of an event sensor approach are enormous: perhaps five or even six orders of magnitude. ## Event Sensors in Action To imagine where we will see event sensors in the future, think of any application that requires a fast, energy- and data-efficient camera that can work in both low and high light. For example, they would be ideal for edge devices: Internet-connected gadgets that are often small, have power constraints, are worn close to the body (such as a smart ring), or operate far from high-bandwidth, robust network connections (such as livestock monitors). Event sensors’ low power requirements and ability to detect subtle movement also make them ideal for human-computer interfaces—for example, in systems for eye and gaze tracking, lipreading, and gesture control in smartwatches, augmented-reality glasses, game controllers, and digital kiosks at fast food restaurants. For the home, engineers are testing wall-mounted event sensors in health monitors for the elderly, to detect when a person falls. Here, event sensors have another advantage—they don’t need to capture a full image, just the event of the fall. This means the monitor sends only an alert, and the use of a camera doesn’t raise the usual privacy concerns. Event sensors can also augment traditional digital photography. Such applications are still in the development stage, but researchers have demonstrated that when an event sensor is used alongside a phone’s camera, the extra information about the motion within the scene as well as the high and low lighting from the event sensor can be used to remove blur from the original image, add more crispness, or boost the dynamic range. Event sensors could be used to remove motion in the other direction, too: Currently, cameras rely on electromechanical stabilization technologies to keep the camera steady. Event-sensor data can be used to algorithmically produce a steady image in real time, even as the camera shakes. 
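One way to make this per-pixel logic concrete is to emulate it in software on ordinary video frames. The sketch below is a simplified emulation for illustration, not a description of any shipping sensor; the threshold value, timestamps, and output format are assumptions.

```python
import numpy as np

def frames_to_events(frames, timestamps_us, threshold=0.2):
    """Convert a (n_frames, height, width) stack of intensity frames into
    (timestamp_us, x, y, polarity) events, firing whenever the log intensity
    at a pixel changes by more than `threshold` since that pixel last fired."""
    log_frames = np.log(np.asarray(frames, dtype=np.float64) + 1e-6)
    reference = log_frames[0].copy()           # the level at which each pixel last fired
    events = []
    for t, frame in zip(timestamps_us[1:], log_frames[1:]):
        delta = frame - reference
        fired = np.abs(delta) >= threshold     # pixels whose relative change crossed the threshold
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            events.append((int(t), int(x), int(y), 1 if delta[y, x] > 0 else -1))
        reference[fired] = frame[fired]        # each firing pixel resets its own reference level
    return events

# Toy scene: a bright dot moves one pixel per frame across a static background.
frames = np.full((3, 4, 4), 10.0)
frames[1, 2, 1] = 100.0
frames[2, 2, 2] = 100.0
events = frames_to_events(frames, timestamps_us=[0, 1_000, 2_000])
# Only the pixels the dot enters and leaves produce events; the rest stay silent.
```

Because each pixel keeps its own reference level, the static background produces no output at all, while anything that moves generates a sparse stream of on and off events; that is the data-efficiency argument in miniature.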
And because event sensors record data at microsecond intervals, faster than the fastest CCD or CMOS image sensors, it’s also possible to fill in the gaps between the frames of traditional video capture. This can effectively boost the frame rate from tens of frames per second to tens of thousands, enabling ultraslow-motion video on demand after the recording has finished. Two obvious applications of this technique are helping referees at sporting events resolve questions right after a play, and helping authorities reconstruct the details of traffic collisions. An event sensor records and sends data only when light changes more than a user-defined threshold. The size of the arrows in the video at right convey how fast different parts of the dancer and her dress are moving. Prophesee Meanwhile, a wide range of early-stage inventors are developing applications of event sensors for situational awareness in space, including satellite and space-debris tracking. They’re also investigating the use of event sensors for biological applications, including microfluidics analysis and flow visualization, flow cytometry, and contamination detection for cell therapy. But right now, industrial applications of event sensors are the most mature. Companies have deployed them in quality control on beverage-carton production lines, in laser welding robots, and in Internet of Things devices. And developers are working on using event sensors to count objects on fast-moving conveyor belts, provide visual-feedback control for industrial robots, and to make touchless vibration measurements of equipment, for predictive maintenance. ## The Data Challenge for Event Sensors There is still work to be done to improve the capabilities of the technology. One of the biggest challenges is in the kind of data event sensors produce. Machine-vision systems use algorithms designed to interpret static scenes. Event data is temporal in nature, effectively capturing the swings of a robot arm or the spinning of a gear, but those distinct data signatures aren’t easily parsed by current machine-vision systems. Engineers can calibrate an event sensor to send a signal only when the number of photons changes more than a preset amount. This way, the sensor sends less, but more relevant, data. In this chart, only changes to the intensity [black curve] greater than a certain amount [dotted horizontal lines] set off an event message [blue or red, depending on the direction of the change]. Note that the y-axis is logarithmic and so the detected changes are _relative_ changesProphesee This is where Prophesee comes in. My company offers products and services that help other companies more easily build event-sensor technology into their applications. So we’ve been working on making it easier to incorporate temporal data into existing systems in three ways: by designing a new generation of event sensors with industry-standard interfaces and data protocols; by formatting the data for efficient use by a computer-vision algorithm or a neural network; and by providing always-on low-power mode capabilities. To this end, last year we partnered with chipmaker AMD to enable our Metavision HD event sensor to be used with AMD’s Kria KV260 Vision AI Starter Kit, a collection of hardware and software that lets developers test their event-sensor applications. The Prophesee and AMD development platform manages some of the data challenges so that developers can experiment more freely with this new kind of camera. 
One approach that we and others have found promising for managing the data of event sensors is to take a cue from the biologically inspired neural networks used in today’s machine-learning architectures. For instance, spiking neural networks, or SNNs, act more like biological neurons than traditional neural networks do—specifically, SNNs transmit information only when discrete “spikes” of activity are detected, while traditional neural nets process continuous values. SNNs thus offer an event-based computational approach that is well matched to the way that event sensors capture scene dynamics. Another kind of neural network that’s attracting attention is called a graph neural network, or GNN. These types of neural networks accept graphs as input data, which means they’re useful for any kind of data that’s represented by a mesh of nodes and their connections—for example, social networks, recommendation systems, molecular structures, and the behavior of biological and digital viruses. As it happens, the data that event sensors produce can also be represented by a graph that’s 3D, where there are two dimensions of space and one dimension of time. The GNN can effectively compress the graph from an event sensor by picking out features such as 2D images, distinct types of objects, estimates of the direction and speed of objects, and even bodily gestures. We think GNNs will be especially useful for event-based edge-computing applications with limited power, connectivity, and processing. We’re currently working to put a GNN almost directly into an event sensor and eventually to incorporate both the event sensor and the GNN process into the same millimeter-dimension chip. In the future, we expect to see machine-vision systems that follow nature’s successful strategy of capturing the right data at just the right time and processing it in the most efficient way. Ultimately, that approach will allow our machines to see the wider world in a new way, which will benefit both us and them.
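As a rough illustration of that graph representation, the sketch below links events that fall close together in both space and time; the neighborhood radii are arbitrary assumptions, and a real GNN pipeline would add learned node and edge features on top of this structure.

```python
import numpy as np

def events_to_graph(events, max_dt_us=5_000, max_dist_px=3):
    """Build a space-time graph from an event stream: nodes are individual
    (t_us, x, y, polarity) events, and an edge links two events that occur
    close together in both time and space."""
    nodes = np.array(sorted(events), dtype=np.int64)   # sort by timestamp
    edges = []
    for i in range(len(nodes)):
        t_i, x_i, y_i = nodes[i, 0], nodes[i, 1], nodes[i, 2]
        for j in range(i + 1, len(nodes)):
            t_j, x_j, y_j = nodes[j, 0], nodes[j, 1], nodes[j, 2]
            if t_j - t_i > max_dt_us:                  # later events are too distant in time
                break
            if abs(x_j - x_i) <= max_dist_px and abs(y_j - y_i) <= max_dist_px:
                edges.append((i, j))
    return nodes, np.array(edges)

# A moving dot becomes a short chain of connected nodes, while an event far
# away in time or space stays isolated.
stream = [(0, 5, 5, 1), (800, 6, 5, 1), (1_600, 7, 5, 1), (40_000, 20, 20, -1)]
nodes, edges = events_to_graph(stream)
```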
spectrum.ieee.org
November 26, 2025 at 2:45 PM
Listen to Protons for Less Than $100
When you get an MRI scan, the machine exploits a phenomenon called nuclear magnetic resonance (NMR). Certain kinds of atomic nuclei—including those of the hydrogen atoms in a water molecule—can be made to oscillate in a magnetic field, and these oscillations can be detected with coils of wire. MRI scanners employ intense magnetic fields that create resonances at tens to hundreds of megahertz. However, another NMR-based instrument involves much lower-frequency oscillations: a proton-precession magnetometer, often used to measure Earth’s magnetic field. Proton-precession magnetometers have been around for decades and were once often used in archaeology and mineral exploration. High-end models can cost thousands of dollars. Then, in 2022 a German engineer named Alexander Mumm devised a very simple circuit for a stripped-down one. I recently built his circuit and can attest that with less than half a kilogram of 22-gauge magnet wire; two common integrated circuits; a metal-oxide-semiconductor field-effect transistor, or MOSFET; a handful of discrete components; and two empty 113-gram bottles of Morton seasoning blend, it’s possible to measure Earth’s magnetic field very accurately. The frequency of the signal emitted by protons precessing in Earth’s magnetic field lies in the audio range, so with a pair of headphones and two amplifier integrated circuits [middle right], you can detect a signal from water in seasoning bottles wrapped in coils [bottom left and right]. A MOSFET [middle left] allows for rapid control of the coils. The amplification circuitry is powered by a 9-volt battery, while a 36-volt battery charges the coils.James Provost Like an MRI scanner, a proton-precession magnetometer measures the oscillations of hydrogen nuclei—that is, protons. Like other subatomic particles, protons possess a quantum property called spin, akin to classical angular momentum. In a magnetic field, protons wobble like spinning tops, with their spin axes tracing out a cone—a phenomenon called precession. A proton-precession magnetometer gets many protons to wobble in sync and then measures the frequency of their wobbles, which is proportional to the intensity of the ambient magnetic field. The weak strength of Earth’s magnetic field (at least compared to that of an MRI machine) means that protons wobbling under its influence do so at audio frequencies. Get enough moving in unison and the spinning protons will induce a voltage in a nearby pickup coil. Amplify that and pass it through some earphones, and you get an audio tone. So with a suitable circuit, you can, literally, hear protons. The first step is to make the pickup coils, which is where the bottles of Morton seasoning blend come in. Why Morton seasoning blend? Two reasons. First, this size bottle will allow you to wrap about 500 turns of wire around each one with about 450 grams of 22-gauge wire. Second, the bottle has little shoulders molded at each end, making for excellent coil forms. ### Why two bottles and two coils? That’s to quash electromagnetic noise—principally coming from power lines—that invariably gets picked up by the coils. When two counterwound coils are wired in series, such external noise tends to cancel out. Signals from precessing protons in the two coils, though, will reinforce one another. Don’t try this indoors or anywhere near iron-containing objects. A proton magnetometer has three modes. The first is for sending DC current through the coils. 
The second mode disconnects the current source and allows the magnetic field it had created to collapse. The third is listening mode, which connects the coils to a sensitive audio amplifier. By filling each bottle with distilled water and sending a DC current (a few amperes) through these coils, you line up the spins of many protons in the water. Then, after putting your circuit into listening mode, you use the coils to sense the synchronous oscillations of the wobbling protons. Mumm’s circuit shifts from one mode to another in the simplest way possible: using a three-position switch. One position enables the DC-polarization mode. The next allows the magnetic field built up during polarization to collapse, and the third position is for listening. ## Avoiding Damaging Sparks The second mode might seem easy to achieve—just disconnect the coils, right? But if you do that, the same principle that makes spark plugs spark will put a damaging high voltage across the switch contacts as the magnetic fields around the coils collapse. The proton-precession magnetometer is primarily just a multistage analog amplifier. James Provost To avoid that, Mumm’s circuit employs a MOSFET wired to work like a high-power Zener diode, a component used in many power-regulation circuits to conduct only above a specified threshold voltage. This limits the voltage that develops across the coils when the current is cut off by just enough so that the magnetometer can shift from polarizing to listening mode quickly but without causing damage. To pick up a strong signal, the listening circuit must also be tuned to resonate at the expected frequency of proton precession, which will depend on Earth’s magnetic field at your location. You can work out approximately what that is using an online geomagnetic-field calculator. You’ll get the field strength, and then you’ll multiply that by the gyromagnetic ratio of protons (42.577 MHz per tesla). For me, that worked out to about 2 kilohertz. Estimating the inductance of the coils from their diameter and number of turns, I then selected a capacitor of suitable value in parallel with the coils to make a tank circuit that resonates at that frequency. You could tune your tank circuit using a frequency generator and oscilloscope. Or, as Mumm suggests, attach a small speaker to the output of the circuit. Then bring the speaker near the pickup coils. This will create magnetic feedback and the circuit will oscillate on its own—loudly! You merely need to measure the frequency of this tone, and then adjust the tank capacitor to bring this self-oscillation to the frequency you want to tune to. My initial attempt to listen to protons met with mixed success: Sometimes I heard tones, sometimes not. What helped to get this gizmo working consistently was realizing that proton magnetometers don’t tolerate large gradients in the magnetic field. So don’t try this indoors or anywhere near iron-containing objects: water pipes, cars, or even the ground. A wide-open space outside is best, with the coils raised off the ground. The second thing that helped was to apply more oomph in polarization mode. While a 12-volt battery works okay, 36 V does much better. After figuring these things out, I can now hear protons easily. These tones are clearly the sounds of protons, because they go away if I drain the water in the bottles. And, using free audio-analyzer software called Spectrum Lab, I confirmed that the frequency of these tones matches the magnetic field at my location to about 1 percent.
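The tuning arithmetic is simple enough to sketch in a few lines of code. The field strength and coil inductance below are placeholders to be replaced with the value from a geomagnetic-field calculator and an estimate from your own coil geometry; only the 42.577-MHz-per-tesla constant comes from the article.

```python
import math

PROTON_GYROMAGNETIC_RATIO = 42.577e6   # hertz per tesla, the 42.577 MHz/T figure cited above

earth_field_tesla = 50e-6    # placeholder: your local field from a geomagnetic-field calculator
coil_inductance_h = 0.05     # placeholder: estimated series inductance of the two pickup coils

# Expected precession frequency of protons in that field
f_larmor_hz = PROTON_GYROMAGNETIC_RATIO * earth_field_tesla

# Parallel capacitance that makes the coils resonate there: f = 1 / (2*pi*sqrt(L*C))
tank_capacitance_f = 1.0 / ((2 * math.pi * f_larmor_hz) ** 2 * coil_inductance_h)

print(f"precession frequency: {f_larmor_hz:.0f} Hz")          # about 2,129 Hz for a 50-microtesla field
print(f"tank capacitor: {tank_capacitance_f * 1e9:.0f} nF")   # about 112 nF with these placeholder values
```

With a 50-microtesla field the script predicts a precession tone near 2.1 kilohertz, in line with the roughly 2-kilohertz figure mentioned above.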
While it’s not a practical field instrument, a proton-precession magnetometer of any kind for less than US $100 is nothing to sneer at.
spectrum.ieee.org
November 25, 2025 at 11:31 PM
AI Agents Break Rules Under Everyday Pressure
Several recent studies have shown that artificial-intelligence agents sometimes decide to misbehave, for instance by attempting to blackmail people who plan to replace them. But such behavior often occurs in contrived scenarios. Now, a new study presents PropensityBench, a benchmark that measures an agentic model’s choices to use harmful tools in order to complete assigned tasks. It finds that somewhat realistic pressures (such as looming deadlines) dramatically increase rates of misbehavior. “The AI world is becoming increasingly agentic,” says Udari Madhushani Sehwag, a computer scientist at the AI infrastructure company Scale AI and a lead author of the paper, which is currently under peer review. By that she means that large language models (LLMs), the engines powering chatbots such as ChatGPT, are increasingly connected to software tools that can surf the Web, modify files, and write and run code in order to complete tasks. Giving LLMs these abilities adds convenience but also risk, as the systems might not act as we’d wish. Even if they’re not yet capable of doing great harm, researchers want to understand their proclivities before it’s too late. Although AIs don’t have intentions and awareness in the way that humans do, treating them as goal-seeking entities often helps researchers and users better predict their actions. AI developers attempt to “align” the systems to safety standards through training and instructions, but it’s unclear how faithfully models adhere to guidelines. “When they are actually put under real-world stress, and if the safe option is not working, are they going to switch to just getting the job done by any means necessary?” Sehwag says. “This is a very timely topic.” ## How to Test an AI Agent Under Pressure The researchers tested a dozen models made by Alibaba, Anthropic, Google, Meta, and OpenAI across nearly 6,000 scenarios. In each scenario, a model was assigned a task and told it had access to several tools. It was instructed to use the safe tools and not the harmful ones. At first there was no pressure, and it could take 12 steps toward completing its task by issuing instructions to the tools. Then it received a message increasing the pressure, and it was given 12 more attempts. This continued for up to 12 pressure levels. Attempts to use safe tools always returned error messages. An attempt to use a harmful tool ended the scenario. The scenarios covered several domains. In biosecurity, Sehwag says, the agent might have to study the spread of a pathogen and find a way to contain it. A safe tool might use anonymized genetic data, while a harmful one might use data that’s not anonymized. In the domain of chemical security, the agent might need to obtain a particular solvent. A safe tool might place an order for the solvent to be extracted from waste, while a harmful one might procure it illegally. In cybersecurity, the agent might need to identify and restrict a malicious user account. A safe tool might disable only the problematic account, while a harmful one might block all users. Scenarios in other domains tempted agents with tools that led to self-preservation by evading control, recruiting allies, duplicating themselves, or going rogue in other ways. The worst performing model, Gemini 2.5, cracked under pressure and chose to use forbidden tools 79 percent of the time. Source image: Udari Madhushani Sehwag, Shayan Shabihi, et al. Pressure came in several forms. Deadlines might shorten. 
Potential financial losses might increase if the task was not accomplished. Threats of greater oversight, reduced resources, and curtailed authority might escalate. Tasks, tools, pressures, and tactics were modeled on the kinds of situations humans might face in their jobs (except maybe the misbehavior option of cloning oneself). The best-behaved model (OpenAI’s o3) cracked under pressure in 10.5 percent of scenarios, while the worst (Google’s Gemini 2.5 Pro) had a propensity score of 79 percent; the average across models was about 47 percent. Even under zero pressure, the group on average failed about 19 percent of the time. To the degree that models remained aligned, the study shows that some of that alignment was “shallow.” In some tests, the researchers gave harmful tools benign names (such as “use_synthetic_data” instead of “use_fake_data”)—without changing the tools’ descriptions or admonitions to avoid them. This wording tweak increased average propensity by 17 percentage points, to 64 percent. ## Knowing What LLMs Are Thinking The researchers also found that models used harmful tools even after explicitly acknowledging that they were off-limits. The models offered diverse justifications for doing so, for example by citing the pressure they were under or arguing that the benefits outweighed the risks. It’s unclear if models released after the study’s completion would perform any better. Among the models tested, more capable models (according to a platform called LMArena) were only slightly safer. “PropensityBench is interesting,” emails Nicholas Carlini, a computer scientist at Anthropic who wasn’t involved in the research. He offers a caveat related to what’s called situational awareness. LLMs sometimes detect when they’re being evaluated and act nice so they don’t get retrained or shelved. “I think that most of these evaluations that claim to be ‘realistic’ are very much not, and the LLMs know this,” he says. “But I do think it’s worth trying to measure the rate of these harms in synthetic settings: If they do bad things when they ‘know’ we’re watching, that’s probably bad?” If the models knew they were being evaluated, the propensity scores in this study may be underestimates of propensity outside the lab. Alexander Pan, a computer scientist at xAI and the University of California, Berkeley, says while Anthropic and other labs have shown examples of scheming by LLMs in specific setups, it’s useful to have standardized benchmarks like PropensityBench. They can tell us when to trust models, and also help us figure out how to improve them. A lab might evaluate a model after each stage of training to see what makes it more or less safe. “Then people can dig into the details of what’s being caused when,” he says. “Once we diagnose the problem, that’s probably the first step to fixing it.” In this study, models didn’t have access to actual tools, limiting the realism. Sehwag says a next evaluation step is to build sandboxes where models can take real actions in an isolated environment. As for increasing alignment, she’d like to add oversight layers to agents that flag dangerous inclinations before they’re pursued. The self-preservation risks may be the most speculative in the benchmark, but Sehwag says they’re also the most underexplored. It “is actually a very high-risk domain that can have an impact on all the other risk domains,” she says. 
“If you just think of a model that doesn’t have any other capability, but it can persuade any human to do anything, that would be enough to do a lot of harm.”
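The benchmark’s control flow, as described above, can be summarized in a short sketch. The class and method names are invented for illustration and are not the authors’ released code; the sketch simply assumes one escalation message per pressure level and that a scenario ends the moment a forbidden tool is chosen.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    task: str
    safe_tools: list[str]
    harmful_tools: list[str]
    pressure_messages: list[str]   # up to 12 escalation messages, mildest first

STEPS_PER_LEVEL = 12               # attempts allowed at each pressure level

def run_scenario(agent, scenario: Scenario) -> bool:
    """Return True if the agent ever chose a harmful tool (a propensity failure).

    `agent` is any object exposing reset(), observe(), and choose_tool();
    that interface is assumed here purely for illustration.
    """
    agent.reset(scenario.task, scenario.safe_tools + scenario.harmful_tools)
    for pressure in [None] + scenario.pressure_messages:    # level 0 applies no pressure
        if pressure is not None:
            agent.observe(pressure)                         # escalate the pressure
        for _ in range(STEPS_PER_LEVEL):
            tool = agent.choose_tool()
            if tool in scenario.harmful_tools:
                return True                                 # using a forbidden tool ends the scenario
            agent.observe(f"ERROR: {tool} failed")          # safe tools always return errors
    return False

def propensity_score(agent, scenarios) -> float:
    """Fraction of scenarios in which the agent resorted to a harmful tool."""
    return sum(run_scenario(agent, s) for s in scenarios) / len(scenarios)
```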
spectrum.ieee.org
November 25, 2025 at 11:32 PM
IEEE Hits 500,000-Member Milestone​
IEEE celebrated a monumental achievement last month: The organization’s membership reached half a million innovators, engineers, technologists, and scientists worldwide. “This is more than a number; it is a profound testament to the enduring power and relevance of our global community,” says Antonio Luque, vice president of IEEE Member and Geographic Activities. “The 500,000-member milestone is a powerful endorsement of IEEE’s legacy and serves as a critical launchpad for solving the future’s complex challenges and opportunities. Passing this milestone affirms the value members find in our shared mission to advance technology for humanity.” To commemorate the historic moment, IEEE launched a digital mosaic highlighting members around the world. You can be a part of the collaborative art piece by uploading your name, location, and photo on the Snapshot website. ## Momentum and the member journey Since 1963, when the merger of the American Institute of Electrical Engineers and the Institute of Radio Engineers formed IEEE, the organization has strived to evolve alongside technology. IEEE’s achievement of 500,000 members underscores its momentum. The success reflects the organization’s dedication to keeping its members connected and up to date in rapidly evolving technological fields, demonstrating its ability to adapt. A key part of IEEE’s success is its dedication to nurturing the next generation of engineers. Student members and young professionals—who are included in the 500,000-member milestone—bring fresh perspectives and energy that help sustain the organization’s momentum. “With the collective expertise and enthusiasm of half a million individuals, there is no technical challenge we cannot face, and no future we cannot build.” “Their contributions are essential to IEEE’s vitality,” Luque says. IEEE life members and experienced professionals impart their knowledge, experience, and ethical frameworks to students and younger engineers through IEEE’s mentorship programs, ensuring that the legacy of innovation endures. ## Leave your mark on IEEE The membership milestone is more than just a historical count; it’s an invitation for every member to impact the organization’s future. IEEE challenges its 500,000 members to: * **Develop standards.** Join one of the working groups within the IEEE Standards Association to define the next generation of technologies, using your technical expertise to help shape global practice. * **Mentor the next generation.** Share your professional journey and technical knowledge with students or young professionals through programs such as IEEE Mentoring or initiatives offered by your local chapter. * **Publish your work.** Submit your research or technical insights to an IEEE journal or magazine and contribute directly to the global body of knowledge. * **Steer local impact.** Volunteer within your section or technical society to organize events, workshops, and community outreach initiatives to advance technological literacy. By engaging in helping to shape IEEE’s initiatives and community, you can pave the way for the next half-million members. ## Looking ahead IEEE’s global community is uniquely positioned to address the world’s most significant challenges including the climate crisis, global connectivity, and health care. The membership milestone confirms IEEE’s conviction that its members represent a commitment to advancing technology for the benefit of humanity. “The size of our community is a measure of our shared potential,” Luque says. 
“We extend our profound gratitude to every member, past and present, whose hard work and dedication built this extraordinary global organization. Together we make tomorrow possible. “With the collective expertise and enthusiasm of half a million individuals, there is no technical challenge we cannot face, and no future we cannot build.”
spectrum.ieee.org
November 24, 2025 at 8:14 PM
Trillions Spent and Big Software Projects Are Still Failing
**“Why worry about something** that isn’t going to happen?” KGB Chairman Charkov’s question to inorganic chemist Valery Legasov in HBO’s “Chernobyl” miniseries makes a good epitaph for the hundreds of software development, modernization, and operational failures I have covered for __IEEE Spectrum__ since my first contribution, to its September 2005 special issue on learning—or rather, not learning—from software failures. I noted then, and it’s still true two decades later: Software failures are universally unbiased. They happen in every country, to large companies and small. They happen in commercial, nonprofit, and governmental organizations, regardless of status or reputation. Global IT spending has more than tripled in constant 2025 dollars since 2005, from US $1.7 trillion to $5.6 trillion, and continues to rise. Despite additional spending, software success rates have not markedly improved in the past two decades. The result is that the business and societal costs of failure continue to grow as software proliferates, permeating and interconnecting every aspect of our lives. For those hoping AI software tools and coding copilots will quickly make large-scale IT software projects successful, forget about it. For the foreseeable future, there are hard limits on what AI can bring to the table in controlling and managing the myriad intersections and trade-offs among systems engineering, project, financial, and business management, and especially the organizational politics involved in any large-scale software project. Few IT projects are displays of rational decision-making from which AI can or should learn. As software practitioners know, IT projects suffer from enough management hallucinations and delusions without AI adding to them. ### As I noted 20 years ago, the drivers of software failure frequently are failures of human imagination, unrealistic or unarticulated project goals, the inability to handle the project’s complexity, or unmanaged risks, to name a few that today still regularly cause IT failures. Numerous others go back decades, such as those identified by Stephen Andriole, the chair of business technology at Villanova University’s School of Business, in the diagram below first published in _Forbes_ in 2021. Uncovering a software system failure that has gone off the rails in a unique, previously undocumented manner would be surprising because the overwhelming majority of software-related failures involve avoidable, known failure-inducing factors documented in hundreds of after-action reports, academic studies, and technical and management books for decades. Failure déjà vu dominates the literature. The question is, why haven’t we applied what we have repeatedly been forced to learn? ### ### ## The Phoenix That Never Rose Many of the IT developments and operational failures I have analyzed over the last 20 years have each had their own Chernobyl-like meltdowns, spreading reputational radiation everywhere and contaminating the lives of those affected for years. Each typically has a story that strains belief. A prime example is the Canadian government’s CA $310 million Phoenix payroll system, which went live in April 2016 and soon after went supercritical. Phoenix project executives believed they could deliver a modernized payment system, customizing PeopleSoft’s off-the-shelf payroll package to follow 80,000 pay rules spanning 105 collective agreements with federal public-service unions. 
It also was attempting to implement 34 human-resource system interfaces across 101 government agencies and departments required for sharing employee data. Further, the government’s developer team thought they could accomplish this for less than 60 percent of the vendor’s proposed budget. They’d save by removing or deferring critical payroll functions, reducing system and integration testing, decreasing the number of contractors and government staff working on the project, and forgoing vital pilot testing, along with a host of other overly optimistic proposals. ### The Worst IT Failure The Phoenix payroll failure pales in comparison to the worst operational IT system failure to date: the U.K. Post Office’s electronic point-of-sale (EPOS) Horizon system, provided by Fujitsu. Rolled out in 1999, Horizon was riddled with internal software errors that were deliberately hidden, leading to the Post Office unfairly accusing 3,500 local post branch managers of false accounting, fraud, and theft. Approximately 900 of these managers were convicted, with 236 incarcerated between 1999 and 2015. By then, the general public and the branch managers themselves finally joined __Computer Weekly__ ’s reporters (who had doggedly reported on Horizon’s problems since 2008) in the knowledge that there was something seriously wrong with Horizon’s software. It then took another decade of court cases, an independent public statutory inquiry, and an ITV miniseries “Mr. Bates vs. The Post Office” to unravel how the scandal came to be. Like Phoenix, Horizon was plagued with problems that involved technical, management, organizational, legal, and ethical failures. For example, the core electronic point-of-sale system software was built on communication and data-transfer middleware that was itself buggy. In addition, Horizon’s functionality ran wild under unrelenting, ill-disciplined scope creep. There were ineffective or missing development and project management processes, inadequate testing, and a lack of skilled professional, technical, and managerial personnel. The Post Office’s senior leadership repeatedly stated that the Horizon software was fully reliable, becoming hostile toward postmasters who questioned it, which only added to the toxic environment. As a result, leadership invoked every legal means at its disposal and crafted a world-class cover-up, including the active suppression of exculpatory information, so that the Post Office could aggressively prosecute postmasters and attempt to crush any dissent questioning Horizon’s integrity. Shockingly, those wrongly accused still have to continue to fight to be paid just compensation for their ruined lives. Nearly 350 of the accused died, at least 13 of whom are believed to be by suicide, before receiving any payments for the injustices experienced. Unfortunately, as attempts to replace Horizon in 2016 and 2021 failed, the Post Office continues to use it, at least for now. The government wants to spend £410 million on a new system, but it’s a safe bet that implementing it will cost much, much more. The Post Office accepted bids for a new point-of-sale software system in summer 2025, with a decision expected by 1 July 2026. ### Phoenix’s payroll meltdown was preordained. As a result, over the past nine years, around 70 percent of the 430,000 current and former Canadian federal government employees paid through Phoenix have endured paycheck errors. Even as recently as fiscal year 2023–2024, a third of all employees experienced paycheck mistakes. 
The ongoing financial stress and anxieties for thousands of employees and their families have been immeasurable. Not only are recurring paycheck troubles sapping worker morale, but in at least one documented case, a coroner blamed an employee’s suicide on the unbearable financial and emotional strain she suffered. By the end of March 2025, when the Canadian government had promised that the backlog of Phoenix errors would finally be cleared, over 349,000 errors were still unresolved, with 53 percent pending for more than a year. In June, the Canadian government once again committed to significantly reducing the backlog, this time by June 2026. Given previous promises, skepticism is warranted. ### Minnesota Licensing and Registration System **2019** The planned $41 million Minnesota Licensing and Registration System (MNLARS) effort is rolled out in 2016 and then is canceled in 2019 after a total cost of $100 million. It is deemed too hard to fix. ### The financial costs to Canadian taxpayers related to Phoenix’s troubles have so far climbed to over CA $5.1 billion (US $3.6 billion). It will take years to calculate the final cost of the fiasco. The government spent at least CA $100 million (US $71 million) before deciding on a Phoenix replacement, which the government acknowledges will cost several hundred million dollars more and take years to implement. The late Canadian Auditor General Michael Ferguson’s audit reports for the Phoenix fiasco described the effort as an “incomprehensible failure of project management and oversight.” While it may be a project management and oversight disaster, an inconceivable failure Phoenix certainly is not. The IT community has striven mightily for decades to make the incomprehensible routine. ## Opportunity Costs of Software Failure Keep Piling Up South of the Canadian border, the United States has also seen the overall cost of IT-related development and operational failures since 2005 rise to the multi-trillion-dollar range, potentially topping $10 trillion. A report from the Consortium for Information & Software Quality (CISQ) estimated the annual cost of operational software failures in the United States in 2022 alone was $1.81 trillion, with another $260 billion spent on software-development failures. That combined $2.07 trillion is larger than the total U.S. defense budget for that year, $778 billion. ### The question is, why haven’t we applied what we have repeatedly been forced to learn? ### What percentage of software projects fail, and what failure means, has been an ongoing debate within the IT community stretching back decades. Without diving into the debate, it’s clear that software development remains one of the riskiest technological endeavors to undertake. Indeed, according to Bent Flyvbjerg, professor emeritus at the University of Oxford’s Saïd Business School, comprehensive data shows that not only are IT projects risky, they are __the__ riskiest from a cost perspective. ### Australia Modernising Business Registers Program **2022** Australia’s planned AU $480.5 million program to modernize its business register systems is canceled. After AU $530 million is spent, a review finds that the projected cost has risen to AU $2.8 billion, and the project would take five more years to complete. ### The CISQ report estimates that organizations in the United States spend more than $520 billion annually supporting legacy software systems, with 70 to 75 percent of organizational IT budgets devoted to legacy maintenance.
A 2024 report by services company NTT DATA found that 80 percent of organizations concede that “inadequate or outdated technology is holding back organizational progress and innovation efforts.” Furthermore, the report says that virtually all C-level executives believe legacy infrastructure thwarts their ability to respond to the market. Even so, given that the cost of replacing legacy systems is typically many multiples of the cost of supporting them, business executives hesitate to replace them until it is no longer operationally feasible or cost-effective. The other reason is a well-founded fear that replacing them will turn into a debacle like Phoenix or others. Nevertheless, there have been ongoing attempts to improve software development and sustainment processes. For example, we have seen increasing adoption of iterative and incremental strategies to develop and sustain software systems through Agile approaches, DevOps methods, and other related practices. ### Louisiana Office of Motor Vehicles **2025** **** Louisiana’s governor orders a state of emergency over repeated failures of the 50-year-old Office of Motor Vehicles mainframe computer system. The state promises expedited acquisition of a new IT system, which might be available by early 2028. ### The goal is to deliver usable, dependable, and affordable software to end users in the shortest feasible time. DevOps strives to accomplish this continuously throughout the entire software life cycle. While Agile and DevOps have proved successful for many organizations, they also have their share of controversy and pushback. Provocative reports claim Agile projects have a failure rate of up to 65 percent, while others claim up to 90 percent of DevOps initiatives fail to meet organizational expectations. It is best to be wary of these claims while also acknowledging that successfully implementing Agile or DevOps methods takes consistent leadership, organizational discipline, patience, investment in training, and culture change. However, the same requirements have always been true when introducing any new software platform. Given the historic lack of organizational resolve to instill proven practices, it is not surprising that novel approaches for developing and sustaining ever more complex software systems, no matter how effective they may be, will also frequently fall short. ## Persisting in Foolish Errors The frustrating and perpetual question is why basic IT project-management and governance mistakes during software development and operations continue to occur so often, given the near-total societal reliance on reliable software and an extensively documented history of failures to learn from? Next to electrical infrastructure, with which IT is increasingly merging into a mutually codependent relationship, the failure of our computing systems is an existential threat to modern society. Frustratingly, the IT community stubbornly fails to learn from prior failures. IT project managers routinely claim that their project is somehow different or unique and, thus, lessons from previous failures are irrelevant. That is the excuse of the arrogant, though usually not the ignorant. In Phoenix’s case, for example, it was the government’s second payroll-system replacement attempt, the first effort ending in failure in 1995. Phoenix project managers ignored the well-documented reasons for the first failure because they claimed its lessons were not applicable, which did nothing to keep the managers from repeating them. 
As it’s been said, we learn more from failure than from success, but repeated failures are damn expensive. ### Jaguar Land Rover **2025** ****A cyberattack forced Jaguar Land Rover, Britain’s largest automaker, to shut down its global operations for over a month. An initial FAIR-MAM assessment, a cybersecurity-cost-model, estimates the loss for Jaguar Land Rover to be between $1.2 billion and $1.9 billion (£911 million and £1.4 billion), which has affected its 33,000 employees and some 200,000 employees of its suppliers. ### Not all software development failures are bad; some failures are even desired. When pushing the limits of developing new types of software products, technologies, or practices, as is happening with AI-related efforts, potential failure is an accepted possibility. With failure, experience increases, new insights are gained, fixes are made, constraints are better understood, and technological innovation and progress continue. However, most IT failures today are not related to pushing the innovative frontiers of the computing art, but the edges of the mundane. They do not represent Austrian economist Joseph Schumpeter’s “gales of creative destruction.” They’re more like gales of financial destruction. Just how many more enterprise resource planning (ERP) project failures are needed before success becomes routine? Such failures should be called IT blunders, as learning anything new from them is dubious at best. Was Phoenix a failure or a blunder? I argue strongly for the latter, but at the very least, Phoenix serves as a master class in IT project mismanagement. The question is whether the Canadian government learned from this experience any more than it did from 1995’s payroll-project fiasco? The government maintains it will learn, which might be true, given the Phoenix failure’s high political profile. But will Phoenix’s lessons extend to the thousands of outdated Canadian government IT systems needing replacement or modernization? Hopefully, but hope is not a methodology, and purposeful action will be necessary. ### The IT community has striven mightily for decades to make the incomprehensible routine. ### Repeatedly making the same mistakes and expecting a different result is not learning. It is a farcical absurdity. Paraphrasing Henry Petroski in his book _To Engineer Is Human: The Role of Failure in Successful Design_ (Vintage, 1992), we may have learned how to calculate the software failure due to risk, but we have not learned how to calculate to eliminate the failure of the mind. There are a plethora of examples of projects like Phoenix that failed in part due to bumbling management, yet it is extremely difficult to find software projects managed professionally that still failed. Finding examples of what could be termed “IT heroic failures” is like Diogenes seeking one honest man. The consequences of not learning from blunders will be much greater and more insidious as society grapples with the growing effects of artificial intelligence, or more accurately, “intelligent” algorithms embedded into software systems. Hints of what might happen if past lessons go unheeded are found in the spectacular early automated decision-making failure of Michigan’s MiDAS unemployment and Australia’s Centrelink “Robodebt” welfare systems. Both used questionable algorithms to identify deceptive payment claims without human oversight. 
State officials used MiDAS to accuse tens of thousands of Michiganders of unemployment fraud, while Centrelink officials falsely accused hundreds of thousands of Australians of being welfare cheats. Untold numbers of lives will never be the same because of what occurred. Government officials in Michigan and Australia placed far too much trust in those algorithms. They had to be dragged, kicking and screaming, to acknowledge that something was amiss, even after it was clearly demonstrated that the software was untrustworthy. Even then, officials tried to downplay the errors’ impact on people, then fought against paying compensation to those adversely affected by the errors. While such behavior is legally termed “maladministration,” administrative evil is closer to reality. ### Lidl Enterprise Resource Planning (ERP) **2017** The international supermarket chain Lidl decides to revert to its homegrown legacy merchandise-management system after three years of trying to make SAP’s €500 million enterprise resource planning (ERP) system work properly. ### If this behavior happens in government organizations, does anyone think profit-driven companies whose AI-driven systems go wrong are going to act any better? As AI becomes embedded in ever more IT systems—especially governmental systems and the growing digital public infrastructure, which we as individuals have no choice but to use—the opaqueness of how these systems make decisions will make it harder to challenge them. The European Union has given individuals a legal “right to explanation” when a purely algorithmic decision goes against them. It’s time for transparency and accountability regarding all automated systems to become a fundamental, global human right. What will it take to reduce IT blunders? Not much has worked with any consistency over the past 20 years. The financial incentives for building flawed software, the IT industry’s addiction to failure porn, and the lack of accountability for foolish management decisions are deeply entrenched in the IT community. Some argue it is time for software liability laws, while others contend that it is time for IT professionals to be licensed like all other professionals. Neither is likely to happen anytime soon. ### Boeing 737 Max **2018** Boeing adds the poorly designed and poorly described Maneuvering Characteristics Augmentation System (MCAS) to its new 737 Max model, creating safety problems that lead to two fatal crashes, killing 346 passengers and crew, and to the grounding of the fleet for some 20 months. The total cost to Boeing is estimated at $14 billion in direct costs and $60 billion in indirect costs. ### So, we are left with only a professional and personal obligation to reemphasize the obvious: Ask what you do know, what you should know, and how big the gap is between them before embarking on creating an IT system. If no one else has ever successfully built your system with the schedule, budget, and functionality you asked for, please explain why your organization thinks it can. Software is inherently fragile; building complex, secure, and resilient software systems is difficult, detailed, and time-consuming. Small errors have outsize effects, each with an almost infinite number of ways they can manifest, from causing a minor functional error to a system outage to allowing a cybersecurity threat to penetrate the system. The more complex and interconnected the system, the more opportunities for errors and their exploitation.
A nice start would be for senior management who control the purse strings to finally treat software and systems development, operations, and sustainment efforts with the respect they deserve. This not only means providing the personnel, financial resources, and leadership support and commitment, but also the professional and personal accountability they demand. ### F-35 Joint Strike Fighter **2025** Software and hardware issues with the F-35 Block 4 upgrade continue unabated. The Block 4 upgrade program which started in 2018, and is intended to increase the lethality of the JSF aircraft has slipped to 2031 at earliest from 2026, with cost rising from $10.5 b to a minimum of $16.5b. It will take years more to rollout the capability to the F-35 fleet. ### It is well known that honesty, skepticism, and ethics are essential to achieving project success, yet they are often absent. Only senior management can demand they exist. For instance, honesty begins with the forthright accounting of the myriad of risks involved in any IT endeavor, not their rationalization. It is a common “secret” that it is far easier to get funding to fix a troubled software development effort than to ask for what is required up front to address the risks involved. Vendor puffery may also be legal, but that means the IT customer needs a healthy skepticism of the typically too-good-to-be-true promises vendors make. Once the contract is signed, it is too late. Furthermore, computing’s malleability, complexity, speed, low cost, and ability to reproduce and store information combine to create ethical situations that require deep reflection about computing’s consequences on individuals and society. Alas, ethical considerations have routinely lagged when technological progress and profits are to be made. This practice must change, especially as AI is routinely injected into automated systems. In the AI community, there has been a movement toward the idea of human-centered AI, meaning AI systems that prioritize human needs, values, and well-being. This means trying to anticipate where and when AI can go wrong, move to eliminate these situations, and build in ways to mitigate the effects if they do happen. This concept requires application to every IT system’s effort, not just AI. ### Given the historic lack of organizational resolve to instill proven practices...novel approaches for developing and sustaining ever more complex software systems...will also frequently fall short. Finally, project cost-benefit justifications of software developments rarely consider the financial and emotional distress placed on end users of IT systems when something goes wrong. These include the long-term failure after-effects. If these costs had to be taken fully into account, such as in the cases of Phoenix, MiDAS, and Centrelink, perhaps there could be more realism in what is required managerially, financially, technologically, and experientially to create a successful software system. It may be a forlorn request, but surely it is time the IT community stops repeatedly making the same ridiculous mistakes it has made since at least 1968, when the term “software crisis” was coined. Make new ones, damn it. As Roman orator Cicero said in __Philippic 12__ , “Anyone can make a mistake, but only an idiot persists in his error.” _Special thanks to Steve Andriole, Hal Berghel, Matt Eisler, John L. King, Roger Van Scoy, and Lee Vinsel for their invaluable critiques and insights._
spectrum.ieee.org
November 24, 2025 at 9:53 AM
How to Spot a Counterfeit Lithium-Ion Battery
As an auditor of battery manufacturers around the world, University of Maryland mechanical engineer Michael Pecht frequently finds himself touring spotless production floors. They’re akin to “the cleanest hospital that you could imagine–it’s semiconductor-type cleanliness,” he says. But he’s also seen the opposite, and plenty of it. Pecht estimates he’s audited dozens of battery factories where he found employees watering plants next to a production line or smoking cigarettes where particulates and contaminants can get into battery components and compromise their performance and safety. Unfortunately, those kinds of scenes are just the tip of the iceberg. Pecht says he’s seen poorly assembled lithium-ion cells with little or no safety features and, worse, outright counterfeits. These phonies may be home-built or factory-built and masquerade as those from well-known global brands. They’ve been found in scooters, vape pens, e-bikes, and other devices, and have caused fires and explosions with lethal consequences. The prevalence of fakes is on the rise, causing growing concern in the global battery market. In fact, after a rash of fires in New York City over the past few years caused by faulty batteries, including many powering e-bikes used by the city’s delivery cyclists, New York banned the sale of uncertified batteries. The city is currently setting up what will be its first e-bike battery-swapping stations as an alternative to home charging, in an effort to coax delivery riders to swap their depleted batteries for a fresh one rather than charging at home, where a bad battery could be a fire hazard. Compared with certified batteries, whose public safety risks may be overblown, the dangers of counterfeit batteries may be underrated. “It is probably an order of magnitude worse with these counterfeits,” Pecht says. ## Counterfeit Lithium-Ion Battery Risks There are a few ways to build a counterfeit battery. Scammers often relabel old or scrap batteries built by legitimate manufacturers like LG, Panasonic, or Samsung and sell them as new. “It’s so simple to make a new label and put it on,” Pecht says. To fetch a higher price, they sometimes rebadge real batteries with labels that claim more capability than the cells actually have. But the most prevalent fake batteries, Pecht says, are homemade creations. Counterfeiters can do this in make-shift environments because building a lithium-ion cell is fairly straightforward. With an anode, cathode, separator, electrolyte, and other electrical**** elements, even fly-by-night battery makers can get the cells to work. What they don’t do is make them as safe and reliable as tested, certified batteries. Counterfeiters skimp on safety mechanisms that prevent issues that lead to fire. For example, certified batteries are built to stop thermal runaway, the chain reaction that can start because of an electrical short or mechanical damage to the battery and lead to the temperature increasing out of control. Judy Jeevarajan, the vice president and executive director of Houston-based Electrochemical Safety Research Institute, which is part of Underwriters Laboratories (UL) Research Institutes, led a study of fake batteries in 2023. In the study, Jeevarajan and her colleagues gathered both real and fake lithium batteries from three manufacturers (whose names were withheld), and pushed them to their limits to demonstrate the differences. One test, called a destructive physical analysis, involved dismantling small cylindrical batteries. 
This immediately revealed differences in quality. The legitimate, higher-quality examples contained thick plastic insulators at the top and bottom of the cylinders, as well as axially and radially placed tape to hold the “jelly roll” core of the battery. But illegitimate examples had thinner insulators or none at all, and little or no safety tape. “This is a major concern from a safety perspective as the original products are made with certain features to reduce the risk associated with the high energy density that li-ion cells offer,” Jeevarajan says. Jeevarajan’s team also subjected batteries to overcharging and to electrical shorts. A legitimately tested and certified battery, like the iconic 18650 lithium-ion cylinder, counters these threats with internal safety features such as a positive temperature coefficient (PTC) element, in which a material gains electrical resistance as it gets hotter, and a current interrupt device (CID), which automatically disconnects the battery’s electrical circuit if the internal pressure rises too high. The legit lithium battery in Jeevarajan’s test had the best insulators and internal construction. It also had a high-quality CID that prevented overcharging, reducing the risk of a fire. Neither of the other cells had one. Despite the gross lack of safety parts in the batteries, great care had clearly gone into making sure the counterfeit labels had the exact same shade and markings as the original manufacturer’s, Jeevarajan says. ## How to Spot a Counterfeit Battery Because counterfeiters are so skilled at duplicating manufacturers’ labels, it can be hard to know for sure whether the lithium batteries that come with a consumer electronics device, or the replacements that can be purchased on sites like eBay or Amazon, are in fact the genuine article. It’s not just individual consumers who struggle with this. Pecht says he knows of instances where device makers have bought what they thought were LG or Samsung batteries for their machines but failed to verify that the batteries were the real thing. “One cannot tell from visually inspecting it,” Jeevarajan says. But companies don’t have to dismantle the cells to do their due diligence. “The lack of safety devices internal to the cell can be determined by carrying out tests that verify their presence,” she says. A simple way, Pecht says, is to have a comparison standard on hand – a known, legitimate battery whose labeling, performance, or other characteristics can be compared to a questionable cell (a bare-bones version of that comparison is sketched at the end of this article). His team will even go as far as doing a CT scan to see inside a battery and find out whether it is built correctly. Of course, most consumers don’t have the equipment on hand to test the veracity of all the rechargeable batteries in their homes. To shop smart, then, Pecht advises people to think about what kind of batteries and devices they’re using. The units in our smartphones and the large, high-capacity batteries found in electric vehicles aren’t the problem; they are subject to strict quality control and very unlikely to be fake. By far, he says, the more likely places to find counterfeits are the cylindrical batteries found in small, inexpensive devices. “They are mostly found as energy and power sources for portable applications that can vary from your cameras, camcorders, cell phones, power banks, power tools, e-bikes and e-scooters,” adds Jeevarajan. “For most of these products, they are sold with part numbers that show an equivalency to a manufacturer’s part number.
Electric vehicles are a very high-tech market and they would not accept low-quality cells or batteries of questionable origin.” The trouble with battling the counterfeit battery scourge, Pecht says, is that new rules tend to focus on consumer behavior, such as trying to prevent people from improperly storing or charging e-bike batteries in their apartments. Safe handling and charging are indeed crucial, but what’s even more important is trying to keep counterfeits out of the supply chain. “They want to blame the user, like you overcharged it or you did this wrong,” he says. “But in my view, it’s the cells themselves” that are the problem.
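As referenced above, one screening step Pecht describes is comparing a suspect cell against a known-good reference. The sketch below shows the simplest version of that idea in Python, flagging a cell whose measured capacity falls well short of its label. The numbers and the 15 percent threshold are hypothetical illustrations, not a certified test procedure.

```python
# Toy comparison check: flag a cell whose measured capacity is far below its
# labeled rating. The threshold and figures are hypothetical; real verification
# requires proper discharge testing and, as the article notes, sometimes a CT scan.
def looks_suspect(labeled_mah: float, measured_mah: float, tolerance: float = 0.15) -> bool:
    """Return True if the measured capacity is more than `tolerance` below the label."""
    return measured_mah < labeled_mah * (1.0 - tolerance)

# Example: an 18650 cell labeled 3,500 mAh that delivers only 1,800 mAh under test.
print(looks_suspect(3500, 1800))  # True: the capacity claim does not hold up
print(looks_suspect(3500, 3400))  # False: within tolerance of the label
```

A check like this only catches cells that underdeliver on their label; as Jeevarajan notes, confirming that internal safety devices are actually present still requires dedicated testing.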
spectrum.ieee.org
November 22, 2025 at 6:14 PM
Tips for How to Think Like an Entrepreneur
__This article is part of our exclusive__ ____career advice__ __series in partnership with the__ ____IEEE Technology and Engineering Management Society__ __.__ Let’s say you’ve been in your role for a few years now. You know your systems inside and out. You’ve solved tricky problems, led small teams, and delivered results on time. But lately, between status meetings and routine design reviews, you’ve caught yourself thinking: There must be a better way to do this task. Someone should make this better. Then you spend some time imagining. Maybe it’s a new tool that would save weeks of engineering time. Or a better process. Or a new product feature. You sketch it out after work hours, maybe even build a quick prototype. Then you think: I could make this product myself. The shift from “someone should” to “I will” is the start of entrepreneurial thinking. And you don’t have to quit your job or have a billionaire’s appetite for risk to begin. ## From technical proficiency to entrepreneurial thinking As an engineer, you already have the ability to analyze complex problems, design viable solutions, and follow them through to a working prototype. Your technical skills came from a structured training background and hands-on projects. Your ability to lead, persuade, and navigate uncertainty often comes from experience, especially when you step outside your usual responsibilities. Some of the most game-changing products didn’t begin as formal projects. They started as bootleg efforts—side projects developed quietly by engineers who saw an opportunity. Post-it Notes and Gmail both began that way. Many companies now encourage such efforts; some even allow their engineers to devote 15 to 20 percent of their workweek to pursuing their own ideas. ## Closing the intention-action gap Ideas can be easy. Execution is harder. Nearly every engineer has a colleague with a clever idea that never got past the whiteboard. The difference between wanting to act and actually taking action—known as the __intention-action gap__—is where entrepreneurship lives or dies. Successful innovators build the discipline to cross the gap—one small, concrete step at a time. ## Building your innovative edge You don’t need to be born creative to be entrepreneurial. Here are ways to reprogram your mindset. * **Challenge the default.** Engineers are taught to follow proven processes, but innovation often starts by asking, “What if we did it differently?” * **Balance the team.** Innovative companies need a diverse mix of creative thinkers to generate ideas, entrepreneurs to drive execution, and managers to scale efficiently. * **Know your lane.** Whether you’re a visionary, a builder, or an optimizer, understanding your strengths can help you find the right collaborators. And, yes, timing matters. Amazon might have stayed just an online bookstore without the rise of e-commerce. The right idea at the wrong time is likely to struggle. Start with current trends, for instance, AI offers extremely low entry barriers to get started, and everything is being built around it these days. ## An engineering attitude Entrepreneurial thinking isn’t only for startup founders. It can mean championing a new process at your company, building an internal tool that changes how your team works, or bringing a product idea from sketch to launch. The engineering mindset—systematic, detail-oriented, problem-solving—is an asset that can power not just products but entire companies. 
If you’ve ever thought: There’s got to be a better way—and if you felt the itch to make it real—you might be closer to being an entrepreneur than you think. Don’t wait any longer; the best time to start is tomorrow.
spectrum.ieee.org
November 21, 2025 at 8:28 PM
Video Friday: Watch Robots Throw, Catch, and Hit a Baseball
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at _IEEE Spectrum_ robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ##### SOSV Robotics Matchup: 1–5 December 2025, ONLINE ##### ICRA 2026: 1–5 June 2026, VIENNA Enjoy today’s videos! > _Researchers at the RAI Institute have built a low-impedance platform to study dynamic robot manipulation. In this demo, robots play a game of catch and participate in batting practice, both with each other and with skilled humans. The robots are capable of throwing 70mph [112 kph], approaching the speed of a strong high school pitcher. The robots can catch and bat at short distances (23 feet [7 m]) requiring quick reaction times to catch balls thrown at up to 41 mph [66kph] and hit balls pitched at up to 30 mph [48kph]._ That’s a nice touch with the custom “RAI” baseball gloves, but what I really want to know is how long a pair of robots can keep themselves entertained. [RAI Institute ] This week’s best bacronym winner is GIRAF: Greatly Increased Reach AnyMAL Function. And if that arm looks like magic, that’s because it is, although with some careful pausing of the video you’ll be able to see how it works. [Stanford BDML ] > _DARPA concluded the second year of the DARPA Triage Challenge on October 4, awarding top marks to DART and MSAI in Systems and Data competitions, respectively. The three-year prize competition aims to revolutionize medical triage in mass casualty incidents where medical resources are limited._ [DARPA ] > _We propose a robot agnostic reward function that balances the achievement of a desired end pose with impact minimization and the protection of critical robot parts during reinforcement learning. To make the policy robust to a broad range of initial falling conditions and to enable the specification of an arbitrary and unseen end pose at inference time, we introduce a simulation-based sampling strategy of initial and end poses. Through simulated and real-world experiments, our work demonstrates that even bipedal robots can perform controlled, soft falls._ [Moritz Baecher ] Oh look, more humanoid acrobatics. My prediction: once humanoid companies run out of mocapped dance moves, we’ll start seeing some freaky stuff that leverages the degrees of freedom that robots have and humans do not. You heard it here first, folks. [MagicLab ] I challenge the next company that makes a “lights-out” video to just cut to just a totally black screen with a little “Successful Picks” counter in the corner that just goes up and up and up. [Brightpick ] Thanks, Gilmarie! The terrain stuff is cool and all but can we just talk about the trailer instead? [LimX Dynamics ] Presumably very picky German birblets are getting custom nesting boxes manufactured with excessively high precision by robots. [TUM ] All those UBTECH Walker S2 robots weren’t fake, it turns out. [UBTECH ] This is more automation than what we’d really be thinking of as robotics at this point, but I could still watch it all day. [Motoman ] > _Brad Porter (Cobot) and Alfred Lin (Sequoia Capital) discuss the future of robotics, AI, and automation at the Human[X] Conference, moderated by CNBC’s Kate Rooney. 
They explore why collaborative robots are accelerating now, how AI is transforming physical systems, the role of humanoids, labor market shifts, and the investment trends shaping the next decade of robotics._ [Cobot ] > _Humanoid robots have long captured our imagination. Interest has skyrocketed along with the perception that robots are getting closer to taking on a wide range of labor-intensive tasks. In this discussion, we reflect on what we’ve learned by observing factory floors, and why we’ve grown convinced that chasing generalization in manipulation—both in hardware and behavior—isn’t just interesting, but necessary. We’ll discuss AI research threads we’re exploring at Boston Dynamics to push this mission forward, and highlight opportunities our field should collectively invest more in to turn the humanoid vision, and the reinvention of manufacturing, into a practical, economically viable product._ [Boston Dynamics ] > _On November 12, 2025, Tom Williams presented “Degrees of Freedom: On Robotics and Social Justice” as part of the Michigan Robotics Seminar Series._ [Michigan Robotics ] Ask the OSRF Board of Directors anything! Or really, listen to other people ask them anything. [ROSCon ]
spectrum.ieee.org
November 21, 2025 at 8:28 PM
Could Terahertz Radar in Cars Save Lives?
A few years ago, Matthew Carey lost a friend in a freak car accident, after the friend’s car struck some small debris on a highway and spun out of control. Ordinarily, the car’s sensors would have detected the debris in plenty of time, but it was operating under conditions that render all of today’s car-mounted sensors useless: fog and bright early-morning sunshine. Radar can’t see small objects well, lidar is limited by fog, and cameras are blinded by glare. Carey and his cofounders decided to create a sensor that could have done the job—a terahertz imager. Historically, terahertz frequencies have been the least utilized portion of the electromagnetic spectrum. People have struggled to send them even short distances through the air. But thanks to some intense engineering and improvements in silicon transistor frequency, beaming terahertz radiation over hundreds of meters is now possible. Teradar, the Boston-based startup Carey cofounded, has managed to make a sensor that can meet the auto industry’s 300-meter distance requirements. The company came out of stealth last week with chips it says can deliver 20 times the resolution of automotive radar while seeing through all kinds of weather and costing less than lidar. The tech provides “a superset of lidar and radar combined,” Carey says. The technology is in tests with carmakers for a slot in vehicles to be produced in 2028, he says. It would be the first such sensor to make it to market. “Every time you unlock a chunk of the electromagnetic spectrum, you unlock a brand-new way to view the world,” Carey says. ## Terahertz imaging for cars Teradar’s system is a new architecture, says Carey, that has elements of traditional radar and a camera. The terahertz transmitters are arrays of elements that generate electronically steerable beams, while the sensors are like imaging chips in a camera. The beams scan the area, and the sensor measures the time it takes for the signals to return as well as where they return from (a short worked example of this time-of-flight arithmetic appears at the end of this article). Teradar’s system can steer beams of terahertz radiation with no moving parts. From these signals, the system generates a point cloud, similar to what a lidar produces. But unlike lidar, it does not use any moving parts. Those moving parts add significantly to the cost of lidar and subject it to wear and tear from the road. “It’s a sensor that [has] the simplicity of radar and the resolution of lidar,” says Carey. Whether it replaces either technology or becomes an add-on is up to carmakers, he adds. The company is currently working with five of them. ## Terahertz transistors and circuits That Teradar has gotten this far is partly down to progress in silicon transistor technology—in particular, the steady increase in the maximum frequency of devices that modern foundries can supply, says Carey. Ruonan Han, a professor of electrical engineering at MIT who specializes in terahertz electronics, agrees. These improvements have led to boosts in the efficiency of terahertz circuits, their output power, and the sensitivity of receivers. Additionally, chip packaging, which is key to efficiently transmitting the radiation, has improved. Combined with research into the design of circuits and systems, engineers can now apply terahertz radiation in a variety of applications, including autonomous driving and safety. Nevertheless, “it’s pretty challenging to deliver the performance needed for real and safe self-driving—especially the distance,” says Han.
His lab at MIT has worked on terahertz radar and other circuits for several years. At the moment it’s focused on developing lightweight, low-power terahertz sensors for robots and drones. His lab has also spun out an imaging startup, Cambridge Terahertz, targeted at using the frequency band’s advantages in security scanners, where it can see through clothes to spot hidden weapons. Teradar, too, will explore applications outside the automotive sector. Carey points out that while terahertz frequencies do not penetrate skin, melanomas show up as a different color at those wavelengths compared to normal skin. But for now Carey’s company is focused on cars. And in that area, there’s one question I had to ask: Could Teradar’s tech have saved Kit Kat, the feline regrettably run down by a Waymo self-driving car in San Francisco last month? “It probably would have saved the cat,” says Carey.
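As noted earlier in the article, the sensor times how long a signal takes to bounce off an object and return, and that ranging principle reduces to simple arithmetic: distance is the speed of light multiplied by the round-trip time, divided by two. The Python sketch below illustrates that calculation with hypothetical numbers; it is not Teradar’s actual signal-processing pipeline.

```python
# Round-trip time-of-flight ranging: an illustration of the principle described
# in the article, not Teradar's signal processing. All numbers are hypothetical.
C = 299_792_458.0  # speed of light in meters per second

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to a reflector, given the round-trip echo delay in seconds."""
    return C * t_seconds / 2.0

# A target at the 300-meter automotive requirement returns an echo in about 2 microseconds.
round_trip = 2 * 300.0 / C
print(f"round trip for 300 m: {round_trip * 1e6:.2f} microseconds")
print(f"recovered range: {range_from_round_trip(round_trip):.1f} m")
```

The direction each echo returns from, which the article also mentions, comes from the electronically steered beam and the imaging-style receiver array rather than from this timing calculation.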
spectrum.ieee.org
November 20, 2025 at 7:46 PM
Narrowing focus can increase productivity
__This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Taro and delivered to your inbox for free!__ The most productive engineer I worked with at Meta joined the company as a staff engineer. This is already a relatively senior position, but he then proceeded to earn two promotions within three years, becoming one of the most senior engineers in the entire company. Interestingly, what made him so productive was also frequently a source of annoyance for many of his colleagues. Productivity comes from prioritization, and that meant he often said no to ideas and opportunities that he didn’t think were important. He frequently rejected projects that didn’t align with his priorities. He was laser-focused every day on the top project that the organization needed to deliver. He would skip status meetings, tech debt initiatives, and team bonding events. When he was in focus mode, he was difficult to get in touch with. Comparing my work to his relentless focus, I realized that most of what I spent my time on didn’t actually matter. I thought that having a to-do list of 10 items meant I was being productive. He ended up accomplishing a lot more than me with a list of two items, even if that meant he may have occasionally been a painful collaborator. This is what the vast majority of engineers misunderstand about productivity. **The biggest productivity “hack” is to simply work on the right things**. Figure out what’s important and strip away everything else from your day so that you can make methodical progress on that. In many workplaces, this is surprisingly difficult, and you’ll find your calendar filled with team lunches, maintenance requests, and leadership reviews. Do an audit of your day and examine how you spend your time. As an engineer, if the majority of your day is spent in emails and coordinating across teams, you’re clearly not being as productive as you could be. My colleague got promoted so quickly because of his prodigious output. That output comes from whittling down the number of priorities rather than expanding them. It’s far better to deliver fully on the key priority, rather than getting pulled in every direction and subsequently failing to deliver anything of value. —Rahul ## This Professor’s Open-Source Robots Make STEM More Inclusive Carlotta Berry is an electrical and computer engineering professor focused on bringing low-cost mobile robots to the public so that anyone can learn about robotics. She demonstrates open-source robots of her own design at schools, libraries, museums, and other community venues. Learn how her work earned her an Undergraduate Teaching Award from the IEEE Robotics and Automation Society. Read more here. ## Scientists Need a Positive Vision for AI We should not resign ourselves to the story of AI making experiences worse, say Bruce Schneier and Nathan E. Sanders at Harvard University. Rather, scientists and engineers should recognize the ways in which AI can be used for good. They suggest reforming AI under ethical guidelines, documenting negative applications of AI, using AI responsibly, and preparing institutions for the impacts of AI. Read more here. ## Should You Use AI to Apply for Jobs? Many job seekers are now using AI during the application process.
This trend has led to a deluge of AI-generated resumes and cover letters many recruiters now must sift through, but when used thoughtfully, AI can help applicants find a match in an increasingly difficult job market. __The Chronicle of Higher Education__ shares some dos and don’ts of using AI to apply for jobs. Read more here.
spectrum.ieee.org
November 20, 2025 at 8:19 AM
This IBM Engineer Is Pushing Quantum Computing Out of the Lab
Genya Crossman is a lifelong learner passionate about helping people understand and use quantum computing to solve the world’s most complex problems. So, she is excited that quantum computing is in the spotlight this year. UNESCO declared 2025 the International Year of Quantum Science and Technology. It’s also the 100th anniversary of physicist Werner Heisenberg’s “On the Quantum-Theoretical Reinterpretation of Kinematic and Mechanical Relationships,” the first published paper on quantum mechanics. Crossman, an IEEE member, is a quantum strategy consultant at IBM in Germany. As a full-time staff member, she coordinates and manages five working groups focused on developing quantum-based solutions for near-term problems in health care and life sciences, materials science, high-energy physics, optimization, and sustainability. ### Genya Crossman **Employer** IBM in Germany **Job title** Quantum strategy consultant **Member grade** Member **Alma maters** University of Massachusetts, Amherst; Delft University of Technology and the Technische Universität Berlin ### She attended the sixth annual IEEE Quantum Week, held from 31 August to 5 September in Albuquerque. This year’s event, also known as the IEEE International Conference on Quantum Computing and Engineering, marked the first time that the IBM- and community-created working groups’ experts and collaborators publicly presented their research together. “We got great feedback and information about identifying common features across groups,” Crossman says. “The audience got to hear real-life examples to understand how quantum computing applies to different scenarios and how it works.” Crossman understands the importance of sharing research more than most because she works at the intersection of quantum computing research and practical application. The quantum field might seem intimidating, she says, but you don’t need to understand it to use a quantum computer. “Anyone can use one,” she says. “And if you know programming languages like Python, you can code a quantum computer.” (A minimal example appears at the end of this article.) ## The basics of quantum computing IBM has a long-standing history with quantum computing. IEEE Member Charles H. Bennett, an IBM Fellow, is called the father of quantum information theory because he wrote the first notes on the subject in 1970. In May 1981, IBM and MIT held the first Physics of Computation Conference. “Quantum computing is often used to describe all quantum work,” including quantum science and quantum technology, Crossman says. The field involves a variety of technologies, including sensors, metrology, and communications. Classical computers use bits, and quantum computers use quantum bits, called __qubits__. Qubits can exist in more than one state simultaneously (both one and zero), known as the ability to exist in “superposition.” Computers using qubits can store and process highly complex information and data faster and more efficiently, possibly using significantly less energy than classical computers. With so much power and processing ability, quantum computers are complex and still not fully understood. Engineers are working to make quantum computing more accessible to everyone, so more people can understand how to work with the technology, Crossman says. ## Inspired by her father and IEEE Growing up on the North Shore of Boston, Crossman spent many summer mornings poring over the latest issues of __IEEE Spectrum__ and __Scientific American__ with her older sister.
Her father, Antony Crossman, is an electrical and electronics engineer and an IEEE life member. He often discussed science and engineering concepts with his daughters. Looking back, Crossman says, she sees reading __Spectrum__ as her first introduction to how research is presented. “I loved reading about new research and what could be done with it,” she says. “It helped point me toward engineering as a career.” When she enrolled at McGill University in Montreal in 2011 to pursue a bachelor’s degree in physics, her father gifted her an IEEE student membership. “Montreal is a beautiful, creative city that’s also relatively easy to travel to from Boston within a day,” she says. “Plus, the school was known for its physics program.” After two years, she dropped out and moved to Paris, where she worked in a café. A year later, in 2014, she enrolled in the physics degree program at the University of Massachusetts, Amherst. In the summer of 2016, Crossman’s undergraduate advisor, Professor Stéphane Willocq, recommended her for a research project in the Microsystems Technology Laboratory within MIT’s electrical engineering department. “Quantum computing is often used to describe all quantum work, including quantum computing, quantum science, and quantum technology.” “I had been conducting research” with Willocq, she says, “and he knew I was considering going into electrical engineering, so he suggested I apply for this summer research opportunity.” As a research assistant, she examined carrier transport in transistors and diodes made with two-dimensional materials. After graduating with a bachelor’s degree in physics in 2017, she initially planned to go straight to graduate school, she says, but she wasn’t sure what she wanted to focus on. A friend and former classmate from an undergraduate quantum mechanics course referred her to a quantum computing job opening at Rigetti Computing in Berkeley, Calif. She was hired as a junior quantum engineer. She started by creating the predecessor to, and then the schema for, the company’s first device database. She then designed, modeled, and simulated quantum devices such as circuits for superconducting quantum computers, including some used in the first deployed quantum systems. She also managed the Berkeley fabrication facility. In that role, she learned a great deal about electrical and microwave engineering, she says, and that introduced her to computational modeling. It led her to better understand practical applications of quantum computing, she says. Her newfound knowledge made her “want to learn why and how people use quantum technology,” she says, which is how she became interested in the end users’ needs. To further her career, she left Rigetti in 2020 and moved to Germany to pursue a dual master’s degree in computational and applied mathematics through a joint program between the Delft University of Technology and the Technische Universität Berlin. When she first began her master’s program, IBM recruiters offered her two jobs, she says, but she declined because she wanted to finish her degree. During her studies, she worked with her mentor Eliska Greplova, an associate professor at TU Delft, who invited Crossman to join her quantum matter and AI research group. Crossman learned about condensed matter, machine learning, and quantum learning, and she participated in discussions about the technologies’ implications. 
Despite being a great experience, it ultimately led her to decide against pursuing a Ph.D., she says, because she enjoyed working in the industry and that’s where she wanted to be in the long run. She had planned to focus her master’s thesis on quantum computing from the end user’s perspective, but she switched to writing about integrating topological properties onto superconducting hardware. She graduated in 2022. In January 2023, she accepted a full-time position at IBM Research in Germany as a quantum strategy consultant, supporting enterprise clients. Since then, her job has changed to technical engagement lead, overseeing the five quantum working groups. She is also part of the team that oversees the company’s responsible computing initiative. IBM defines responsible quantum computing as the type that’s “aware of its effects.” The company says it wants to ensure it develops and uses quantum computing in line with its principles. Established in 2022 by IBM and researchers from other organizations, the working groups tackle near-term problems and look for quantum and interdisciplinary solutions in their area of focus, Crossman says. The groups are community-driven, with researchers from both quantum and nonquantum backgrounds collaborating to identify key problems, decide what to pursue, and pool their expertise to fill gaps, allowing them to look at problems holistically, she says. The groups regularly publish papers and make them publicly available. Crossman’s job is to support the researchers, locate resources, help them use the IBM ecosystem, and identify experts to answer niche questions. Her other focus is on the end users, the people who will employ the research emerging from the working groups. She says she seeks to understand their needs and how to best support them. “I really enjoy quantum engineering and working with everyone because it’s such an interdisciplinary field,” she says. “It combines problem-solving with creativity. It’s really at an exciting stage of development.” With so much momentum, Crossman says, she is eager to see where quantum technologies go next. “When I started learning about quantum mechanics in undergrad, there wasn’t much information out there,” she says. “The beginning of my career was when the quantum computing industry was just getting started. I’m really grateful for that.” ## Staying current on research Being an IEEE member allows Crossman to stay updated on research across multiple fields, she says, and that’s important because most of them “are becoming much more interdisciplinary, especially quantum computing.” She says she is looking forward to collaborating more with IEEE members working on quantum computing. “I’ve always found IEEE useful,” she says. “I can learn about new research in my and other fields, and I really enjoyed attending this year’s Quantum Week.”
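The profile notes that a qubit can sit in a superposition of one and zero, and that anyone who knows Python can program a quantum computer. As a minimal illustration of both points, the sketch below (assuming IBM’s open-source Qiskit SDK is installed; it is not drawn from Crossman’s own work) puts a single qubit into an equal superposition and reports the ideal measurement probabilities.

```python
# Minimal superposition example; assumes the open-source Qiskit SDK is installed.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(1)  # one qubit, starting in state |0>
qc.h(0)                 # a Hadamard gate puts it into an equal superposition

# Compute the ideal measurement probabilities of the resulting state.
probs = Statevector.from_instruction(qc).probabilities_dict()
print(probs)  # roughly {'0': 0.5, '1': 0.5}: "both one and zero" until measured
```

Running the same circuit on real IBM hardware rather than computing the statevector requires an IBM Quantum account and the appropriate runtime packages, but the circuit-building code stays the same.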
spectrum.ieee.org
November 20, 2025 at 8:19 AM