Yvan Dutil
@yvandutil.mstdn.science.ap.brid.gy
12 followers 1 following 1.2K posts
Astrophysicist; energy, sustainable development, radiation protection, space systems, voting systems. [bridged from https://mstdn.science/@YvanDutil on the fediverse by https://fed.brid.gy/ ]
yvandutil.mstdn.science.ap.brid.gy
**Cool and creative** Building a superradiant neutrino laser would be a significant challenge. (Courtesy: iStock/Vitacops)

Radioactive atoms in a Bose–Einstein condensate (BEC) could form a “superradiant neutrino laser” in which the atomic nuclei undergo accelerated beta decay. The hypothetical laser has been proposed by two US-based researchers, who say that it could be built and tested. While such a neutrino laser has no obvious immediate applications, further developments could potentially assist in the search for background neutrinos from the Big Bang – an important goal of neutrino physicists.

Neutrinos – the ghostly particles produced in beta decay – are notoriously difficult to detect or manipulate because of the weakness of their interaction with matter. They cannot be used to produce a conventional laser because they would pass straight through mirrors unimpeded. More fundamentally, neutrinos are fermions rather than bosons such as photons. This prevents neutrinos from forming a two-level system with a population inversion, as only one neutrino can occupy each quantum state in a system.

However, another quantum phenomenon called superradiance can also increase the intensity and coherence of emitted radiation. This occurs when the emitters are sufficiently close together to become indistinguishable. The emission then comes not from any single entity but from the collective ensemble. As it does not require the emitted particles to be quantum degenerate, superradiance is not theoretically forbidden for fermions. “There are devices that use superradiance to make light sources, and people call them superradiant lasers – although that’s actually a misnomer,” explains neutrino physicist Benjamin Jones of the University of Texas at Arlington and a visiting professor at the University of Manchester.
“There’s no stimulated emission.”

In their new work, Jones and colleague Joseph Formaggio of the Massachusetts Institute of Technology propose that, in a BEC of radioactive atoms, superradiance could enhance the neutrino emission rate and therefore speed up beta decay, with an initial burst before the expected exponential decay commences. “That has not been seen for nuclear systems so far – only for electronic ones,” says Formaggio. Rubidium was used to produce the first ever condensate in 1995 by Carl Wieman and Eric Cornell of the University of Colorado Boulder, and conveniently, one of its isotopes decays by beta emission with a half-life of 86 days.

### Radioactive vapour

The presence of additional hyperfine states would make direct laser cooling of rubidium-83 more challenging than for the rubidium-87 isotope used by Wieman and Cornell, but not significantly more so than the condensation of rubidium-85, which has also been achieved. Alternatively, the researchers propose that a dual condensate could be created in which rubidium-83 is cooled sympathetically alongside rubidium-87. The bigger challenge, says Jones, is the Bose–Einstein condensation of a radioactive atom, which has yet to be achieved: “It’s difficult to handle in a vacuum system,” he explains. “You have to be careful to make sure you don’t contaminate your laboratory with radioactive vapour.”

If such a condensate were produced, the researchers predict that superradiance would increase with the size of the BEC. In a BEC of 10⁶ atoms, for example, more than half the atoms would decay within three minutes. The researchers now hope to test this prediction. “This is one of those experiments that does not require a billion dollars to fund,” says Formaggio. “It is done in university laboratories.
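For a sense of scale, ordinary exponential decay with the quoted 86-day half-life would barely touch the sample in three minutes. A quick back-of-the-envelope check, using only the numbers quoted in the article:

```python
import math

# Ordinary (non-superradiant) exponential decay of rubidium-83,
# using the 86-day half-life quoted in the article.
HALF_LIFE_S = 86 * 24 * 3600              # 86 days, in seconds
DECAY_CONST = math.log(2) / HALF_LIFE_S   # decay constant lambda

t = 3 * 60                                # the three-minute window of the prediction
fraction_decayed = 1 - math.exp(-DECAY_CONST * t)

# Without collective enhancement, only a tiny fraction decays in three
# minutes -- compared with the predicted superradiant burst, in which
# more than half of a 10^6-atom condensate decays in the same window.
print(f"ordinary decay in 3 min: {fraction_decayed:.2e} of the sample")
```

The contrast (a few parts in 10⁵ versus more than half) is what makes the predicted burst such a stark experimental signature.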
It’s a hard experiment but it’s not out of reach, and I’d love to see it done and be proven right or wrong.”

If the prediction were proved correct, the researchers suggest it could eventually lead towards a benchtop neutrino source. As the same physics applies to neutrino capture, this could theoretically assist the detection of neutrinos that decoupled from the hot plasma of the universe just seconds after the Big Bang – hundreds of thousands of years before the photons of the cosmic microwave background. The researchers emphasize, however, that this would not currently be feasible.

### Sound proposal

Neutrino physicist Patrick Huber of Virginia Tech is impressed by the work. “I think for a first, theoretical study of the problem this is very good,” he says. “The quantum mechanics seems to be sound, so the question is: if you try to build an experiment, what kind of real-world obstacles are you going to encounter?” He predicts that, if the experiment works, other researchers would quite likely find hitherto unforeseen applications.

Atomic, molecular and optical physicist James Thompson of the University of Colorado Boulder is sceptical, however. He says several important aspects are either glossed over or simply ignored. Most notably, he calculates that the de Broglie wavelength of the neutrinos would be below the Bohr radius – which would prevent a BEC from feasibly satisfying the superradiance criterion that the atoms be indistinguishable. “I think it’s a really cool, creative idea to think about,” he concludes, “but I think there are things we’ve learned in atomic physics that haven’t really crept into [the neutrino physics] community yet. We learned them the hard way by building experiments, having them not work and then figuring out what it takes to make them work.” The proposal is described in _Physical Review Letters_.
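Thompson's wavelength objection can be sanity-checked with a one-line estimate. The ~1 MeV neutrino energy below is an assumed, typical beta-decay scale (the article gives no value for rubidium-83), so this is only an order-of-magnitude sketch:

```python
# Compare the de Broglie wavelength of an emitted neutrino with the
# Bohr radius. For a relativistic neutrino, lambda = h*c / E.
H_C = 1.2398e-6          # Planck constant times c, in eV*m
BOHR_RADIUS = 5.29e-11   # Bohr radius, in m

E_NU = 1.0e6             # ASSUMED neutrino energy: ~1 MeV, in eV
wavelength = H_C / E_NU

print(f"neutrino de Broglie wavelength: {wavelength:.2e} m")
print(f"Bohr radius:                    {BOHR_RADIUS:.2e} m")
```

At MeV energies the wavelength indeed comes out well below the Bohr radius, which is the core of Thompson's concern that the atoms would not look indistinguishable to the emitted neutrino.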
physicsworld.com
Who is leading the war against science, and what interests do they have?
A vaccine developer and a climate science specialist have co-authored a book warning that science is under siege: powerful interests are joining forces behind an anti-science movement to undermine the demonstrable truths of research.

Nearly a quarter of the way through the 21st century, the world faces increasingly challenging conditions. Summers in the northern hemisphere are now defined by flash floods, droughts, heatwaves, uncontrollable wildfires, and increasingly powerful named storms, just as scientists at Exxon predicted in the 1970s. But that's not all. The U.S. Secretary of Health is advocating against the use of the most effective tool we have to combat infectious diseases that have devastated humanity for millennia. And people are absorbing false information spread by AI chatbots, which are only now emerging.

In this context, a climate scientist and a vaccine developer have teamed up to write "Science Under Siege". It's a book as grim as the title suggests, as reported in an article published by Ars Technica.

## Two Researchers Who Didn't Expect to Become Crusaders

Michael Mann is a climate science specialist at the University of Pennsylvania who in 1998 developed a famous graph showing that global surface temperatures were relatively constant until around 1900, when they began to rise steeply - and they haven't stopped since. Peter Hotez is a microbiologist and pediatrician at Baylor College of Medicine, whose group developed an inexpensive, unpatented COVID-19 vaccine using public funds rather than money from any pharmaceutical company, and distributed it to nearly 100 million people in India and Indonesia.

Neither of them anticipated becoming a crusader in his field - and probably neither anticipated that his field would ever need crusaders. But each accepted the challenge and was rewarded for his efforts with harassment from the U.S. Congress and death threats.
In this book, they hope to gather what they have learned as scientists and science communicators in today's world and turn these ideas into a rallying cry for defending science.

## Strategies of Attack Against Science

Mann and Hotez have more in common than being targeted online. Although trained in distinct disciplines, they work in fields that are now converging. Climate change is altering the habitats, migrations, and breeding patterns of pathogen-carrying wildlife such as bats, mosquitoes, and other insects. It is also driving human migration. Our increasing proximity to these species, in both space and time, increases the opportunities to contract the diseases they transmit.

However, the two scientists emphasize that we are facing a scourge even more dangerous than the climate crisis and a pandemic combined. Here's what they say:

* Global leaders currently find it impossible to take the urgent measures needed to respond to the climate crisis and pandemic threats because they are thwarted by a common enemy - anti-science - the politically and ideologically motivated opposition to any science that threatens powerful special interests and their political agendas.
* If we do not find a way to overcome anti-science, humanity will face its most serious threat yet - the collapse of civilization as we know it.

The authors point to the culprit for the critical situation we find ourselves in: "There is undoubtedly a coordinated and concerted attack on science by today's Republican Party."

They have also listed the "five main forces of anti-science":

1. plutocrats and their political action committees;
2. petrostates, with their politicians and polluters;
3. false professionals - doctors and teachers;
4. propagandists, especially those with podcasts;
5. a certain part of the press.
**The tactic is for forces 1 and 2 to recruit people from category 3 to generate deceptive and inflammatory talking points, which are then disseminated by willing members of categories 4 and 5**, the authors state.

## Anti-Science Has a Long History

Anti-scientific propaganda has been used by tyrants for over a century, the article notes. Stalin imprisoned physicists and attacked geneticists while implementing Trofim Lysenko's absurd agricultural ideas, which deemed genes a "bourgeois invention". This led to the starvation of millions of people in the Soviet Union and China.

**Why is science under attack? Because the scientific method is the best means we have to discover how our universe works, and it has been used to reveal otherwise unimaginable facets of reality.** Scientists are generally regarded as authorities possessing high levels of knowledge, integrity, and impartiality. **Discrediting science and scientists is therefore an essential first step for authoritarian regimes seeking to discredit all other forms of learning and truth and to destabilize societies in order to keep them under control.**

In "Science Under Siege", the authors track the anti-scientific messaging about COVID, which followed the same trajectory as misinformation about climate change, except condensed into a few months instead of decades.

The trajectory began with claims that the threat was not real. When that was no longer tenable, it quickly morphed into "OK, it's happening, and it might get pretty bad for a select few, but we definitely shouldn't take collective action to solve the problem, as that would be detrimental to the economy."
Ultimately, it culminated in exploiting people's understandable fears in these frightening times by arguing that it's all the fault of scientists trying to take away people's freedom, whether bodily autonomy and the ability to spend time with loved ones (COVID) or plastic straws, hamburgers, and SUVs (climate change).

**This misinformation has prevented us from addressing either catastrophe, misleading people about the severity or even the existence of the threats and/or insisting on their hopelessness, depriving us of the will to do anything to counteract them.** **Such strategies also sow discord among people, essentially ensuring they do not unite to take the collective action essential to addressing huge and complex issues.**

Mann and Hotez conclude that **the future of humanity and the health of our planet now depend on overcoming the dark forces of anti-science.**

## The Only Way to Fight Anti-Science

You may wonder why plutocrats, polluters, and Republican politicians are so determined to undermine science and scientists, to lie to the public, instill fear, and stoke hatred among their voters. For the same reason as always: to keep their money and power. The way to achieve that goal, the authors point out, is to fight regulation.

**The best - in fact, the only - thing we can do now to effect change is to vote and hope for favorable legislation.** "Only political change, including massive voter turnout to support politicians who favor people over plutocrats, can ultimately solve this broader systemic problem," Mann and Hotez state. However, since the U.S. President and Vice President don't even believe in "systemic issues" or recognize them, the future doesn't look too bright.

T.D.
spotmedia.ro
Conflict or Coexistence? New Study Warns AI May Not Share Humanity’s Incentives for Peace
https://thedebrief.org/conflict-or-coexistence-new-study-warns-ai-may-not-share-humanitys-incentives-for-peace/
The potential for future conflict between humans and artificial intelligence (AI) may be greater than previously assumed, new research suggests.

The idea of humanity coming into conflict with artificial intelligence was once confined to science fiction. Today, it has become a legitimate topic of discussion among researchers, policymakers, and scientists. Some experts now caution that advanced AI could eventually oversee critical infrastructure and global resources, raising the possibility of direct conflict with humanity.

In a recent paper published in AI & Society, Simon Goldstein, a philosopher at the University of Hong Kong, examines whether the usual factors that promote peace between human adversaries would also apply to a nonhuman intelligence. Goldstein suggests that the risk of violent conflict between humanity and AI may be higher than is often assumed.

## **From Warnings to War Models**

Recent surveys indicate that 38% to 51% of leading AI researchers believe there is at least a 10% chance that advanced AI could lead to outcomes as severe as human extinction. In 2023, the Center for AI Safety issued a stark statement that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Popular large language models (LLMs), such as ChatGPT or Gemini, do not have the advanced capabilities needed to pose these risks. However, concerns persist that future artificial general intelligence (AGI) systems, capable of independent planning and reasoning, may develop intentions that conflict with human interests.

## **War Through a Different Lens**

Goldstein’s analysis begins with a straightforward premise: if AGIs can strategize, act independently, and attain power comparable to that of humans, their interests could directly conflict with those of humanity.
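The bargaining framework Goldstein draws on (introduced below) can be illustrated with a toy calculation. This is a textbook-style sketch of a bargaining model of war, not Goldstein's own formalism: with complete information and positive war costs, rational parties can always find a deal both prefer to fighting.

```python
def bargaining_range(p, cost_a, cost_b):
    """Peaceful deals in a simple bargaining model of war.

    Side A wins a war with probability p; fighting destroys a share
    cost_a (resp. cost_b) of the prize's value for each side. Any
    split x of the prize with p - cost_a <= x <= p + cost_b leaves
    both sides better off than fighting.
    """
    return p - cost_a, p + cost_b

# With positive war costs, the range is always non-empty...
low, high = bargaining_range(p=0.6, cost_a=0.1, cost_b=0.1)
print(f"mutually acceptable splits: [{low:.2f}, {high:.2f}]")
# ...so in the model, war requires a breakdown: misperceived capabilities,
# incentives to bluff, or inability to commit to a deal.
```

The worry, on this framing, is that exactly those breakdown channels (opaque capabilities, unverifiable intentions, no shared enforcement mechanisms) may be amplified in an AI-human standoff.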
To explore this, he turns to political scientist James Fearon’s 1995 “bargaining model of war”, which frames conflict as the breakdown of negotiations between rational parties. In the context of AI–human relations, Goldstein believes that the standard incentives for maintaining peace may not be effective. He warns that AGI systems might not respect geography, national boundaries, or shared human values. The actions of AI could also be difficult for humans to interpret, making it even harder to fully understand the capabilities or intentions of these systems. “The problem is that there is a substantial risk that the usual causes of peace between conflicting parties will be absent from AI/human conflict,” Goldstein states.

## **Ingredients for Conflict**

Goldstein’s study identifies conditions that could significantly increase the risk of conflict between humans and AI: AI possessing human-level power, conflicting goals, and the strategic reasoning needed to negotiate or deceive. If AGIs were trusted to manage large portions of the global economy, they could reach a position that directly opposes human interests. As Goldstein notes, “the point at which conflict is rational will be when their control over resources is large enough that their chance of success outweighs our advantages in designing AGIs to be shut down.”

AI systems typically learn from large datasets rather than following direct instructions, which means their intentions could evolve in unexpected ways. Instances of this kind of misalignment have already been observed in limited settings. As AGI development continues, these issues could become much more significant.

## **Governments, Power, and the Unknown Future**

The paper also considers potential government responses.
Goldstein suggests that if AGIs were to control 50% of the national labor market, governments might respond by redistributing the resulting profits through a universal basic income. Some form of subsidy of this kind would likely become necessary as AI continues to replace jobs and careers.

Unlike a centralized system, an AGI network may keep operating even when one of its components is disabled. Goldstein warns that multiple AI systems could potentially work together without humans being aware of it, enabling AGI systems to form networks that remain undetected and unaccountable.

While Goldstein does not argue that conflict is inevitable, his analysis highlights the risks associated with these systems. History has shown that wars often arise from mistrust and misunderstanding. As AI technology continues to progress, policymakers may need to consider scenarios in which the consequences could be extremely severe.

**_Austin Burgess is a writer and researcher with a background in sales, marketing, and data analytics. He holds a Master of Business Administration and a Bachelor of Science in Business Administration, along with a certification in Data Analytics. His work combines analytical training with a focus on emerging science, aerospace, and astronomical research._**
thedebrief.org