Rethinking nuclear

As someone who has spent decades studying the evolution of nuclear energy, I’ve seen its emergence as a promising transformative technology, its stagnation as a consequence of dramatic accidents, and its current re-emergence as a potential solution to the challenges of global warming.

While the issues of global warming and sustainable energy strategies are among the most consequential in today’s society, it is difficult to find objective sources that elucidate these topics. Discourse on this subject is often positioned at one or another polemical extreme. Further complicating the flow of objective information is the involvement of vested interests, as seen in the lobbying efforts of the coal, gas, and oil industries. My goal has been to present nuclear energy’s potential role in a sustainable energy future—alongside renewables like wind and solar—without ideological baggage.

An additional hurdle in weighing the pros and cons of nuclear energy is the psychological context in which fear of nuclear weapons and of radiation impedes rational analysis. The deep antipathy to nuclear phenomena is illustrated by what might be called the “Godzilla Complex,” which developed after the crew of the Japanese fishing boat Lucky Dragon 5 was exposed to heavy radiation from a nuclear weapons test in 1954. Godzilla was conceived as a monster that emerged from the depths of the ocean as a result of radiation exposure, and it has become an enduring figure, portrayed in nearly forty films in the United States and Japan and in numerous video games, novels, comic books, and television shows.

It is not surprising that fear of nuclear reactor radiation has been widespread. Although there are no documented deaths due to nuclear reactor waste (in contrast to deaths from reactor accidents), it is widely assumed that nuclear reactor waste is quite dangerous. Meanwhile, the fact that premature deaths attributable to the fossil-fuel component of air pollution worldwide exceed 5 million annually generates little concern. Similarly, the total waste produced from nuclear energy could be stored on one acre in a building 50 feet high, whereas every tonne of coal that is mined leaves behind 880 pounds of waste material, much of it containing toxic components. Yet public concern over nuclear waste clearly overshadows concern over coal waste, despite these contrasting impacts.

After an in-depth review of the most significant nuclear accidents and recognition of the deep psychological antipathy to nuclear energy, I’ve become increasingly interested in the emergence of an international effort to develop safe, cost-effective nuclear energy known as the Generation IV Nuclear Initiative. This began in 2000 with nine participating countries and has since grown substantially.

In the early years, the Generation IV Nuclear Initiative took a systematic approach to identifying reactor designs that could meet demanding criteria—including the key characteristic of being “fail-safe”. Rather than depending upon add-on safety apparatus, “fail-safe” designs rely on the laws of nature—such as gravity and fluid flow—to provide cooling in the event that the reactor overheats. Another high-priority design feature is modular construction, allowing multiple units to be built in a timely and economical fashion.

After reviewing dozens of options, the Generation IV Nuclear Initiative settled on six designs that it found to be the most attainable and desirable. Since those initial efforts, countries that have embraced the goals of the initiative have been pursuing additional designs, including reactors that range in size from quite small to about one third the size of the typical one-gigawatt reactor.

In my book, I’ve focused my attention on four promising designs. These four designs eschew the vulnerabilities of using water as a coolant that proved so devastating at Chernobyl and Fukushima. The explosion at Chernobyl was due to steam, and the three explosions at Fukushima were due to hydrogen gas produced by oxidation of fuel rods by overheated water. These were not nuclear explosions. Instead, the four designs I’ve highlighted use liquid sodium, liquid lead, molten salts, and helium gas as coolants. Sodium-cooled and lead-cooled reactors are operating successfully in Russia, while China incorporated a gas-cooled reactor into its grid in 2023. In the United States, Kairos Power is constructing a molten-salt-cooled reactor, while TerraPower (founded by Bill Gates) has broken ground on a sodium-cooled reactor in Kemmerer, Wyoming. These are intended to be models for replacing coal-fired power plants with Generation IV nuclear plants. Multiple implementations of this approach are planned through the early 2030s.

Given the worldwide interest in Generation IV reactor development and the many initiatives being pursued, it is likely that at least some of these projects will come to fruition in the near future. While success is not guaranteed, there is clearly a need for the general public and students to be kept informed of progress leading up to 2030 and beyond.

To help bridge the knowledge gap in this rapidly evolving domain, I’ve launched a newsletter on Substack called “Nuclear Tomorrow.” It’s written for anyone concerned with the intersection of public policy, energy generation, and its impact on global warming. I hope it serves as a resource for those seeking clarity in a complex and consequential field.

Feature image: nuclear power plant via Pixabay.

Quantum information theorists use Einstein’s Principle to solve “Einstein’s quantum riddle”

Albert Einstein, Boris Podolsky, and Nathan Rosen introduced the mystery of quantum entanglement (entanglement) in 1935 and it has been called “Einstein’s quantum riddle.” Many physicists and philosophers in foundations of quantum mechanics (foundations) have proposed solutions to Einstein’s quantum riddle, but no solution has received consensus support, which has led some to call entanglement “the greatest mystery in physics.” There is good reason for this 90-year morass, but there is also good reason to believe that a recent solution using quantum information theory will end it in ironic fashion.

Simply put, entanglement is one way that quantum particles produce correlated measurement outcomes. For example, when you measure an electron’s spin in any direction of space you get one of two outcomes, i.e. spin “up” or spin “down” relative to that direction. When two electrons are entangled with respect to spin and you measure those spins in the same direction, you get correlated outcomes, e.g. if one electron has spin “up” in that direction, then the other electron will have spin “down” in that direction.

Einstein believed this was simply the result of the electrons having opposite spins when they were emitted from the same source, so this was not mysterious. For example, if I put two gloves from the same pair into two boxes and have two different people open the boxes to “measure” their handedness, one person will find a left-hand glove and the other person will find a right-hand glove. No mystery there. The alternative (which some in foundations believe) is that the electron spin is not determined until it is measured. That would be like saying each glove isn’t a right-hand or left-hand glove until its box is opened. No one believes that about gloves!

So, Einstein argued, if you believe that about electron spin, then explain how each electron of the entangled pair produces a spin outcome at measurement such that the electrons always give opposite results in the same direction. What if those electrons were millions of miles apart? How would they signal each other instantly over such a great distance to coordinate their outcomes? Einstein derided that as “spooky actions at a distance” and instead believed the spin of an electron is an objective fact like the handedness of a glove. No one knew how to test Einstein’s belief until nine years after his death, when John Bell showed how it could be done.

In 1964, Bell published a paper showing that if you measure the entangled electron spins in the same direction, you can’t tell whether Einstein was right or “spooky actions” was. But if you measure the spins in certain different directions, then quantum mechanics predicts correlation rates that differ from Einstein’s prediction. In 1972, John Clauser (with Stuart Freedman) carried out Bell’s proposed experiment and discovered that quantum mechanics was right. Apparently, “spooky actions at a distance” is a fact about reality. Later, Alain Aspect and Anton Zeilinger produced improved versions of the experiment and, in 2022, the three shared the Nobel Prize in Physics for their work.
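To make the numbers concrete, here is a minimal sketch in Python of the comparison Bell proposed, assuming the textbook quantum prediction for a spin-singlet pair, E(a, b) = −cos(a − b). Any local, glove-like account caps the combined CHSH quantity at 2, while quantum mechanics predicts 2√2:

```python
import math

def E(theta_a, theta_b):
    # Quantum prediction for spin-singlet correlations measured along
    # directions theta_a and theta_b (radians): E = -cos(theta_a - theta_b).
    # Same direction (theta_a == theta_b) gives E = -1: perfectly opposite
    # outcomes, the glove-like case described above.
    return -math.cos(theta_a - theta_b)

# Standard CHSH measurement settings that maximize the quantum violation.
a, a_alt = 0.0, math.pi / 2
b, b_alt = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b_alt) + E(a_alt, b) + E(a_alt, b_alt)
print(abs(S))  # ~2.828 = 2*sqrt(2), exceeding the local bound of 2
```

The correlation rates measured by Clauser, Aspect, and Zeilinger agree with the quantum value rather than the local bound.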

Given these facts, you might think that the issue is settled—quantum mechanics is simply telling us that reality is “nonlocal” (contains “spooky actions at a distance”), so what’s the problem? The problem is that if instantaneous signaling (nonlocality) exists, then you can show that reality harbors a preferred reference frame. This is at odds with the relativity principle, i.e. the laws of physics are the same in all inertial reference frames (no preferred reference frame), which lies at the heart of Einstein’s theory of special relativity. In 1600, Galileo used the relativity principle to argue against the reigning belief that Earth is the center of the universe, thereby occupying a preferred reference frame, and, in 1687, Newton used Galileo’s argument to produce his laws of motion.

Physicists loathe the idea of abandoning the relativity principle and returning to a view of reality like that of geocentrism. So in order to save locality, some in foundations have proposed violations of statistical independence instead, e.g. causes from the future with effects in the present (retrocausality) or causal mechanisms that control how experimentalists choose measurement settings (superdeterminism). But most physicists believe that giving up statistical independence means giving up empirical science as we know it; consequently, there is no consensus solution to Einstein’s quantum riddle. Do we simply have to accept that reality is nonlocal or retrocausal or superdeterministic? Contrary to what appears to be the case, the answer is “no,” and the alternative is quite ironic.

The solutions that violate locality or statistical independence assume that reality must be understood via causal mechanisms (“constructive efforts,” per Einstein). This is the exact same bias that led physicists to propose the preferred reference frame of the luminiferous ether in the late nineteenth century to explain the shocking fact that everyone measures the same value for the speed of light c, regardless of their different motions relative to the source. Trying to explain that experimental fact constructively led to a morass much like the one in foundations today, and here is where the irony begins: Einstein abandoned his “constructive efforts” and solved that mystery in “principle” fashion. That is, instead of abandoning the relativity principle to explain the observer-independence of c constructively with the ether, he doubled down on the relativity principle. He said the observer-independence of c must be true because of the relativity principle! The argument is simple: Maxwell’s equations predict the value of c, so the relativity principle says c must have the same value in all inertial reference frames, including those in uniform relative motion. He then used the observer-independence of c to derive his theory of special relativity. Today, we still have no constructive alternative to this principle solution to the mystery of the observer-independence of c.
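The step this argument leans on can be written out explicitly: in SI units, Maxwell’s equations fix the speed of electromagnetic waves entirely in terms of two vacuum constants, with no reference to the motion of any observer:

```latex
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}
  = \frac{1}{\sqrt{(4\pi \times 10^{-7})(8.854 \times 10^{-12})}}
  \approx 3.00 \times 10^{8}\ \mathrm{m/s}
```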

The next step in the ironic solution occurred when quantum information theorists abandoned “constructive efforts” in the exact same way to produce a principle account of quantum mechanics. In the quantum reconstruction program, quantum information theorists showed how quantum mechanics can be derived from an empirical fact called Information Invariance and Continuity, just as Einstein showed that special relativity can be derived from the empirical fact of the observer-independence of c. The ironic solution was completed when we showed how Information Invariance and Continuity entails the observer-independence of h (another constant of nature, called Planck’s constant), regardless of the measurement direction relative to the source. Since h is a constant of nature per Planck’s radiation law, the relativity principle says it must be the same in all inertial reference frames, including those related by rotations in space. So, quantum information theorists have solved Einstein’s quantum riddle without invoking nonlocality, retrocausality, or superdeterminism by using Einstein’s beloved relativity principle to justify the observer-independence of h, just as Einstein did for the observer-independence of c.
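For readers who want to see where h enters as a constant of nature, Planck’s radiation law for the spectral radiance of a blackbody at temperature T is shown below; h plays the same frame-independent role here that c plays in Maxwell’s equations:

```latex
B(\nu, T) = \frac{2 h \nu^{3}}{c^{2}} \cdot \frac{1}{e^{h\nu / k_{B} T} - 1}
```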

Feature image credit: Jian Fan on iStock.

Brighter than a trillion suns: an intense X-rated drama

You may be unaware of the celestial wonder known as OJ 287 but, as you will see, it is one of the most outlandish objects in the cosmos. Astronomers have known of periodic eruptions from OJ 287 since 1888 and in recent decades a mind-boggling explanation has emerged. It seems that the outbursts arise deep in the heart of a distant galaxy where two supermassive black holes are locked in a deadly embrace.

What is a black hole?

A black hole forms when a huge quantity of matter collapses under its own gravity to form an object whose gravitational attraction is so intense that nothing can escape, not even light. This fate awaits the most massive stars at the end of their lives.

Such stellar mass black holes may be a whopping five, ten, or even a hundred times the mass of the Sun. The first stellar mass black hole to be identified is known as Cygnus X-1. A black hole’s size is characterized by its event horizon. This is the sphere of no return: once inside all roads lead inexorably inwards. The radius of the event horizon of a 10 solar mass black hole is just 30 kilometres.
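These horizon figures follow from the Schwarzschild radius formula, r_s = 2GM/c². A quick Python sketch with rounded physical constants reproduces both the 30-kilometre stellar-mass case above and the 50-billion-kilometre figure quoted for OJ 287 later in this article:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # mass of the Sun, kg

def schwarzschild_radius_km(mass_in_solar_masses):
    # Event-horizon radius r_s = 2 G M / c^2, converted from metres to km.
    mass_kg = mass_in_solar_masses * M_SUN
    return 2 * G * mass_kg / C**2 / 1000

print(schwarzschild_radius_km(10))     # ~29.5 km: the 10-solar-mass case above
print(schwarzschild_radius_km(18e9))   # ~5.3e10 km: OJ 287's primary black hole
```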

[Left: The red box in this image from the Digitized Sky Survey encloses the Cygnus X-1 system that contains a blue supergiant star and a black hole of around 15 solar masses.
Right: Artist’s visualization of the Cygnus X-1 system. (Image Credit: Cygnus X-1: NASA’s Chandra Adds to Black Hole Birth Announcement. Chandra X-ray Observatory, NASA.)]

Astronomers believe that at the centre of every galaxy there lurks a black hole on another scale entirely. These are the supermassive black holes whose mass may be millions or even billions of times that of the Sun. We do not, as yet, fully understand how they grow to be so enormous in the time available since the Big Bang.

Brighter than a trillion stars

Over time galaxies collide and merge, and this may bring their central supermassive black holes into close proximity. Indeed, OJ 287 is the most well-studied example of such a system where two colossal black holes dance around each other performing a celestial tango de la muerte. Astronomers estimate that the primary black hole is a staggering 18 billion solar masses, while its much smaller companion is a mere 150 million solar masses. This gives the primary’s event horizon a radius of over 50 billion kilometres. To put this into context, the distance between the Sun and the outermost planet Neptune is 4.5 billion kilometres. So, the primary black hole is a vast bottomless pit that would dwarf the entire solar system.

OJ 287 is the most well-studied example of a system where two colossal black holes perform a celestial tango de la muerte.

Surrounding this chasm is the black hole’s accretion disc—an incredibly hot swirling disc of plasma with a temperature of billions of degrees—so hot that it emits X-rays and gamma rays. As the secondary dances around its gigantic partner, it periodically crashes through this seething whirlpool of fire releasing a blast of radiation that is picked up by telescopes here on Earth, and this is how we know of this amazing system.

[Lankeswar Dey et al, ‘Authenticating the Presence of a Relativistic Massive Black Hole Binary in OJ 287 Using Its General Relativity Centenary Flare: Improved Orbital Parameters’, The Astrophysical Journal, Volume 866, issue 1, Page 3, Figure 2, October 2018, https://doi.org/10.3847/1538-4357/aadd95. © AAS. Reproduced with permission.]

These two-week-long flares are brighter than the combined light of an entire giant galaxy of a trillion stars. The radiation blast is produced mainly by hot plasma from the accretion disc spiralling into the secondary black hole. The OJ 287 system is 5 billion light years distant, so the light in these flares has been travelling our way since before the Earth formed. It is only because the flares are so bright that we can see them from such an incredible distance.

The clash of the cosmic titans

There are two flares every 12 years, the most recent in February 2022, as the secondary black hole plunges and re-emerges through the primary’s accretion disc. Like a cosmic duel in Lucifer’s inner sanctum, the two writhing supermassive black holes twist, twirl, and cavort around each other. Researchers led by Finnish astrophysicist Mauri Valtonen of Turku University and his colleague Achamveedu Gopakumar from the Tata Institute of Fundamental Research in Mumbai, India have used the precise timing of the flares to build a detailed picture of the orbit of the black holes based on our best theory of gravity—Einstein’s theory of general relativity. This enables them to predict when future flares will occur. The extreme nature of OJ 287 challenges our understanding of the fundamental laws of nature, offering tests for general relativity that have not been possible before. A wide range of astronomical instruments will be ready and waiting when the next blast is due to arrive. In the years ahead, we are sure to learn much more about this amazing system that illustrates just how weird the universe can be.

References: Mauri J Valtonen et al, ‘Refining the OJ 287 2022 impact flare arrival epoch’, Monthly Notices of the Royal Astronomical Society, Volume 521, Issue 4, June 2023, Pages 6143–6155, https://doi.org/10.1093/mnras/stad922

Feature image: Black Hole and a Disk of Glowing Plasma by Daniel Megias via iStock.

Tuning in to the cosmic symphony: restarting LIGO

In 2015 history was made when LIGO (Laser Interferometer Gravitational-Wave Observatory) detected the first ever gravitational wave signal. This was an incredible technological achievement and the beginning of a completely new way of investigating the cosmos.

The collision of two massive objects shakes the fabric of space, making it ring like a bell and producing ripples that travel unhindered through space. For several decades astronomers and physicists worked on the construction of LIGO with the goal of detecting these ripples. LIGO is the most sensitive instrument ever devised. It consists of two laboratories, one located in Hanford, Washington, and the other in Livingston, Louisiana. Each houses an L-shaped interferometer whose arms extend for 4 kilometres (2.5 miles). Within these arms, a powerful laser beam travels back and forth, bouncing between mirrors before recombining to form an interference pattern.

As a gravitational wave passes by, the fabric of space is pulled and pushed; this alters the distance between the mirrors, and these tiny disturbances change the interference pattern. LIGO’s sensitivity is truly astonishing: it can detect changes in distance of around one billionth of the size of an atom. Having two observatories is important; like listening in stereo, it helps to determine the direction from which the waves arrive. It also ensures that a signal came from deep space and not from a local disturbance.
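As a rough check on that sensitivity claim, using round illustrative numbers rather than LIGO’s published noise curves: an atom is about 10⁻¹⁰ m across, so a billionth of that is about 10⁻¹⁹ m, and spread over a 4-kilometre arm this corresponds to a dimensionless strain of a few parts in 10²³:

```python
atom_size_m = 1e-10              # typical atomic diameter (illustrative)
delta_L_m = atom_size_m * 1e-9   # "one billionth the size of an atom"
arm_length_m = 4000.0            # each LIGO arm is 4 km long

strain = delta_L_m / arm_length_m  # dimensionless strain h = dL / L
print(f"{strain:.1e}")             # ~2.5e-23
```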

“LIGO has provided the most direct evidence that we have for black holes and their properties.”

By comparing the data captured by LIGO to computer models, physicists can determine how each gravitational wave signal was created. It is possible to deduce the masses of the colliding bodies, the rate at which they were spinning, the energy released in the collision and how far away they are. LIGO’s first signal arrived from the collision and merger of two black holes located around 1.3 billion light years away. In the subsequent five years, LIGO received close to one hundred signals. Almost all of them came from collisions between pairs of black holes. The most epic was the collision and merger of black holes with 85 and 66 times the mass of the Sun that produced a black hole of 142 solar masses. During this collision, a mind-boggling nine solar masses were converted into pure energy in the form of gravitational waves.
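The energy implied by that mass loss follows directly from E = mc²; a one-line check with rounded constants:

```python
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # mass of the Sun, kg

energy_joules = 9 * M_SUN * C**2   # nine solar masses radiated as gravitational waves
print(f"{energy_joules:.1e}")      # ~1.6e48 J
```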

In 2017, the Italian gravitational wave observatory Virgo also achieved the exquisite sensitivity necessary for the detection of gravitational waves and joined the LIGO observatories in their quest for distant cosmic dramas. Later that year, on 17 August, one of the most spectacular duly arrived. This event, named GW170817, was the first detected signal to come from the merger of two neutron stars rather than two black holes. Neutron stars are bizarre objects formed from the collapsed cores of stars that have run out of nuclear fuel. They are just 20 kilometres in diameter but contain at least one and a half times the mass of the Sun. In many ways they are like gigantic atomic nuclei. This was the first time and, so far, the only time that the source of a gravitational wave signal has been located with optical instruments, heralding the dawn of multi-messenger astronomy. The combination of optical and gravitational data has greatly advanced our understanding of what happens when two neutron stars collide. It is like being able to both see the lightning and hear the thunderclap. These observations lent support to the idea that many of the heavier chemical elements, such as gold, are created and dispersed in neutron star collisions.

“LIGO offers wonderful new tests of our best theory of gravity, Einstein’s theory of general relativity.”

In 2020, LIGO’s operations were suspended to allow for a major upgrade of the system. Now, after a three-year hiatus, LIGO is back up and running. On 24 May, LIGO started a new observing run with refined instruments. With its enhanced sensitivity, it is expected to detect a gravitational wave signal every two to three days. LIGO is the lynchpin of the LIGO-Virgo-KAGRA collaboration—a partnership with the world’s other two gravitational wave observatories: Virgo in Italy and KAGRA in Japan. The construction of a third LIGO detector in India has also recently been approved. This expansion of the global network of gravitational wave observatories will help to pinpoint the location of gravitational wave sources so that they can also be studied optically.

LIGO has provided the most direct evidence that we have for black holes and their properties, and offers wonderful new tests of our best theory of gravity, Einstein’s theory of general relativity. By observing and studying the mergers of black holes and neutron stars, scientists are gaining new insights into fundamental physics, the nature of gravity, and the evolution of the universe itself. The restart of LIGO and the global gravitational wave research network launches a new phase of deep space exploration. We can look forward to more incredible discoveries in the near future.

Supporting researchers at every career stage

Academia is a complex ecosystem with researchers at various stages of their careers striving to make meaningful contributions to their fields. In support of furthering knowledge, academic journals work with researchers to disseminate findings, engage with the scholarly community, and share academic advances.

Oxford University Press (OUP) publishes more than 500 high-quality, trusted journals, two-thirds of which are published in partnership with societies, organizations, or institutions. The remaining third is a list of journals owned and operated by the Press. Fundamental to this list of owned journals is our mission to create world-class academic and educational resources and make them available as widely as possible, including by expanding our fully open access options for authors. As a not-for-profit university press, we reinvest our financial surplus in the educational and scholarly objectives of the University and the Press, thereby fostering the continued growth of open access initiatives and supporting the scholarly community.

How do we support researchers in different career stages through our journals?

Early Career Researchers: nurturing talent

For early career researchers (ECRs), having their work published in a reputable journal is a crucial step in establishing their academic reputation. OUP journals provide several avenues of support including:

  • Mentoring and guidance: Some journals provide mentorship programs or editorial support to help young researchers navigate the publishing process.

Featuring Oxford Open Immunology and Oxford Open Energy.

  • Open access initiatives: 120 of the journals we publish are fully open access, and the vast majority of the remaining journals offer authors open access options, making research freely available for a global audience to read, share, cite, and reuse. This helps early career researchers, and indeed researchers at every career stage, gain visibility for their work and reach a wider readership.

Featuring our Oxford Open series.

Mid-career researchers: advancing expertise

As researchers progress in their careers, they require journals that can help them deepen their expertise and broaden their impact. OUP journals provide several avenues of support including:

  • Cutting-edge research: OUP journals prioritise publishing high-impact, innovative research, allowing mid-career researchers to stay updated with the latest advancements in their fields.

Featuring Exposome.

  • Editorial and reviewer roles: Many researchers at this stage are invited to serve as peer reviewers or editorial board members to further contribute their knowledge to the academic community and enhance their own expertise.

Featuring STEM CELLS Translational Medicine.

Established researchers: global recognition

For established researchers, maintaining a high level of visibility and recognition in the academic world is paramount. OUP journals provide several avenues of support including:

  • Prestige and impact in the field: OUP journals are known for their prestige and rankings in their relevant fields. Publishing in our journals can bolster an established researcher’s reputation.

Featuring Nucleic Acids Research.

  • Leadership opportunities: As a partner to academic research, all of OUP’s journals are edited by members of the academic community, longstanding experts in their own fields. Our journals therefore offer established researchers the opportunity to take on leadership roles within journal editorial boards as associate editors or editors-in-chief, helping to shape the direction of the journal and their fields.

Featuring Oxford Open Neuroscience.

OUP’s owned journals are more than just platforms for publishing research; they are invaluable partners in the academic journey of researchers at every career stage. From nurturing early career talent to supporting mid-career researchers in advancing their expertise and providing global recognition for established scholars, our journals contribute to the growth and success of the academic community. As the world of research continues to evolve, our journals will remain dedicated to supporting researchers around the world, ensuring knowledge is disseminated, shared, and celebrated.

Featured image by Pexels on Pixabay (public domain)
