It’s possible that I shall make an ass of myself. But in that case one can always get out of it with a little dialectic. I have, of course, so worded my proposition as to be right either way (K.Marx, Letter to F.Engels on the Indian Mutiny)
Thursday, April 25, 2019
Extracting something from nothing: A bright glow from empty space
Particles travelling through empty space can emit bright flashes of gamma rays by interacting with the quantum vacuum, according to a new study by researchers at the University of Strathclyde. It has long been known that charged particles, such as electrons and protons, produce the electromagnetic equivalent of a sonic boom when their speeds exceed that of photons in the surrounding medium. This effect, known as Cherenkov emission, is responsible for the characteristic blue glow from water in a nuclear reactor, and is used to detect particles at the CERN Large Hadron Collider.
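The classical Cherenkov condition described above is simple arithmetic: emission occurs when a particle's speed exceeds the phase velocity of light in the medium, v > c/n, and the light comes out at an angle given by cos θ = 1/(nβ), where β = v/c. A minimal illustrative sketch (the refractive index of water, n ≈ 1.33, responsible for the reactor-pool blue glow, is the only input; the function names are ours, not from the study):

```python
import math

def cherenkov_threshold_beta(n):
    """Minimum particle speed, as a fraction of c, for Cherenkov emission: v > c/n."""
    return 1.0 / n

def cherenkov_angle_deg(beta, n):
    """Emission angle from cos(theta) = 1/(n*beta); None if below threshold."""
    cos_theta = 1.0 / (n * beta)
    if cos_theta > 1.0:
        return None  # particle too slow: no emission
    return math.degrees(math.acos(cos_theta))

n_water = 1.33
print(cherenkov_threshold_beta(n_water))   # ~0.752: emission needs >75% of light speed
print(cherenkov_angle_deg(0.99, n_water))  # ~40.6 degrees for a fast electron
```

The vacuum case discussed in the article works the same way, except that n exceeds 1 only marginally, and only in extremely strong fields, which is why the effect needs the highest-energy particles.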
According to Einstein, nothing can travel faster than light in vacuum, so it is usually assumed that Cherenkov emission cannot occur there. But according to quantum theory, the vacuum itself is packed full of "virtual particles", which flicker momentarily in and out of existence.
These ghostly particles are usually not observable but, in the presence of extremely strong electric and magnetic fields, they can turn the vacuum into an optical medium in which the speed of light is slowed, so that high-velocity charged particles can emit Cherenkov gamma rays: something entirely unexpected in a vacuum.
A group of physics researchers at Strathclyde has found that in extreme conditions, such as those found at the focus of the world's most powerful lasers and in the huge magnetic fields around neutron stars, this 'polarised' vacuum can slow down gamma rays just enough for Cherenkov emission to occur. This means that the highest-energy cosmic rays passing through the magnetic fields surrounding pulsars should predominantly emit Cherenkov radiation, vastly in excess of other types such as synchrotron radiation.
The research has been published as an Editors' Suggestion in Physical Review Letters. It formed part of the EPSRC-funded Lab in a Bubble project, led by Professor Dino Jaroszynski, to investigate a suite of fundamental phenomena occurring in laser-plasma interactions, with applications in industry, security and medicine.
Professor Jaroszynski said: "The Lab in a Bubble project is providing a unique opportunity to use high power lasers to advance both fundamental knowledge and advanced technology for the benefit of society.
"This is a very exciting new prediction because it could provide answers to basic questions, such as what is the origin of the gamma-ray glow at the centre of galaxies? It also provides a new way of testing some of the most fundamental theories of science by pushing them to their limits.
"What is more, it will make a major contribution to the new High Field frontier of physics, made possible by the remarkable advances in laser technology recognised by the award of the 2018 Nobel Prize in Physics."
Dr. Adam Noble, who conceived the idea and led the theoretical research effort, said: "We take it for granted that nothing can come out of empty space consisting of pure vacuum. But this is not quite true; modern quantum physics says otherwise, and there are some intriguing surprises."
"There is a huge international effort to push forward the limits of laser technology. While this is driven by the many practical applications of high power lasers, its success will depend on understanding all the fundamental processes involved in laser-matter interactions. These results reveal a new aspect of these processes."
Alexander Macleod, who also worked on the project as part of his Ph.D. project, said: "Quantum electrodynamics is one of the best tested theories in physics, with extraordinary agreement between theoretical predictions and experimental data. But this agreement has only been verified in the weak-field regime. Vacuum Cherenkov radiation offers a new way to test whether it survives in the strong-field limit."
Lab in a Bubble is a £4.5million Strathclyde-led, EPSRC-funded project for the production of bubble-sized 'laboratories' which could boost cancer treatment, medical imaging and industrial processes, in addition to enabling the investigation of fundamental physics problems.
Researchers in the international project aim to use high-powered lasers to conduct experiments in plasma bubbles so small that their diameters are equivalent to one tenth of the cross-section of a human hair. Plasma forms 99.999% of visible matter in the universe.
Mystery of the universe's expansion rate widens with new Hubble data
Astronomers using NASA's Hubble Space Telescope say they have crossed an important threshold in revealing a discrepancy between the two key techniques for measuring the universe's expansion rate. The recent study strengthens the case that new theories may be needed to explain the forces that have shaped the cosmos.
A brief recap: The universe is getting bigger every second. The space between galaxies is stretching, like dough rising in the oven. But how fast is the universe expanding? As Hubble and other telescopes seek to answer this question, they have run into an intriguing difference between what scientists predict and what they observe.
Hubble measurements suggest a faster expansion rate in the modern universe than expected, based on how the universe appeared more than 13 billion years ago. These measurements of the early universe come from the European Space Agency's Planck satellite. This discrepancy has been identified in scientific papers over the last several years, but it has been unclear whether differences in measurement techniques are to blame, or whether the difference could result from unlucky measurements.
The latest Hubble data lower the possibility that the discrepancy is only a fluke to 1 in 100,000. This is a significant gain from an earlier estimate, less than a year ago, of a chance of 1 in 3,000.
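Physicists usually quote such fluke probabilities as Gaussian "sigma" levels. Assuming the standard two-sided normal convention (an assumption on our part; the article itself gives only the odds), a 1-in-100,000 chance corresponds to roughly 4.4 sigma, and 1 in 3,000 to about 3.6 sigma. A stdlib-only check, inverting the complementary error function by bisection:

```python
import math

def two_sided_p(z):
    """Two-sided Gaussian tail probability for a deviation of z sigma."""
    return math.erfc(z / math.sqrt(2.0))

def sigma_for_p(p, lo=0.0, hi=10.0):
    """Invert two_sided_p by bisection (it decreases monotonically in z)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if two_sided_p(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(sigma_for_p(1e-5), 2))      # ~4.42 sigma for 1 in 100,000
print(round(sigma_for_p(1 / 3000), 2))  # ~3.59 sigma for 1 in 3,000
```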
These most precise Hubble measurements to date bolster the idea that new physics may be needed to explain the mismatch.
"The Hubble tension between the early and late universe may be the most exciting development in cosmology in decades," said lead researcher and Nobel laureate Adam Riess of the Space Telescope Science Institute (STScI) and Johns Hopkins University, in Baltimore, Maryland. "This mismatch has been growing and has now reached a point that is really impossible to dismiss as a fluke. This disparity could not plausibly occur just by chance."
Tightening the bolts on the 'cosmic distance ladder'
Scientists use a "cosmic distance ladder" to determine how far away things are in the universe. This method depends on making accurate measurements of distances to nearby galaxies and then moving to galaxies farther and farther away, using their stars as milepost markers. Astronomers use these values, along with other measurements of the galaxies' light that reddens as it passes through a stretching universe, to calculate how fast the cosmos expands with time, a value known as the Hubble constant. Riess and his SH0ES (Supernovae H0 for the Equation of State) team have been on a quest since 2005 to refine those distance measurements with Hubble and fine-tune the Hubble constant.
In this new study, astronomers used Hubble to observe 70 pulsating stars called Cepheid variables in the Large Magellanic Cloud. The observations helped the astronomers "rebuild" the distance ladder by improving the comparison between those Cepheids and their more distant cousins in the galactic hosts of supernovas. Riess's team reduced the uncertainty in their Hubble constant value to 1.9% from an earlier estimate of 2.2%.
As the team's measurements have become more precise, their calculation of the Hubble constant has remained at odds with the expected value derived from observations of the early universe's expansion. Those measurements were made by Planck, which maps the cosmic microwave background, a relic afterglow from 380,000 years after the big bang.
The measurements have been thoroughly vetted, so astronomers cannot currently dismiss the gap between the two results as due to an error in any single measurement or method. Both values have been tested multiple ways.
"This is not just two experiments disagreeing," Riess explained. "We are measuring something fundamentally different. One is a measurement of how fast the universe is expanding today, as we see it. The other is a prediction based on the physics of the early universe and on measurements of how fast it ought to be expanding. If these values don't agree, there becomes a very strong likelihood that we're missing something in the cosmological model that connects the two eras."
How the new study was done
Astronomers have been using Cepheid variables as cosmic yardsticks to gauge nearby intergalactic distances for more than a century. But observing large numbers of these stars, one at a time, was so time-consuming as to be nearly unachievable. So the team employed a clever new method, called DASH (Drift And Shift), using Hubble as a "point-and-shoot" camera to snap quick images of the extremely bright pulsating stars, eliminating the time-consuming need for precise pointing.
"When Hubble uses precise pointing by locking onto guide stars, it can only observe one Cepheid per each 90-minute Hubble orbit around Earth. So, it would be very costly for the telescope to observe each Cepheid," explained team member Stefano Casertano, also of STScI and Johns Hopkins. "Instead, we searched for groups of Cepheids close enough to each other that we could move between them without recalibrating the telescope pointing. These Cepheids are so bright, we only need to observe them for two seconds. This technique is allowing us to observe a dozen Cepheids for the duration of one orbit. So, we stay on gyroscope control and keep 'DASHing' around very fast."
The Hubble astronomers then combined their result with another set of observations, made by the Araucaria Project, a collaboration between astronomers from institutions in Chile, the U.S., and Europe. This group made distance measurements to the Large Magellanic Cloud by observing the dimming of light as one star passes in front of its partner in eclipsing binary-star systems.
The combined measurements helped the SH0ES Team refine the Cepheids' true brightness. With this more accurate result, the team could then "tighten the bolts" of the rest of the distance ladder that extends deeper into space.
The new estimate of the Hubble constant is 74 kilometers (46 miles) per second per megaparsec. This means that for every 3.3 million light-years farther away a galaxy is from us, it appears to be moving 74 kilometers (46 miles) per second faster, as a result of the expansion of the universe. The number indicates that the universe is expanding at a 9% faster rate than the prediction of 67 kilometers (41.6 miles) per second per megaparsec, which comes from Planck's observations of the early universe, coupled with our present understanding of the universe.
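The quoted value is easy to turn into other familiar quantities. In a sketch assuming only standard unit conversions (1 megaparsec ≈ 3.086 × 10^19 km, roughly the 3.3 million light-years mentioned above), Hubble's law v = H0 × d gives the recession speed at any distance, and the inverse of H0 sets a naive "Hubble time" of about 13 billion years:

```python
MPC_IN_KM = 3.086e19        # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7  # seconds in one year

def recession_speed_km_s(h0, distance_mpc):
    """Hubble's law: v = H0 * d, with H0 in km/s/Mpc and d in Mpc."""
    return h0 * distance_mpc

def hubble_time_gyr(h0):
    """Naive expansion age 1/H0, converted to billions of years."""
    h0_per_s = h0 / MPC_IN_KM  # H0 in units of 1/s
    return 1.0 / h0_per_s / SECONDS_PER_YEAR / 1e9

print(recession_speed_km_s(74, 10))          # a galaxy 10 Mpc away recedes at 740 km/s
print(round(hubble_time_gyr(74), 1))         # ~13.2 billion years
print(round((74 - 67) / 74 * 100, 1))        # ~9.5: the discrepancy as a percentage
```

Note that 1/H0 only approximates the universe's age; the true age also depends on how the expansion rate has changed over time.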
So, what could explain this discrepancy?
One explanation for the mismatch involves an unexpected appearance, in the young universe, of dark energy, which is now thought to comprise 70% of the universe's contents. Proposed by astronomers at Johns Hopkins, the theory is dubbed "early dark energy" and suggests that the universe evolved like a three-act play.
Astronomers have already hypothesized that dark energy existed during the first seconds after the big bang and pushed matter throughout space, starting the initial expansion. Dark energy may also be the reason for the universe's accelerated expansion today. The new theory suggests that there was a third dark-energy episode not long after the big bang, which expanded the universe faster than astronomers had predicted. The existence of this "early dark energy" could account for the tension between the two Hubble constant values, Riess said.
Another idea is that the universe contains a new subatomic particle that travels close to the speed of light. Such speedy particles are collectively called "dark radiation" and include previously known particles like neutrinos, which are created in nuclear reactions and radioactive decays.
Yet another attractive possibility is that dark matter (an invisible form of matter not made up of protons, neutrons, and electrons) interacts more strongly with normal matter or radiation than previously assumed.
But the true explanation is still a mystery.
Riess doesn't have an answer to this vexing problem, but his team will continue to use Hubble to reduce the uncertainties in the Hubble constant. Their goal is to decrease the uncertainty to 1%, which should help astronomers identify the cause of the discrepancy.
The team's results have been accepted for publication in The Astrophysical Journal.
Maths shows the nature of 'tipping points' for climate and eco crises
Humans need to be wary of breaching a 'point of no return' that leads to ecological disaster such as loss of rainforests or irreversible climate change, according to the most detailed study of its kind.
The thin line separating the Earth's current climate from a frozen one, the so-called snowball state, has been explored in new research led by the University of Reading that combines mathematics with climate science.
Researchers analysed how random events and human action could combine to reach a tipping point, where one natural state transitions to a very different one.
The findings, published today in the journal Physical Review Letters, can be applied to the Earth's climate, landscape features or ecosystems like a rainforest to aid our understanding of how they can be altered or destroyed after reaching a point of no return.
Valerio Lucarini, Professor of Statistical Mechanics at the University of Reading and lead author of the study, said: "Changes in climate or catastrophic declines in natural features like forests all happen in a fashion similar to a journey in a mountain region. These states are like two valleys divided by a mountain pass, which must be crossed in order to move between them.
"Pinpointing this dividing line has allowed us to better understand when we are likely to see transitions in the natural world. This helps outline a safe operating space, enabling us to tailor our behaviour to remain within this and to realise when a transition could occur. Cutting down trees, damaging ecosystems or altering the climate could all cause us to stray too close to a tipping point, risking dramatic and irreversible change."
The new research builds on a previous (2017) study published in Nonlinearity by the same authors, which used a dynamic method to identify the tipping point between two competing states. That study led to an unprecedented understanding of the global stability properties of the climate and was featured as a highlight of the year by the IOP Science journal that published it.
The new study aids our understanding of the Earth's climate tipping points. The Earth flipped multiple times between a warm and snowball state about 650 million years ago, preceding the beginning of multicellular life.
The team used random fluctuations to simulate an approach to such a tipping point, showing at what point a transition from one state to the other becomes likely.
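This kind of noise-driven transition is easy to caricature numerically. In the mountain-pass picture, a state sitting in one "valley" of a double-well potential and kicked by random fluctuations will, if the noise is strong enough, eventually hop over the "pass" into the other valley. A minimal Euler-Maruyama sketch, where the quartic potential and noise strength are illustrative choices of ours, not the paper's actual climate model:

```python
import random

def simulate(steps=200_000, dt=0.01, noise=0.55, x0=-1.0, seed=1):
    """Overdamped Langevin dynamics dx = -V'(x) dt + noise dW in the
    double-well potential V(x) = x**4/4 - x**2/2 (valleys at x = -1 and +1)."""
    rng = random.Random(seed)
    x = x0
    crossed = False
    for _ in range(steps):
        drift = -(x**3 - x)  # -V'(x): pulls the state toward the nearest valley
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        if x > 0.9:          # reached the other valley at least once
            crossed = True
    return x, crossed

final_x, crossed = simulate()
print(crossed)  # at this noise level the state tips over the pass
```

Lowering `noise` well below the barrier scale (say to 0.2) leaves the trajectory trapped in the left valley for the same number of steps, which is the qualitative point: far from the tipping point, fluctuations are harmless; near it, one more kick can carry the system across.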
This can be applied to natural features like the Amazon rainforest. The rainforest experiences fluctuations caused by fires, drought or human-caused deforestation, but is able to regenerate up to a certain point. The research could help us to judge the point at which a forest would become unable to absorb these events and begin an unstoppable decline, allowing us to act accordingly to preserve it.
The team now plan to apply their findings to a real-world climate transition that can be seen today, analysing the processes that lead to the start and end of the monsoon season in parts of the world, or those responsible for different circulation regimes in the Atlantic ocean.
Professor Lucarini said: "Crossing a tipping point relies on a combination of random events that accumulate to produce the transition.
"Human action might be insignificant when the tipping point is far away, but could be the final straw as we approach it. Understanding this context is crucial to judging when we might topple into a new state."
SUBCULTURAL THEORY, DRIFT AND PUBLICITY: HOW A CONTEMPORARY CULTURE OF ADOLESCENCE RELATES TO DELINQUENCY
by Adam Monroe Stearn
ABSTRACT OF DISSERTATION
Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Criminology and Criminal Justice in the Graduate School of Arts and Sciences of Northeastern University, January, 2012
Abstract
In order to understand adolescents, criminologists have looked to cultural theories of adolescence. These cultural theories emphasize adolescent norms and values and draw on the term subcultural to denote how delinquency can be explained among segments of youths. They tend to focus either on impoverished inner-city youths or on youths without any class affiliation. Few studies have examined the extent to which adolescent subcultures exist in the middle class and what these subcultures might look like. The subcultural study of adolescence has also shifted from criminology to the realm of sociology, with the result that the role of delinquency has been all but ignored. Thus, theorists are left to wonder: to what extent do middle-class subcultures exist, and what role does delinquency play in them? The current research addressed this question by drawing on both qualitative (content of personal web pages) and quantitative (survey questions) data. The website postings come from a current social networking site and provide the researcher with personal descriptions, written interactions with other youths, and descriptions of delinquency. The survey questions stem from a survey conducted among adolescents in a largely affluent community. Both data sets were drawn upon to relate adolescent subcultural identities. In addition, the analyses examined self-reported delinquency and the relationship between identity, delinquency, and experiences within various life domains, such as the family unit, peer groups, and school.
The results of these analyses suggest that the average adolescent residing in a middle-class neighborhood identifies with multiple subcultures while at the same time stressing his or her individuality. In addition, the adolescent drifts in and out of these subcultural identities depending on the life domain he or she is in. Finally, deviance, most commonly the consumption of alcohol and marijuana, is communicated by the subcultures' members, as demonstrated by the publicity given to the behavior.