Monday, September 06, 2021

 

Greenhouses Probably Won’t Work for Growing Crops on Mars Because of Cosmic Radiation

First Humans on Mars

This artist’s concept depicts astronauts and human habitats on Mars. NASA’s Mars 2020 rover will carry a number of technologies that could make Mars safer and easier to explore for humans. Credit: NASA

Mars is a lifeless wasteland for more than one reason. Not only are the temperatures and lack of water difficult for life to deal with, but the lack of a magnetic field means radiation constantly pummels the surface. If humans ever plan to spend prolonged periods of time on the red planet, they’ll need to support an additional type of life – crops. However, it appears that even greenhouses on the surface won’t do enough to protect their plants from the deadly radiation of the Martian surface, at least according to a new paper published by researchers at Wageningen University and the Delft University of Technology.

Ideally, agriculture on the Martian surface would take place in greenhouse domes, allowing what limited sunlight reaches the planet to pass directly through to the crops inside. However, greenhouse glass made with current technology is incapable of blocking the deadly gamma radiation that constantly irradiates Mars. Those gamma radiation levels, about 17 times higher on Mars than on Earth, are enough to significantly affect crops grown in greenhouses on the surface.


UT video discussing how to live on Mars with in situ resource utilization.

The researchers ran an experiment in which they planted garden cress and rye, then compared the crop output of a group irradiated with Martian levels of gamma radiation to that of a group grown in a “normal” environment with only Earth-level radiation. After 28 days of growth, the crops in the irradiated group ended up dwarfed, with brown leaves, and yielded a significantly smaller harvest.

To mimic the gamma radiation environment, Nyncke Tack, an undergraduate researcher who performed much of the work for the project, used five separate cobalt-60 radiation sources. These were distributed evenly above the test crops to create a “radiation plane” similar to the ever-present radiation field on Mars.


UT video about colonizing the inner solar system.

Other confounding factors, including beta and alpha radiation, could also contribute to crop deterioration, though solid objects stop those types of radiation more easily. The research team, unsurprised by their findings, suggests building underground farms where the planet’s regolith blocks most, if not all, of that radiation. This would have the obvious disadvantage of losing access to sunlight, but the added benefit of being a much more controllable environment, with LEDs and temperature control standing in for environmental conditions on the surface.

To test their theory, the team is next commandeering a Cold War-era bunker in the Netherlands to see whether the same irradiation experiments affect crops grown inside when the radiation comes from outside. While not a direct analog for Martian regolith, it’s a novel approach to understanding how humans might eventually farm the sky.

Originally published on Universe Today.

 

Surprising Discovery of Light-Induced Shape Shifting of MXenes

Abstract Light Ripples

Ultrafast laser spectroscopy allows observing the motion of atoms at their natural time scales in the range of femtoseconds, the millionth of a billionth of a second. Electron microscopy, on the other hand, provides atomic spatial resolution. By combining electrons and photons in one instrument, the group of Professor Peter Baum at the University of Konstanz has developed some of the fastest electron microscopes for obtaining detailed insight into materials and their dynamics at ultimate resolutions in both space and time.

In their recent publication in ACS Nano, scientists from the Baum lab have applied this technique together with colleagues from ETH Zurich to study novel materials – two-dimensional molecularly defined sheets called MXenes – and made a surprising discovery. Using laser pulses, MXenes can be switched repeatedly between a flat and a rippled shape, opening up a wide spectrum of possible applications.

MXenes: novel two-dimensional materials

MXenes are two-dimensional sheets of transition metal carbides or nitrides in the form of few-atom-thick single layers. “MXenes are comparable to a molecule in one spatial dimension and to an extended solid in the other two,” Dr. Mikhail Volkov, first author of the recent study, describes the structure of MXenes. MXenes are synthesized by “peeling off” the thin layers of material from a precursor material – a process called exfoliation.

Light-Induced MXene Shape Shifting

Femtosecond light creates switchable nano-waves in MXenes and moves the materials’ atoms at a record-breaking speed – discovery made by physicists from Konstanz and Zurich. Credit: University of Konstanz

In contrast to most other single-layer materials, MXenes can be easily produced in large quantities, thanks to the discovery of a scalable and irreversible chemical exfoliation method. The chemical and physical properties of MXenes can be widely tuned by the choice of the transition metal, leading to widespread applications of MXenes in sensing, energy storage, light harvesting, and antibacterial action.

Nano-waves in MXenes formed by fast light

In their study, primary investigators Dr. Mikhail Volkov from the University of Konstanz and Dr. Elena Willinger from ETH Zurich have found a new way to enhance the properties of MXenes by shining fast light pulses on them. Using ultrafast electron microscopy with atomic spatial resolution, they recorded a movie of MXenes interacting with femtosecond laser pulses, showing that the laser energy transfers to the atomic lattice in a record-breaking time of merely 230 femtoseconds.

Unexpectedly, the scientists also found that femtosecond laser light can be used to switch back and forth between the originally flat surface structure of the MXene and a nano-wave form of the material – a hill-and-valley “nano-landscape” with a periodicity that is more than fifty times finer than the laser wavelength. “We can control the nano-wave’s orientation with the polarization of the laser, which means the material has an optical memory on the nanoscale. Moreover, if the laser strikes again, the nano-waved MXene turns back into a plane and remains flat during illumination. The extremely small size of the nano-waves and the fast lattice reaction are also quite surprising, and a phenomenon called plasmon-phonon coupling is likely involved,” explains Volkov.

Nano-waves boosting material performance

“Nano-structuring in the form of waves also increases the surface-to-volume ratio of the materials, making them chemically more reactive. In addition, it enhances the local electromagnetic fields, improving the coupling with light – a valuable property for sensing applications,” says Volkov. The scientists therefore expect the discovered nano-waved MXenes to show improved energy storage capacity and enhanced catalytic or antibiotic activity. “Finally, the possibility to switch the structure of MXenes between plane and wavy ‘on demand’ via a laser pulse opens up intriguing ways to use the materials in active plasmonic, chemical, and electric devices,” Volkov concludes.

Reference: “Photo-Switchable Nanoripples in Ti3C2Tx MXene” by Mikhail Volkov, Elena Willinger, Denis A. Kuznetsov, Christoph R. Müller, Alexey Fedorov and Peter Baum, 31 August 2021, ACS Nano.
DOI: 10.1021/acsnano.1c03635

Key facts:

  • Investigation of the light-induced behavior of MXenes using ultrafast electron microscopy by researchers from the University of Konstanz and ETH Zurich
  • Laser energy is transferred to the atomic lattice of MXenes at record-breaking speed (in the femtosecond range)
  • Laser light creates a wavy surface structure in the otherwise flat material; repeated laser stimulation can switch back and forth between the nano-wave and flat structures
  • Optical switchability opens up a broad spectrum of possible applications
  • Funding: European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program

“Powerful Hints” –Quantum Beginning of Spacetime (Weekend Feature)

Big Bang Image

 

“We didn’t have birds in the very early universe; we have birds later on. We didn’t have time in the early universe, but we have time later on,” said Stephen Hawking’s colleague, physicist James Hartle, at the University of California, Santa Barbara about what came before the Big Bang.

“Asking what came before the Big Bang would be like asking what lies south of the South Pole,” Stephen Hawking observed in 1981, at a gathering of many of the world’s leading cosmologists at the Vatican’s Pontifical Academy of Sciences. “There ought to be something very special about the boundary conditions of the universe, and what can be more special than the condition that there is no boundary?”

Beginning of Our Universe – One of the Big Open Questions

The beginning of our universe – if there is one – is one of the big open questions in theoretical physics. The Big Bang is one of science’s great mysteries, and it seems the plot has thickened thanks to new research that refutes prevailing theories about the birth of the universe. A classical description of the Big Bang implies a singularity: a point of infinite smallness, at which Einstein’s theory of gravity – general relativity – breaks down.

“We know a lot about how our universe evolved from a second or so after the Big Bang onward. Looking back even further, the Large Hadron Collider has taught us a great deal about what our universe was like when it was as young as a trillionth of a second old,” wrote cosmologist Dan Hooper at the University of Chicago in an email to The Daily Galaxy. “But when it comes to anything prior to that,” Hooper notes, “we are blindly extrapolating. We have some fairly good reasons to think that our universe underwent a period of hyperfast expansion — cosmic inflation — when it was very young. What happened before that is anyone’s guess.”

No-Boundary Proposal 

To tackle this problem, two proposals were put forward in the 1980s: the “no-boundary proposal” by Stephen Hawking and James Hartle, and Alexander Vilenkin’s theory known as “tunneling from nothing.” Each proposal attempted to describe a smoother beginning to spacetime, using quantum theory. Rather than the infinitely pointy needle of the classical big bang, the proposals described something closer to the rounded tip of a well-used pencil – curved, without an edge or tip.

The “no-boundary proposal,” which Hawking and Hartle fully formulated in a 1983 paper, reports Mike Zeng in Quanta, envisions the cosmos having the shape of a shuttlecock. Just as a shuttlecock has a diameter of zero at its bottommost point and gradually widens on the way up, the universe, according to the no-boundary proposal, smoothly expands from a point of zero size.

“Asking what came before the Big Bang is meaningless, according to the no-boundary proposal, because there is no notion of time available to refer to,” Hawking said in another lecture at the Pontifical Academy in 2016, a year and a half before his death.

“I don’t believe in a wavefunction of the universe, nor in full determinism,” wrote Swiss physicist Nicolas Gisin, author of “Mathematical languages shape our understanding of time in physics,” in an email to The Daily Galaxy, referring to Hartle and Hawking’s formula, the so-called “wave function of the universe” that encompasses the entire past, present and future at once — making debatable the seeds of creation, a creator, or any phase transition from a time before. “Time passes, we all know that,” Gisin continued. “Indeterminism implies the creation of new information, though meaningless information. Such creation of new information doesn’t require any God nor intelligent design.”

While Hawking’s no-boundary view has spawned much research, new mathematical work suggests such a smooth beginning could not have given birth to the ordered universe we see today.

But two years ago, a paper by Neil Turok and Job Feldbrugge of the Perimeter Institute in Canada and Jean-Luc Lehners of the Max Planck Institute for Gravitational Physics in Germany called the Hartle-Hawking proposal into question, pointing out mathematical inconsistencies in both the “no boundary” and “tunneling” proposals.

The proposal is only viable if a universe that curves out of a dimensionless point in the way Hartle and Hawking imagined naturally grows into a universe like ours, writes Zeng. Hawking and Hartle argued that indeed it would — that universes with no boundaries will tend to be huge, breathtakingly smooth, impressively flat, and expanding, just like the actual cosmos. “The trouble with Stephen and Jim’s approach is it was ambiguous,” Turok said — “deeply ambiguous.”

Hartle and Hawking’s proposal presents a radical new vision of time, reports Zeng: “Each moment in the universe becomes a cross-section of a shuttlecock-shaped cosmos; while we perceive the universe as expanding and evolving from one moment to the next, time really consists of correlations between the universe’s size in each cross-section and other properties — particularly its entropy, or disorder. Entropy increases from the cork to the feathers, aiming an emergent arrow of time. Near the shuttlecock’s rounded-off bottom, though, the correlations are less reliable; time ceases to exist and is replaced by pure space.”

The no-boundary proposal has fascinated and inspired physicists for nearly four decades. “It’s a stunningly beautiful and provocative idea,” said Neil Turok, a cosmologist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and a former collaborator of Hawking. The proposal represented a first guess at the quantum description of the cosmos — the wave function of the universe. Soon an entire field, quantum cosmology, sprang up as researchers devised alternative ideas about how the universe could have come from nothing.

 

Beautiful Proposals That Don’t Hold Up

Turok, Director and Niels Bohr Chair at the Perimeter Institute for Theoretical Physics in Ontario, says the previous models were “beautiful proposals seeking to describe a complete picture of the origin of spacetime,” but they don’t hold up to this new mathematical assessment. “Unfortunately, at the time those models were proposed, there was no adequately precise formulation of quantum gravity available to determine whether these proposals were mathematically meaningful.”

No Smooth Beginning for Spacetime

The new research, outlined in a paper called “No smooth beginning for spacetime,” demonstrates that a universe emerging smoothly from nothing would be “wild and fluctuating,” strongly contradicting observations, which show the universe to be extremely uniform across space.

“Hence the no-boundary proposal does not imply a large universe like the one we live in, but rather tiny curved universes that would collapse immediately,” said Lehners, a former Perimeter postdoc who leads the theoretical cosmology group at the Albert Einstein Institute.

Turok, Lehners, and Feldbrugge reached this result by revisiting the foundations of the field. They found a new way to use powerful mathematics developed over the past century to tackle one of physics’ most basic problems: how to connect quantum physics to gravity. The work builds on previous research Turok conducted with Steffen Gielen, a postdoc at the Canadian Institute for Theoretical Astrophysics and at Perimeter, in which they replaced the concept of the “classical big bang” with a “quantum big bounce.”

Turok, Lehners, and Feldbrugge are now trying to determine what mechanism could have kept large quantum fluctuations in check while allowing our large universe to unfold.

Rethink the Most Elementary Models of Quantum Gravity

The new research implies that “we either should look for another picture to understand the very early universe, or that we have to rethink the most elementary models of quantum gravity,” said Feldbrugge. “Uncovering this problem gives us a powerful hint. It is leading us closer to a new picture of the big bang,” concluded Turok. 

“People place huge faith in Stephen’s intuition,” Turok told Zeng. “For good reason — I mean, he probably had the best intuition of anyone on these topics. But he wasn’t always right.”

Read: “Physicists Debate Hawking’s Idea That the Universe Had No Beginning”

The Daily Galaxy, Maxwell Moe, astrophysicist, NASA Einstein Fellow, University of Arizona via The Perimeter Institute and Quanta

The New Thermodynamic Understanding of Clocks

Studies of the simplest possible clocks have revealed their fundamental limitations — as well as insights into the nature of time itself.



Pretty much anything can be a clock, but some clocks are more useful than others.


Corinne Reid for Quanta Magazine
Natalie Wolchover
Senior Writer/Editor


August 31, 2021


In 2013, a master’s student in physics named Paul Erker went combing through textbooks and papers looking for an explanation of what a clock is. “Time is what a clock measures,” Albert Einstein famously quipped; Erker hoped a deeper understanding of clocks might inspire new insights about the nature of time.

But he found that physicists hadn’t bothered much about the fundamentals of timekeeping. They tended to take time information for granted. “I was very unsatisfied by the way the literature so far dealt with clocks,” Erker said recently.

The budding physicist started thinking for himself about what a clock is — what it takes to tell time. He had some initial ideas. Then in 2015, he moved to Barcelona for his doctorate. There, a whole cadre of physicists took up Erker’s question, led by a professor named Marcus Huber. Huber, Erker and their colleagues specialized in quantum information theory and quantum thermodynamics, disciplines concerning the flow of information and energy. They realized that these theoretical frameworks, which undergird emerging technologies like quantum computers and quantum engines, also provided the right language for describing clocks.

“It occurred to us that actually a clock is a thermal machine,” Huber explained over Zoom, his dark blond dreadlocks draped over a black T-shirt. Like an engine, a clock harnesses the flow of energy to do work, producing exhaust in the process. Engines use energy to propel; clocks use it to tick.


From left: Paul Erker, Nicolai Friis, Emanuel Schwarzhans, Maximilian Lock and Marcus Huber coauthored a recent paper on clock thermodynamics.



IQOQI Vienna

Over the past five years, through studies of the simplest conceivable clocks, the researchers have discovered the fundamental limits of timekeeping. They’ve mapped out new relationships between accuracy, information, complexity, energy and entropy — the quantity whose incessant rise in the universe is closely associated with the arrow of time.

These relationships were purely theoretical until this spring, when the experimental physicist Natalia Ares and her team at the University of Oxford reported measurements of a nanoscale clock that strongly support the new thermodynamic theory.

Nicole Yunger Halpern, a quantum thermodynamicist at Harvard University who was not involved in the recent clock work, called it “foundational.” She thinks the findings could lead to the design of optimally efficient, autonomous quantum clocks for controlling operations in future quantum computers and nanorobots.

The new perspective on clocks has already provided fresh fodder for discussions of time itself. “This line of work does grapple, in a fundamental way, with the role of time in quantum theory,” Yunger Halpern said.

Gerard Milburn, a quantum theorist at the University of Queensland in Australia who wrote a review paper last year about the research on clock thermodynamics, said, “I don’t think people appreciate just how fundamental it is.”
What a Clock Is

The first thing to note is that pretty much everything is a clock. Garbage announces the days with its worsening smell. Wrinkles mark the years. “You could tell time by measuring how cold your coffee has gotten on your coffee table,” said Huber, who is now at the Technical University of Vienna and the Institute for Quantum Optics and Quantum Information Vienna.

Early in their conversations in Barcelona, Huber, Erker and their colleagues realized that a clock is anything that undergoes irreversible changes: changes in which energy spreads out among more particles or into a broader area. Energy tends to dissipate — and entropy, a measure of its dissipation, tends to increase — simply because there are far, far more ways for energy to be spread out than for it to be highly concentrated. This numerical asymmetry, and the curious fact that energy started out ultra-concentrated at the beginning of the universe, are why energy now moves toward increasingly dispersed arrangements, one cooling coffee cup at a time.

Energy’s strong spreading tendency and entropy’s resulting irreversible rise not only seem to account for time’s arrow but, according to Huber and company, also account for clocks. “The irreversibility is really fundamental,” Huber said. “This shift in perspective is what we wanted to explore.”






Coffee doesn’t make a great clock. As with most irreversible processes, its interactions with the surrounding air happen stochastically. This means you have to average over long stretches of time, encompassing many random collisions between coffee and air molecules, in order to accurately estimate a time interval. This is why we don’t refer to coffee, or garbage or wrinkles, as clocks.
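The averaging argument can be sketched numerically. The toy model below is purely illustrative (it is not the researchers' analysis, and the `mean`, `jitter`, and trial-count values are arbitrary assumptions): each stochastic "tick" has a noisy duration, and an elapsed-time estimate built from many ticks becomes proportionally more accurate as more ticks are averaged, roughly as one over the square root of their number.

```python
import random

def relative_error(n_ticks, mean=1.0, jitter=0.3, trials=2000):
    """Average relative error when estimating a time interval
    by summing n_ticks noisy tick durations (Gaussian jitter)."""
    errors = []
    for _ in range(trials):
        elapsed = sum(random.gauss(mean, jitter) for _ in range(n_ticks))
        errors.append(abs(elapsed - n_ticks * mean) / (n_ticks * mean))
    return sum(errors) / trials

# Averaging over more ticks shrinks the relative error, roughly as 1/sqrt(n),
# which is why slow, random processes like cooling coffee make poor clocks.
for n in (10, 100, 1000):
    print(n, relative_error(n))
```

The same statistics is what forces coffee-cup timekeeping to average over many random molecular collisions before it can resolve an interval at all.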

We reserve that name, the clock thermodynamicists realized, for objects whose timekeeping ability is enhanced by periodicity: some mechanism that spaces out the intervals between the moments when irreversible processes occur. A good clock doesn’t just change. It ticks.

The more regular the ticks, the more accurate the clock. In their first paper, published in Physical Review X in 2017, Erker, Huber and co-authors showed that better timekeeping comes at a cost: The greater a clock’s accuracy, the more energy it dissipates and the more entropy it produces in the course of ticking.

“A clock is a flow meter for entropy,” said Milburn.

They found that an ideal clock — one that ticks with perfect periodicity — would burn an infinite amount of energy and produce infinite entropy, which isn’t possible. Thus, the accuracy of clocks is fundamentally limited.

Indeed, in their paper, Erker and company studied the accuracy of the simplest clock they could think of: a quantum system consisting of three atoms. A “hot” atom connects to a heat source, a “cold” atom couples to the surrounding environment, and a third atom that’s linked to both of the others “ticks” by undergoing excitations and decays. Energy enters the system from the heat source, driving the ticks, and entropy is produced when waste energy gets released into the environment.




Samuel Velasco/Quanta Magazine

The researchers calculated that the ticks of this three-atom clock become more regular the more entropy the clock produces. This relationship between clock accuracy and entropy “intuitively made sense to us,” Huber said, in light of the known connection between entropy and information.

In precise terms, entropy is a measure of the number of possible arrangements that a system of particles can be in. These possibilities grow when energy is spread more evenly among more particles, which is why entropy rises as energy disperses. Moreover, in his 1948 paper that founded information theory, the American mathematician Claude Shannon showed that entropy also inversely tracks with information: The less information you have about, say, a data set, the higher its entropy, since there are more possible states the data can be in.
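Shannon's inverse link between entropy and information can be shown in a few lines (a generic illustration of his 1948 definition, not part of the clock model; the example distributions are arbitrary): a uniform distribution over possible states, about which nothing is known, has maximal entropy, while a sharply peaked one, which pins the state down, has low entropy.

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]  # no information about the state
peaked = [0.97, 0.01, 0.01, 0.01]   # state almost certainly known

print(shannon_entropy(uniform))  # 2.0 bits, the maximum for four states
print(shannon_entropy(peaked))   # well under 1 bit
```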

“There’s this deep connection between entropy and information,” Huber said, and so any limit on a clock’s entropy production should naturally correspond to a limit of information — including, he said, “information about the time that has passed.”

In another paper published in Physical Review X earlier this year, the theorists expanded on their three-atom clock model by adding complexity — essentially extra hot and cold atoms connected to the ticking atom. They showed that this additional complexity enables a clock to concentrate the probability of a tick happening into narrower and narrower windows of time, thereby increasing the regularity and accuracy of the clock.

In short, it’s the irreversible rise of entropy that makes timekeeping possible, while both periodicity and complexity enhance clock performance. But until 2019, it wasn’t clear how to verify the team’s equations, or what, if anything, simple quantum clocks had to do with the ones on our walls.
Measuring Ticks

At a conference dinner that year, Erker sat near Anna Pearson, a graduate student at Oxford who had given a talk he’d found interesting earlier that day. Pearson worked on studies of a 50-nanometer-thick vibrating membrane. In her talk, she remarked offhandedly that the membrane could be stimulated with white noise — a random mix of radio frequencies. The frequencies that resonated with the membrane drove its vibrations.

To Erker, the noise seemed like a heat source, and the vibrations like ticks of a clock. He suggested a collaboration.

Pearson’s supervisor, Ares, was enthusiastic. She’d already discussed with Milburn the possibility that the membrane could behave as a clock, but she hadn’t heard about the new thermodynamic relationships derived by the other theorists, including the fundamental limit on accuracy. “We said, ‘We can definitely measure that!’” Ares said. “‘We can measure the entropy production! We can measure the ticks!’”

The vibrating membrane isn’t a quantum system, but it’s small and simple enough to allow precise tracking of its motion and energy use. “We can tell from the energy dissipation in the circuit itself how much the entropy changes,” Ares said.

She and her team set out to test the key prediction from Erker and company’s 2017 paper: that there should be a linear relationship between entropy production and accuracy. It was unclear whether the relationship would hold for a larger, classical clock like the vibrating membrane. But when the data rolled in, “we saw the first plots [and] we thought, wow, there is this linear relationship,” Huber said.

The regularity of the membrane clock’s vibrations directly tracked with how much energy entered the system and how much entropy it produced. The findings suggest that the thermodynamic equations the theorists derived may hold universally for timekeeping devices.



Natalia Ares measured the thermodynamic properties of a clock made from a tiny vibrating membrane, shown here surrounded by circuitry in her lab at the University of Oxford. Dave Fleming; Courtesy of Natalia Ares

Most clocks don’t approach these fundamental limits; they burn far more than the minimum energy to tell time. Even the world’s most accurate atomic clocks, like those operated at the JILA institute in Boulder, Colorado, “are far from the fundamental limit of minimum energy,” said Jun Ye, a physicist at JILA. But, Ye said, “we clockmakers are trying to use quantum information science to build more precise and accurate clocks,” and so fundamental limits may become important in the future. Yunger Halpern agrees, noting that efficient, autonomous clocks may eventually govern the timing of operations inside quantum computers, removing the need for external control.

Practicalities aside, Erker’s hope has stayed the same since his student days. “The ultimate goal would be to understand what time is,” he said.
A Smooth Order

One major aspect of the mystery of time is the fact that it doesn’t play the same role in quantum mechanics as other quantities, like position or momentum; physicists say there are no “time observables” — no exact, intrinsic time stamps on quantum particles that can be read off by measurements. Instead, time is a smoothly varying parameter in the equations of quantum mechanics, a reference against which to gauge the evolution of other observables.

Physicists have struggled to understand how the time of quantum mechanics can be reconciled with the notion of time as the fourth dimension in Einstein’s general theory of relativity, the current description of gravity. Modern attempts to reconcile quantum mechanics and general relativity often treat the four-dimensional space-time fabric of Einstein’s theory as emergent, a kind of hologram cooked up by more abstract quantum information. If so, both time and space ought to be approximate concepts.


The clock studies are suggestive, in showing that time can only ever be measured imperfectly. The “big question,” said Huber, is whether the fundamental limit on the accuracy of clocks reflects a fundamental limit on the smooth flow of time itself — in other words, whether stochastic events like collisions of coffee and air molecules are what time ultimately is.

“What we’ve done is to show that even if time is a perfect, classical and smooth parameter governing time evolution of quantum systems,” Huber said, “we would only be able to track its passage” imperfectly, through stochastic, irreversible processes. This invites a question, he said: “Could it be that time is an illusion and smooth time is an emergent consequence of us trying to put events into a smooth order? It is certainly an intriguing possibility that is not easily dismissed.”
Scientists say a telescope on the Moon could advance physics — and they're hoping to build one

The Moon's lack of atmosphere and darkness could offer unique observations of the universe


By NICOLE KARLIS
PUBLISHED SEPTEMBER 5, 2021
View Of Earth's Moon Against The Sky At Night
 (Getty Images/Alexander Rieber/EyeEm)

Humans are reliant on the Moon for far more than most realize. The natural satellite that lights up the nighttime sky moderates Earth's tilt, creating a more stable and livable climate for us here on Earth. Without the Moon stabilizing that tilt, the seasons would vary chaotically over long timescales. And the Moon also creates tides, which help move heat across the ocean from the equator to the poles.

In addition to the Moon's vital effects on Earth, this enchanting orb that has mesmerized humans since history began could play a critical role in furthering our understanding of the early universe, if only we can build an observatory there.

Interestingly, there is now a plan in development to do just that. In April 2020, the National Aeronautics and Space Administration (NASA) awarded the Lunar Crater Radio Telescope (LCRT) project $500,000 for further research and development. The premise of this project is that a massive radio telescope would be built by robots on the far side of the Moon in a 100-meter-long, bowl-shaped crater, with the mission of observing radio wavelengths that are 10 meters and longer.

One might wonder: why the Moon? Isn't this something we can do here on Earth? The truth is that there is only so much data we can gather about the universe from Earth, in part due to our planet's own limitations when it comes to observing the night sky. Earth's (comparatively) dense atmosphere, light pollution and man-made electromagnetic radiation significantly hamper our ability to clearly observe the cosmos from our home planet.

In the case of radio telescopes, the Moon is an especially tantalizing choice for an observatory. On Earth, scientists are unable to observe cosmic radio waves longer than 10 meters because of the ionosphere — a layer of electrons and charged atoms and molecules that surrounds Earth and protects us from harmful rays from the sun and other hazards of space. Earth's ionosphere essentially absorbs any radio wavelengths over 10 meters long. On the Moon, the absence of an atmosphere could vastly improve observations, especially on the far side.
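For readers who think in frequencies rather than wavelengths, the 10-meter cutoff can be restated with the standard relation f = c/λ (a simple unit conversion, not a figure from the article): wavelengths longer than 10 meters correspond to frequencies below roughly 30 MHz, the band the ionosphere hides from ground-based telescopes.

```python
C = 299_792_458  # speed of light in m/s

def wavelength_to_frequency_mhz(wavelength_m):
    """Convert a radio wavelength in meters to a frequency in MHz (f = c / lambda)."""
    return C / wavelength_m / 1e6

print(wavelength_to_frequency_mhz(10))   # ~30 MHz: the ionospheric cutoff
print(wavelength_to_frequency_mhz(100))  # ~3 MHz: observable only beyond the ionosphere
```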

"Because the ionosphere is such a strong source, even [by] putting a satellite around it we won't be able to observe any of those wavelengths . . . it basically drowns out all the signals [over 10 meters]," said Saptarshi Bandyopadhyay, a Robotics Technologist at NASA Jet Propulsion Laboratory and the lead researcher on the LCRT project, in an interview with Salon. "So we need to go to a place where we are shielded from Earth, and the best place to go is the far side of the Moon."

An observatory on the far side of the Moon would have the added benefit of being perpetually shielded from electromagnetic noise from Earth. "The Moon is tidally locked, so only one side of the Moon faces us, and the other side of the Moon is always pointing away," Bandyopadhyay noted.

Bandyopadhyay argues there is an urgent need to better observe radio wavelengths over 10 meters, the kind that would have originated in the early days of our universe. Such a telescope might provide scientists with invaluable information about dark matter and dark energy.

These two substances mark one of the universe's most enduring mysteries. The existence of dark matter can be intuited from how it affects gravity, particularly the makeup and orbits of the largest-scale objects in the universe: galaxies. Yet no one knows exactly what dark matter is, even though it makes up 27 percent of the universe's total mass and energy, far more than the 5 percent of the universe that "normal" matter, like planets and stars, comprises.

Dark energy, an ill-understood force that is responsible for the accelerating expansion of our universe, is estimated to comprise 68 percent of all matter and energy in the universe.

"Right now, we have some ideas, some models of what happened at the time of the Big Bang, and then we have some idea of what the current universe looks like, where all the galaxies are, how they're moving away, and things like that, but there are many large questions in the middle [that remain unanswered]," Bandyopadhyay said. "A good part of that region is not observable because we have never looked at the universe [at wavelengths of] 10 meters or longer, and that's what we want to observe: those 10 meters and longer wavelengths, so that we can understand things like, 'why is there dark energy and dark matter, what is the pattern, and why is there so much more matter and so little antimatter in the universe?'"

Bandyopadhyay said scientists need to find answers to these questions before humanity makes "another giant leap in physics."

Such a leap in understanding of fundamental physics might be nearer than one might think. Bandyopadhyay noted that 100 years ago, scientists were just starting to understand nuclear energy. Perhaps dark energy could be used in unknown ways in the future — we just have to understand it first.

"We know the universe is made out of only 4% matter, and 95% of the universe is dark matter and dark energy, and we understand nothing about it," Bandyopadhyay said. "My personal thought is if we could at least observe those regions of the universe, where dark energy and dark matter is active, we might be able to piece together what dark energy and dark matter is."

"Maybe our grandchildren would be able to take advantage of dark matter for interstellar travel," he mused.

Bandyopadhyay concedes the thought is a little "science fiction," but argues that in the 1920s, people would likely have thought powering homes from nuclear plants was science fiction, too.

Of course, assembling such a device on the Moon would not be easy. Under the LCRT proposal, robots would build the massive radio telescope. To work well, its dish would have to be at least about 10 times wider than the longest wavelength it is meant to observe. Bandyopadhyay said the budget would need to be between $1 billion and $5 billion. Two spacecraft would be needed: one to deliver the telescope's wire mesh, a material chosen to cope with conditions on the Moon, and a second to deliver the DuAxel rovers that would assemble the dish over several days or weeks.
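Taken at face value, the sizing rule quoted above can be sketched as a quick calculation. The factor of ten is only the rule of thumb from the article, not an engineering specification:

```python
# Rule of thumb from the article: a dish should be at least ~10x the
# longest wavelength it observes in order to focus that wavelength well.

def min_dish_diameter_m(longest_wavelength_m, factor=10):
    """Rule-of-thumb minimum dish diameter in meters."""
    return factor * longest_wavelength_m

# For the 10 m-and-longer waves the LCRT targets, this gives a 100 m
# dish, consistent with the crater size mentioned in the proposal.
print(min_dish_diameter_m(10))  # 100
```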

"It's going to be a long journey," Bandyopadhyay said. "I would be very surprised if we managed to launch before I retired and I'm a very young scientist right now, but if you see all the other missions that's the kind of time it takes."

There is precedent for building a radio telescope in a natural depression: the Arecibo Observatory in Puerto Rico, which collapsed in December 2020 after its support cables failed, operated for decades and provided valuable scientific data. Like the proposed LCRT, the Arecibo Observatory took advantage of the natural concavity of the sinkhole it sat in to focus distant radio waves. Unlike the proposed lunar observatory, however, the Arecibo Observatory was not constructed by robots.

Notably, only one spacecraft has successfully soft-landed on the Moon's far side: China's Chang'e 4. Still, the very possibility of putting a radio telescope on the Moon is closer than it has ever been before. Such an instrument could pave the way for different types of telescopes, including optical ones, to make a home in other spots on the Moon, ultimately transforming humanity's view of the cosmos.

"Visual telescopes would also benefit from the lack of an atmosphere on the Moon," said Avi Loeb, the former chair of the astronomy department at Harvard University (2011-2020). "Atmospheric turbulence blurs and distorts images of sources in the sky when observing from Earth; X-rays cannot propagate through the Earth's atmosphere and can also be observed from the Moon, and finally, the Moon has no geological activity and so a LIGO-like gravitational wave detector would benefit greatly from the lack of seismic noise and the vacuum that is offered for free — eliminating the need for vacuum tubes as used in the terrestrial version."

NICOLE KARLIS is a staff writer at Salon. Tweet her @nicolekarlis.
“You Bloody Fool” Shouts First Confirmed Talking Duck


Australian musk ducks were mainly known for their distinctive smell in mating season and the lobe hanging from the males' bills. Now, however, one little ripper has transformed ideas about vocal learning in birds.

Image credit: Ken Griffiths/Shutterstock.com


By Stephen Luntz 06 SEP 2021, 11:50


“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck,” the old saying goes – but something sounding like an Australian wildlife keeper might be a duck too.

A duck named Ripper has done something never before recorded in any waterfowl: imitating sounds. Although Ripper is sadly no longer with us, his quintessentially Australian voice lives on in audio files analyzed in the journal Philosophical Transactions of the Royal Society B. They provide the first scientifically authenticated case of a duck capable of vocal learning, and could open up opportunities to investigate why only certain birds can learn in this way.

Ripper was an Australian musk duck (Biziura lobata), a species in which males perform displays to attract females and warn off rivals. Along with non-vocal “paddle-kicks” and “plonk-kicks,” these displays include so-called “whistle-kicks,” where the duck’s feet hit the water accompanied by soft low-frequency sounds and louder whistles.

Instead of singing the song of his people, however, Ripper took to sounds including one seemingly inspired by the hinge of his cage closing, while another sounded like “You bloody foo..”. It is thought his keeper may have called him a “bloody fool” often enough that it sank in.

[Audio: IFLScience, “Duck Saying ‘You Bloody Fool’”]



Many birds can learn to imitate sounds, sometimes including human speech. However, every species in which this has been reliably reported belongs to one of three clades: songbirds (including the extraordinary lyrebird), hummingbirds, and parrots. Other birds have innate calls unaffected by sounds they are exposed to. Occasional reports of vocal imitation in other species have never previously been independently verified.

First author of the study, Professor Carel ten Cate of Leiden University, told IFLScience that Ripper’s discovery could be highly valuable to understanding the origins of vocal learning.

“Some songbirds imitate more and better than others,” Professor ten Cate said. “We can look into why, but to understand how vocal learning started we need to know the ancestral trait. That evolved long ago under conditions we can’t determine.”

“Musk ducks must have evolved it much more recently,” ten Cate continued. “We can look at them and related species [that can’t learn vocally] and work out what the differences are.”

One notable clue lies in the fact Australian musk ducks get much longer and more intense maternal care than other waterfowl.

Ripper hatched in 1983 at the Tidbinbilla Nature Reserve, having been incubated by a bantam hen and then raised by hand.

While at CSIRO Wildlife Research, ornithologist Peter Fullagar visited Tidbinbilla regularly and heard from the staff there about Ripper’s capacities. He recorded the sounds Ripper made and placed them in the Australian Sound Archive. The existence of a talking duck was mentioned in books on Australian birds and a PhD thesis, but not really studied during Ripper’s life.

[Audio: IFLScience, “Duck Imitating Door”]


Two decades later, ten Cate was working on a review of vocal learning across bird species. After encountering passing references he tracked down the recordings, and eventually Fullagar. The pair collaborated, turning Ripper’s sound files into sonograms and comparing their shape to humans saying “You bloody fool” or “You bloody food” to confirm the match.
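The sonogram comparison the pair describe is, at heart, a spectrogram match: plotting how a sound's frequency content changes over time, then comparing the shapes. A minimal sketch of the underlying transform, assuming numpy is available (the window length, hop size, and toy tone are illustrative choices, not values from the study):

```python
import numpy as np

def spectrogram(signal, window=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform.

    Returns an array of shape (n_frames, window // 2 + 1): one row of
    frequency-bin magnitudes per analysis frame.
    """
    hann = np.hanning(window)  # taper each frame to reduce spectral leakage
    frames = []
    for start in range(0, len(signal) - window + 1, hop):
        frame = signal[start:start + window] * hann
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)

# Toy example: one second of a 440 Hz tone sampled at 8 kHz shows up as a
# bright horizontal band. A real comparison would overlay Ripper's
# recording against a human saying "you bloody fool".
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)
```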

Unfortunately, however, Tidbinbilla was devastated in a bushfire, and many records of Ripper’s life were lost. His keeper has also died, leaving important questions unanswered. For example, ten Cate told IFLScience that we don’t know what female musk ducks thought of Ripper’s mild profanity.

Besides Ripper, Fullagar also recorded another Australian musk duck that showed less memorable vocal learning, imitating the sounds of Pacific ducks, many of which lived nearby.

Ten Cate has previously revealed females of another Australian bird, the budgerigar, find intelligence sexy. He agreed that unrelated Australian birds appear to have a particular facility for vocal learning and related skills, and said it was unclear if this was a coincidence or a product of some distinctive feature of the continent.

The work may inspire vocal researchers to raise other male B. lobata in captivity and see if they are told to duck off.


Duck species can imitate sounds

Male, Sandford, Tasmania, Australia. Credit: JJ Harrison (jjharrison.com.au/), CC BY-SA 3.0

That a parrot can mimic sounds is nothing new. But vocal learning is not common in animals. Researcher Carel ten Cate of the Institute of Biology Leiden (IBL) of Leiden University has now discovered a duck species that can imitate sounds. "It started with an obscure reference about an Australian musk duck and ended in a nice paper."

Being able to learn how to make particular sounds is a rare characteristic. This ability occurs in humans as well as in some dolphins, whales, elephants and bats. But for most mammals, it does not seem to be in their nature. A barking cat, a mooing mouse or a singing giraffe: you won't be coming across them anytime soon.

Rare in birds too

However, some birds may be able to do this, Ten Cate explains. "Although also for this group, vocal learning is rare. We know that songbirds, parrots and hummingbirds can learn to make specific sounds. This includes many species, but that is because vocal learning originated in the ancestral species of these groups." Therefore, researchers generally assume that vocal learning evolved in only three of the 35 orders into which all bird species are classified.

"You bloody foo"

With the discovery of an imitating duck, Ten Cate adds a new order to this elite group. He was compiling his knowledge of vocal learning in birds into a review when he came upon an obscure reference about an Australian musk duck (Biziura lobata). The animal was reported to imitate a human voice, sounding like "you bloody foo(l)".

The duck was also reported to be able to imitate other sounds, such as a slamming door. "This came as a big surprise. Because even though the bird was recorded 35 years ago, it remained unnoticed by researchers in the vocal learning field until now," Ten Cate elaborates. "That makes it a very special rediscovery."

He tried to trace the source of the recording, with success. It turned out to be an Australian birder who recorded the duck around 1987. "The man, Peter Fullagar, told me that the duck was hand-reared and would have heard the sound as a duckling," Ten Cate says. He analysed the recordings in detail and published them with Fullagar as co-author. Additionally, they discovered other cases of musk ducks imitating noises, such as a snorting pony, the cough of a caretaker and a squeaking door.

Equal quality

The observations indisputably show that this duck species can imitate a surprising and divergent range of sounds. "It is the only species outside of the earlier mentioned groups that shows this quality of imitation. And the level at which they can do this is similar to other imitating species."

In the evolutionary tree of birds, the duck branch split off early from the other bird groups. "To observe vocal learning in such a group makes this find extra remarkable," Ten Cate concludes. It is not yet clear why this particular species is capable of vocal learning.


More information: Vocal imitations and production learning by Australian musk ducks (Biziura lobata). Phil. Trans. R. Soc. B 20200243. doi.org/10.1098/rstb.2020.0243

Re-evaluating vocal production learning in non-oscine birds. Phil. Trans. R. Soc. B 20200249. doi.org/10.1098/rstb.2020.0249

Provided by Leiden University 

 

“Tipping Points” in Earth’s System Triggered Extreme Climate Change 55 Million Years Ago

Earth Weather Climate Change

Scientists have uncovered a fascinating new insight into what caused one of the most rapid and dramatic instances of climate change in the history of the Earth.

A team of researchers, led by Dr. Sev Kender from the University of Exeter, has made a pivotal breakthrough into the cause of the Paleocene-Eocene Thermal Maximum (PETM), an extreme global warming event that lasted around 150,000 years and saw significant temperature rises.

Although previous studies have suggested that volcanic activity contributed to the vast CO2 emissions that drove the rapid climate change, the trigger for the event has remained less clear.

In the new study, the researchers have identified elevated levels of mercury just before and at the outset of the PETM – which could be caused by expansive volcanic activity – in samples taken from sedimentary cores in the North Sea.  

Crucially, analysis of the rock samples also showed that in the early stages of the PETM there was a significant drop in mercury levels, suggesting that at least one other carbon reservoir released significant greenhouse gases as the phenomenon took hold.

The research indicates the existence of tipping points in the Earth system, which could trigger the release of additional carbon reservoirs that drove the Earth's climate to unprecedentedly high temperatures.

The pioneering research, which also includes experts from the British Geological Survey, the University of Oxford, Heriot-Watt University and the University of California, Riverside, could give a fresh understanding of how modern-day climate change will affect the Earth in the centuries to come.

The research is published in Nature Communications on August 31, 2021.

Dr. Kender, a co-author on the study from the Camborne School of Mines, based at the University of Exeter's Penryn Campus in Cornwall, said: "Greenhouse gases such as CO2 and methane were released to the atmosphere at the start of the PETM in just a few thousand years.

“We wanted to test the hypothesis that this unprecedented greenhouse gas release was triggered by large volcanic eruptions. As volcanoes also release large quantities of mercury, we measured the mercury and carbon in the sediment cores to detect any ancient volcanism.  

“The surprise was that we didn’t find a simple relationship of increased volcanism during the greenhouse gas release. We found volcanism occurred only at the beginning phase, and so another source of greenhouse gasses must have been released after the volcanism.” 

The PETM phenomenon, which is one of the most rapid periods of warming in the Earth’s history, occurred as Greenland pulled away from Europe.  

While the reasons behind how such vast quantities of CO2 were released to trigger this extensive period of warming lay hidden for many years, scientists have recently suggested that volcanic eruptions were the main driver. 

However, while carbon records and modeling have suggested that vast amounts of volcanic carbon were released, it has not been possible to identify the trigger point for the PETM – until now.

In the new study, the researchers examined two new sedimentary cores from the North Sea, which showed high levels of mercury relative to organic carbon.

These samples showed numerous peaks in mercury levels both before and at the outset of the PETM, suggesting it was triggered by volcanic activity.

However, the study also showed that at least one other carbon reservoir was subsequently released as the PETM took hold, as mercury levels appear to decline in the second part of its onset.
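The detection logic behind this proxy can be sketched as follows. All numbers below are made up for illustration, not values from the study; the key idea is that mercury is normalized against total organic carbon (TOC) so that enrichment reflects extra mercury input, such as volcanism, rather than mere variation in organic-matter content.

```python
# Illustrative sketch of the Hg/TOC volcanism proxy (all values hypothetical).

def hg_toc_ratios(hg_ppb, toc_percent):
    """Hg/TOC ratio (ppb per % TOC) for each sample down a core."""
    return [hg / toc for hg, toc in zip(hg_ppb, toc_percent)]

def volcanic_peaks(ratios, baseline, threshold=2.0):
    """Indices where Hg/TOC exceeds `threshold` times the background level."""
    return [i for i, r in enumerate(ratios) if r > threshold * baseline]

# Hypothetical down-core samples, ordered in time: mercury enrichment
# before/at the PETM onset, then a decline as other reservoirs take over.
hg = [30, 32, 90, 110, 28, 25]        # mercury, ppb
toc = [1.0, 1.1, 1.0, 1.2, 1.1, 1.0]  # total organic carbon, %
ratios = hg_toc_ratios(hg, toc)
print(volcanic_peaks(ratios, baseline=30.0))  # peaks only in the early samples
```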

Dr. Kender added: “We were able to carry out this research as we have been working on exceptionally well preserved new core material with collaborators from the Geological Survey of Denmark and Greenland. The excellent preservation allowed detailed detection of both the carbon released to the atmosphere and the mercury. As the North Sea is close to the region of volcanism thought to have triggered the PETM, these cores were in an ideal position to detect the signals. 

“The volcanism that caused the warming was probably vast deep intruded sills producing thousands of hydrothermal vents on a scale far beyond anything seen today. Possible secondary sources of greenhouse gases were melting permafrost and sea floor methane hydrates, as a result of the initial volcanic warming.” 

Reference: “Paleocene/Eocene carbon feedbacks triggered by volcanic activity” by Sev Kender, Kara Bogus, Gunver K. Pedersen, Karen Dybkjær, Tamsin A. Mather, Erica Mariani, Andy Ridgwell, James B. Riding, Thomas Wagner, Stephen P. Hesselbo and Melanie J. Leng, 31 August 2021, Nature Communications.
DOI: 10.1038/s41467-021-25536-0

Canadian healthcare workers worry about safety as anti-vaccine messaging escalates


BY MONIKA GUL AND NIKITHA MARTINS
Posted Sep 5, 2021 

Hundreds of COVID-19 protesters gathered outside of Vancouver City Hall Wednesday. This was just one of many protests held across Canada at hospital sites to speak out against vaccines and vaccine passports. (Courtesy Twitter/imclaireallen)

SUMMARY


Frustrations over the pandemic reaching a boiling point, as healthcare workers increasingly worry about safety at work


Concerns have been raised after protestors against COVID regulations gathered at hospitals across the country


Protests in B.C. saw health care workers verbally abused, brought to tears, and in one case physically assaulted




VANCOUVER (NEWS 1130) — With frustrations over the COVID-19 pandemic reaching a boiling point, doctors and nurses say they’re getting increasingly worried about safety at work.

This week, protesters in B.C., Ontario and Quebec took to busy streets and hospital entrances, disrupting many frontline workers.

Days after the demonstration, healthcare workers have continued to speak out against how the disruptive rallies affected people seeking treatment and other hospital services.

Related Articles:

Heartbreaking stories from the frontline: B.C. health care workers brought to tears by COVID protesters

B.C. cancer patients forced to walk through mob of protesters to get to chemo appointments

Man counter-protesting anti-vaccine rally at Kelowna hospital spit on, shoved — has no regrets

Dr. Alika Lafontaine, the president-elect of the Canadian Medical Association, says healthcare workers and patients have been stressed throughout the pandemic, but “this stress level has reached a new level.”

“Across the country, we’ve been seeing patients and healthcare workers who are against mandatory vaccinations and other COVID restrictions marching in protest against the role that government is having in this and healthcare workers are now being dragged into this — where they are now the focus of a lot of aggression, and at times violence,” Lafontaine says.

In Victoria, health care workers were verbally abused and, in one case, physically assaulted.

A similar situation also occurred in Kelowna, where nurses were brought to tears.

Related Video:


Lafontaine says the bullying and harassment of healthcare workers who have worked tirelessly for months is wrong, unacceptable, and has crossed a line.

“This type of escalation that’s now above just the frustration level, and now enters into territory where … the Canadian Medical Association and other advocacy groups for healthcare workers across the country are now increasingly concerned about the safety of healthcare workers.”

Lafontaine hopes to remind people that whether a patient is “upset with us or not, and vaccinated or not … at the end of the day, health care workers will continue to be here for you.” That, he says, is all the more reason to “keep our healthcare workforce healthy and resilient, in order for us to continue to be providing the care that you need during this pandemic.”

“But the stresses on the system can be mitigated if people make better choices,” he says, adding that unvaccinated patients have continued to drive the pandemic, leading to frustration among healthcare workers.

“We don’t only worry about them in the state that they’re in, but also that a lot of this here is preventable if people just get vaccinated and follow public health guidelines.”