Sunday, March 20, 2022

Hubble Spies a Stunning Spiral in Constellation Coma Berenices

Spiral Galaxy NGC 4571

Hubble Space Telescope’s Wide Field Camera 3 was used to capture this cosmic portrait that features a stunning view of the spiral galaxy NGC 4571, which lies approximately 60 million light-years from Earth in the constellation Coma Berenices. Credit: ESA/Hubble & NASA, J. Lee and the PHANGS-HST Team

This cosmic portrait — captured with the NASA/ESA Hubble Space Telescope’s Wide Field Camera 3 — shows a stunning view of the spiral galaxy NGC 4571, which lies approximately 60 million light-years from Earth in the constellation Coma Berenices. This constellation — whose name translates as Berenice’s Hair — was named after an Egyptian queen who lived more than 2,200 years ago.

As majestic as spiral galaxies like NGC 4571 are, they are far from the largest structures known to astronomers. NGC 4571 is part of the Virgo cluster, which contains more than a thousand galaxies. This cluster is in turn part of the larger Virgo supercluster, which also encompasses the Local Group, home to our own galaxy, the Milky Way. Even larger than superclusters are galaxy filaments — the largest known structures in the Universe.

This image comes from a large program of observations designed to produce a treasure trove of combined observations from two great observatories: Hubble and ALMA. ALMA, the Atacama Large Millimeter/submillimeter Array, is a vast telescope consisting of 66 high-precision antennas high in the Chilean Andes, which together observe at wavelengths between infrared and radio waves. This allows ALMA to detect the clouds of cool interstellar dust which give rise to new stars. Hubble’s razor-sharp observations at ultraviolet wavelengths, meanwhile, allow astronomers to pinpoint the location of hot, luminous, newly formed stars. Together, the ALMA and Hubble observations provide a vital repository of data for astronomers studying star formation, as well as laying the groundwork for future science with the NASA/ESA/CSA James Webb Space Telescope.

Astronomy & Astrophysics 101: Gravitational Lensing

GAL-CLUS-022058s

NASA/ESA Hubble Space Telescope image of GAL-CLUS-022058s, located in the southern hemisphere constellation of Fornax (The Furnace). Credit: ESA/Hubble & NASA, S. Jha, Acknowledgement: L. Shatz

Gravitational lensing occurs when a massive celestial body — such as a galaxy cluster — causes a sufficient curvature of spacetime for the path of light around it to be visibly bent, as if by a lens. The body causing the light to curve is accordingly called a gravitational lens.

According to Einstein’s general theory of relativity, time and space are fused together in a quantity known as spacetime. Within this theory, massive objects cause spacetime to curve, and gravity is simply the curvature of spacetime. As light travels through spacetime, the theory predicts that the path taken by the light will also be curved by an object’s mass. Gravitational lensing is a dramatic and observable example of Einstein’s theory in action. Extremely massive celestial bodies such as galaxy clusters cause spacetime to be significantly curved. In other words, they act as gravitational lenses. When light from a more distant light source passes by a gravitational lens, the path of the light is curved, and a distorted image of the distant object — maybe a ring or halo of light around the gravitational lens — can be observed.
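
The geometry of this effect can be summarized in a single textbook relation. For the idealized case of a compact lens of mass M sitting directly between us and a background source, the distorted image becomes a ring whose angular radius, the Einstein radius, is given by the following standard simplification (it is not quoted in the article, and real cluster lenses have extended, lumpy mass distributions that produce more complex arcs):

\[
\theta_E = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{ls}}{D_l\,D_s}}
\]

where D_l, D_s, and D_ls are the distances to the lens, to the source, and between lens and source, respectively. The more massive the lens, the larger the ring it produces on the sky.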


Gravitational lensing occurs when a massive celestial body — such as a galaxy cluster — causes a sufficient curvature of spacetime for the path of light around it to be visibly bent, as if by a lens. The body causing the light to curve is accordingly called a gravitational lens. Credit: ESA/Hubble (M. Kornmesser & L. L. Christensen)

An important consequence of this lensing distortion is magnification, allowing us to observe objects that would otherwise be too far away and too faint to be seen. Hubble makes use of this magnification effect to study objects that would otherwise be beyond the sensitivity of its 2.4-meter-diameter primary mirror, showing us thereby the most distant galaxies humanity has ever encountered.
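
For the same idealized point-mass lens, the total magnification depends only on how closely the source lines up behind the lens. Writing the misalignment as u, in units of the Einstein radius, the standard result (again a textbook simplification, not taken from the article) is:

\[
\mu = \frac{u^{2} + 2}{u\sqrt{u^{2} + 4}}
\]

which grows without bound as u approaches zero: a nearly perfect alignment can brighten a background galaxy many times over, bringing it within reach of Hubble's 2.4-meter mirror.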


This Space Sparks Episode explores the concept of gravitational lensing. This effect is only visible in rare cases and only the best telescopes — including the NASA/ESA Hubble Space Telescope — can observe the results of gravitational lensing. The strong gravity of a massive object, such as a cluster of galaxies, warps the surrounding space, and light from distant objects traveling through that warped space is curved away from its straight-line path. This video highlights how Hubble’s sensitivity and high resolution allow it to see details in these faint, distorted images of distant galaxies.

Hubble’s sensitivity and high resolution allow it to see faint and distant gravitational lenses that cannot be detected with ground-based telescopes, whose images are blurred by the Earth’s atmosphere. Gravitational lensing produces multiple images of the original galaxy, each distorted into a characteristic arc-like shape or even into a complete ring. Hubble was the first telescope to resolve details within these multiple arc-shaped features. Its sharp vision can reveal the shape and internal structure of the lensed background galaxies directly.

Word Bank: Gravitational Lensing

Credit: ESA/Hubble & NASA, S. Jha, Acknowledgement: L. Shatz

An image released in 2020 as part of the ESA/Hubble Picture of the Week series of the object known as GAL-CLUS-022058s revealed the largest ring-shaped lensed image of a galaxy (known as an Einstein ring) ever discovered, and one of the most complete. The near-exact alignment of the background galaxy with the central elliptical galaxy of the cluster warped and magnified the image of the background galaxy into an almost perfect ring.


International Year of Glass gets cracking in Geneva – Physics World

The International Year of Glass (IYOG2022) kicked off with a two-day opening ceremony at the Palace of Nations in Geneva, Switzerland. IYOG2022 will celebrate this versatile material that underpins many technologies that have transformed the modern world. Events throughout the year will also highlight why glass is critical in achieving the United Nations’ 2030 Agenda for Sustainable Development.

“Welcome to transparency, welcome to sustainability, welcome to the age of glass,” said IYOG2022 chair Alicia Durán in her opening remarks. Durán, a physicist at the Spanish Research Council (CSIC) in Madrid, played a key role in building support for the project while serving as president of the International Commission on Glass (ICG) between 2018 and 2021.

During the past three years, Spain’s permanent mission at the UN headquarters in New York led the process for obtaining an official resolution, with expressed support from 18 other nations. The global glass industry and cultural institutions also backed IYOG2022, which now has 2100 endorsements from 90 countries across five continents.

One of the aims for IYOG2022 is to highlight the role of glass in advancing civilization. Ambassadors from Turkey and Egypt spoke at the opening ceremony as both nations have rich histories in the origins of modern glassmaking. “Let us cherish the significance of this brilliant and versatile material in humanity’s past, present and future,” said Sadik Arslan, head of Turkey’s permanent mission at the UN in Geneva.

This year is the 100th anniversary of the discovery of Tutankhamun’s tomb in Egypt’s Valley of the Kings. To mark the occasion, Egypt will inaugurate its new Grand Egyptian Museum just outside Cairo, which showcases ornamental glass from Ancient Egypt. To coincide, Egypt will host an IYOG2022 event, “From Pharaohs to High Tech Glass”, on 18–20 April.

All of this year’s major glass fairs will have a focus on IYOG2022. China, the world’s biggest producer and consumer of glass, will host China Glass 2022 in Shanghai alongside a number of satellite events on 11–15 April. This year is also the centenary of the German Glass Technology Society (DGG), which will be celebrated on 2–8 July in Berlin at the ICG’s international congress.

Elsewhere, the US will host a National Day of Glass event on 3–5 April in Washington, DC, while Mexico will host GLASSMAN in Monterrey on 11–12 May and Russia has MIR STEKLA in Moscow on 6–9 June. IYOG2022’s closing ceremony will take place in Japan on 8–9 December.

Events will highlight how glass-based technologies can contribute to the UN’s 17 sustainable development goals. In renewable energy, glass is used for concentrated solar power, photovoltaics and the fibreglass of wind turbines. Glasswool is used for insulating houses, while new window technologies can make buildings efficient and light. Glass is non-toxic and infinitely recyclable, so it is also a key material for circular economies.

Optical fibres are the digital highways for the Internet and touch-sensitive glass screens have revolutionized how we communicate. In healthcare, non-reactive glass containers that can withstand ultracold temperatures have been essential for transporting COVID-19 vaccines. Bioglass has been used for half a century to assist bone healing, and recent advances have seen glass nanostructures used for drug delivery and wound healing.

IYOG2022 organizers hope to inspire students through cultural events and initiatives, while addressing gender balance in science and the needs of developing countries. The Museum of Glass Art in Alcorcón, Madrid, is hosting an event in June called Women in Glass, Art and Science, which will highlight contributions of women from Ibero-America.

To close the opening ceremony in Geneva on Friday afternoon, the Japanese artist Kimiake Higuchi spoke about her creations using pâte de verre. In this technique, finely crushed glass is mixed with binding material and colouring agents to create a paste that is moulded and fired.

Lab gender roles not due to personal choice, finds study


Hands on: A study has shown that male and female students have similar preferences for handling equipment, although this is not always reflected in lab sessions. (Courtesy: iStock/GCShutter)

Male and female preferences for carrying out certain tasks during experimental laboratory work are largely the same – and do not support stereotypical gender roles that are often seen in lab settings. That is according to a study carried out by Natasha Holmes from Cornell University and colleagues, who say the tasks that students choose to do in inquiry-based lab sessions could be due to biases and different levels of confidence among men and women.

The new study follows on from research published by the same team in 2020, which found that when students make their own decisions about experimental design in inquiry-based lab sessions, male students are more likely to handle equipment while female students spend more time taking notes and in communication roles. This gender disparity seemed to develop implicitly, as individuals were not allocated roles by instructors, and group members rarely discussed which tasks they would each be doing.

To find out if the students’ personal preferences for different tasks might be driving this trend, in the new study the researchers conducted interviews with undergraduate students followed by a survey. Out of 100 individuals, the researchers found that male and female preferences for each of the tasks were largely the same. Crucially, female students expressed a similar level of preference for handling the equipment as that of male students.

The 2020 paper found that this gender bias appears in inquiry-based lab sessions, but not in traditional lab sessions, which are more structured, with students being given instructions for how to carry out the experiments. This difference presents a conundrum for the researchers as they have previously found that inquiry-based labs boost students’ engagement and encourage them to take more “ownership” of their learning, compared with traditional labs.

“We think the gendered behaviours emerge during that subtle, collegial volunteering,” Holmes told Physics World. “We think the bias is related to students’ desire to be friendly and not wanting to argue with group mates who volunteer for certain roles, as well as male and female students having different levels of initial confidence to jump into a particular role.”

Holmes and colleagues are now focussing on how to retain the educational benefits of inquiry-based labs while reducing the likelihood of gender bias emerging. “Although this study ruled out an important hypothesis, we have a lot more questions now,” she says. “We’re planning to test out different instructional interventions to see what is most effective.”

Those include assigning roles to the students and instructing them to rotate during the session or between labs; having open discussions about how some students might be more comfortable jumping into the equipment roles; and having students write down in their experiment designs how they are all going to contribute. “We think that this will make them explicitly reflect on ways to get everyone involved and make effective use of their group members,” adds Holmes.


What made the last century’s great innovations possible?

Transforming how people live requires more than scientific discovery



In the early decades of the 20th century, automobiles, telephone service and radio (broadcast of the 1920 U.S. presidential election results from KDKA station in Pittsburgh is shown) were transforming life.
HULTON ARCHIVE/GETTY IMAGES


By Jon Gertner
MARCH 18, 2022 

In the early decades of the 20th century, a slew of technologies began altering daily life with seemingly unprecedented speed and breadth. Suddenly, consumers could enjoy affordable automobiles. Long-distance telephone service connected New York with San Francisco. Electric power and radio broadcasts came into homes. New methods for making synthetic fertilizer portended a revolution in agriculture. And on the horizon, airplanes promised a radical transformation in travel and commerce.

As the technology historian Thomas P. Hughes noted: “The remarkably prolific inventors of the late nineteenth century, such as [Thomas] Edison, persuaded us that we were involved in a second creation of the world.” By the 1920s, this world — more functional, more sophisticated and increasingly more comfortable — had come into being.

Public figures like Edison or, say, Henry Ford were often described as inventors. But a different word, one that caught on around the 1950s, seemed more apt in describing the technological ideas making way for modern life: innovation. While its origins go back some 500 years (at first it was used to describe a new legal and then religious idea), the word’s popularization was a post–World War II phenomenon.

The elevation of the term likely owes a debt to the Austrian-American economist Joseph Schumpeter, according to the late science historian Benoît Godin. In his academic writings, Schumpeter argued that vibrant economies were driven by innovators whose work replaced existing products or processes. “Innovation is the market introduction of a technical or organizational novelty, not just its invention,” Schumpeter wrote in 1911.

An invention like Fritz Haber’s process for making synthetic fertilizer, developed in 1909, was a dramatic step forward, for example. Yet what changed global agriculture was a broad industrial effort to transform that invention into an innovation — that is, to replace a popular technology with something better and cheaper on a national or global scale.

In the mid-century era, one of the leading champions of America’s innovation capabilities was Vannevar Bush, an MIT academic. In 1945, Bush worked on a landmark report — famously titled “Science, The Endless Frontier” — for President Harry Truman. The report advocated for a large federal role in funding scientific research. Though Bush didn’t actually use the word innovation in the report, his manifesto presented an objective for the U.S. scientific and industrial establishment: Grand innovative vistas lay ahead, especially in electronics, aeronautics and chemistry. And creating this future would depend on developing a feedstock of new scientific insights.

Vannevar Bush was one of the 20th century’s leading champions of American innovation. His landmark report, “Science, The Endless Frontier,” advocated for federal funding for scientific research.
MPI/GETTY IMAGES

Though innovation depended on a rich trove of discoveries and inventions, the innovative process often differed, both in its nature and complexity, from what occurred within scientific laboratories. An innovation often required larger teams and more interdisciplinary expertise than an invention. Because it was an effort that connected scientific research to market opportunities, it likewise aimed to have both society-wide scale and impact. As the radio, telephone and airplane had proved, the broad adoption of an innovative product ushered in an era of technological and social change.


Bringing inventions “to scale” in large markets was precisely the aim of big companies such as General Electric or American Telephone & Telegraph, which was then the national telephone monopoly. Indeed, at Bell Laboratories, which served as the research and development arm of AT&T, a talented engineer named Jack Morton began to think of innovation as “not just the discovery of new phenomena, nor the development of a new product or manufacturing technique, nor the creation of a new market. Rather, the process is all these things acting together in an integrated way toward a common industrial goal.”

Morton had a difficult job. The historical record suggests he was the first person in the world asked to figure out how to turn the transistor, discovered in December 1947, from an invention into a mass-produced innovation. He put tremendous energy into defining his task — a job that in essence focused on moving beyond science’s eureka moments and pushing the century’s technologies into new and unexplored regions.

From invention to innovation

In the 1940s, Vannevar Bush’s model for innovation was what’s now known as “linear.” He saw the wellspring of new scientific ideas, or what he termed “basic science,” as eventually moving in a more practical direction toward what he deemed “applied research.” In time, these applied scientific ideas — inventions, essentially — could move toward engineered products or processes. Ultimately, in finding large markets, they could become innovations.

In recent decades, Bush’s model has come to be seen as simplistic. The educator Donald Stokes, for instance, has pointed out that the line between basic and applied science can be indistinct. Bush’s paradigm can also work in reverse: New knowledge in the sciences can derive from technological tools and innovations, rather than the other way around. This is often the case with powerful new microscopes, for instance, which allow researchers to make observations and discoveries at tinier and tinier scales. More recently, other scholars of innovation have pointed to the powerful effect that end users and crowdsourcing can have on new products, sometimes improving them dramatically — as with software — by adding new ideas for their own use.

Above all, innovations have increasingly proved to be the sum of unrelated scientific discoveries and inventions; combining these elements at a propitious moment in time can result in technological alchemy. Economist Mariana Mazzucato, for instance, has pointed to the iPhone as an integrated wonder of myriad breakthroughs, including touch screens, GPS, cellular systems and the Internet, all developed at different times and with different purposes.

At least in the Cold War era, when military requests and large industrial labs drove much of the new technology, the linear model nevertheless succeeded well. Beyond AT&T and General Electric, corporate titans like General Motors, DuPont, Dow and IBM viewed their R&D labs, stocked with some of the country’s best scientists, as foundries where world-changing products of the future would be forged.

These corporate labs were immensely productive in terms of research and were especially good at producing new patents. But not all their scientific work was suitable for driving innovations. At Bell Labs, for instance, which funded a small laboratory in Holmdel, N.J., situated amid several hundred acres of open fields, a small team of researchers studied radio wave transmissions.

Karl Jansky, a young physicist, installed a moveable antenna on the grounds that revealed radio waves emanating from the center of the Milky Way. In doing so, he effectively founded the field of radio astronomy. And yet, he did not create anything useful for his employer, the phone company, which was more focused on improving and expanding telephone service. To Jansky’s disappointment, he was asked to direct his energies elsewhere; there seemed no market for what he was doing.

Above all, corporate managers needed to perceive an overlap between big ideas and big markets before they would dedicate funding and staff toward developing an innovation. Even then, the iterative work of creating a new product or process could be slow and plodding — more so than it may seem in retrospect. Bell Labs’ invention of the point-contact transistor, in December 1947, is a case in point. The first transistor was a startling moment of insight that led to a Nobel Prize. Yet in truth the world changed little from what was produced that year.

The three credited inventors — William Shockley, John Bardeen and Walter Brattain — had found a way to create a very fast switch or amplifier by running a current through a slightly impure slice of germanium. Their device promised to transform modern appliances, including those used by the phone company, into tiny, power-sipping electronics. And yet the earliest transistors were difficult to manufacture and impractical for many applications. (They were tried in bulky hearing aids, however.) What was required was a subsequent set of transistor-related inventions to transform the breakthrough into an innovation.

John Bardeen, William Shockley and Walter Brattain (shown from left to right) are credited with the invention of the transistor in 1947. But there were several hurdles to overcome before the transistor could transform electronics.
HULTON ARCHIVE/GETTY IMAGES

The first crucial step was the junction transistor, a tiny “sandwich” of various types of germanium, theorized by Shockley in 1948 and created by engineering colleagues soon after. The design proved manufacturable by the mid-1950s, thanks to efforts at Texas Instruments and other companies to transform it into a dependable product.

A second leap overcame the problems of germanium, which performed poorly under certain temperature and moisture conditions and was relatively rare. In March 1955, Morris Tanenbaum, a young chemist at Bell Labs, hit on a method using a slice of silicon. It was, crucially, not the world’s first silicon transistor — that distinction goes to a device created a year before. But Tanenbaum reflected that his design, unlike the others, was easily “manufacturable,” which defined its innovative potential. Indeed, he realized its value right away. In his lab notebook on the evening of his insight, he wrote: “This looks like the transistor we’ve been waiting for. It should be a cinch to make.”

Finally, several other giant steps were needed. One came in 1959, also at Bell Labs, when Mohamed Atalla and Dawon Kahng created the first silicon metal-oxide-semiconductor field-effect transistor — known as a MOSFET — which used a different architecture than either junction or point-contact transistors. Today, almost every transistor manufactured in the world, trillions each second, results from the MOSFET breakthrough. This advance allowed for the design of integrated circuits and chips implanted with billions of tiny devices. It allowed for powerful computers and moonshots. And it allowed for an entire world to be connected.


Getting there

The technological leaps of the 1900s — microelectronics, antibiotics, chemotherapy, liquid-fueled rockets, Earth-observing satellites, lasers, LED lights, disease-resistant seeds and so forth — derived from science. But these technologies also spent years being improved, tweaked, recombined and modified to make them achieve the scale and impact necessary for innovations.

Some scholars — the late Harvard professor Clayton Christensen, for instance, who in the 1990s studied the way new ideas “disrupt” entrenched industries — have pointed to how waves of technological change can follow predictable patterns. First, a potential innovation with a functional advantage finds a market niche; eventually, it expands its appeal to users, drops in cost and step by step pushes aside a well-established product or process. (Over time the transistor, for example, has mostly eliminated the need for vacuum tubes.)

But there has never been a comprehensive theory of innovation that cuts across all disciplines, or that can reliably predict the specific path by which we end up transforming new knowledge into social gains. Surprises happen. Within any field, structural obstacles, technical challenges or a scarcity of funding can stand in the way of development, so that some ideas (a treatment for melanoma, say) move to fruition and broad application faster than others (a treatment for pancreatic cancer).

There can likewise be vast differences in how innovation occurs in different fields. In energy, for example, which involves vast integrated systems and requires durable infrastructure, the environmental scientist and policy historian Vaclav Smil has noted, innovations can take far longer to achieve scale. In software development, by contrast, new products can be rolled out cheaply and can reach a huge audience almost instantly.

At the very least, we can say with some certainty that almost all innovations, like most discoveries and inventions, result from hard work and good timing — a moment when the right people get together with the right knowledge to solve the right problem. In one of his essays on the subject, business theorist Peter Drucker pointed to the process by which business managers “convert society’s needs into opportunities” as the definition of innovation. And that may be as good an explanation as any.

Even innovations that seem fast — for instance, mRNA vaccines for COVID-19 — are often a capstone to many years of research and discovery. Indeed, it’s worth noting that the scientific groundwork preceding the vaccines’ rollout developed the methods that could later be used to solve a problem when the need became most acute. What’s more, the urgency of the situation presented an opportunity for three companies — Moderna and, in collaboration, Pfizer and BioNTech — to utilize a vaccine invention and bring it to scale within a year.

Innovations that seem fast, like vaccines for COVID-19, often rely on many years of scientific discovery, plus a societal need. MARIO TAMA/GETTY IMAGES

“The history of cultural progress is, almost without exception, a story of one door leading to another door,” the tech journalist Steven Johnson has written. We usually explore just one room at a time, and only after wandering around do we proceed to the next, he writes. Surely this is an apt way to think of our journey up to now. It might also lead us to ask: What doors will we open in future decades? What rooms will we explore?

On the one hand, we can be assured that the advent of mRNA vaccines portends applications for a range of other diseases in coming years. It seems more challenging to predict — and, perhaps, hazardous to underestimate — the human impact of biotechnology, such as CRISPR gene editing or synthetic DNA. And it seems equally hard to imagine with precision how a variety of novel digital products (robotics, for example, and artificial intelligence) will be integrated into societies of the future. Yet without question they will.

Erik Brynjolfsson of Stanford and Andrew McAfee of MIT have posited that new digital technologies mark the start of a “second machine age” that in turn represents “an inflection point in the history of our economies and societies.” What could result is an era of greater abundance and problem-solving, but also enormous challenges — for instance, as computers increasingly take on tasks that result in the replacement of human workers.

If this is our future, it won’t be the first time we’ve struggled with the blowback from new innovations, which often create new problems even as they solve old ones. New pesticides and herbicides, to take one example, allowed farmers to raise yields and ensure good harvests; they also devastated fragile ecosystems. Social media connected people all over the world; it also led to a tidal wave of propaganda and misinformation. Most crucially, the discovery of fossil fuels, along with the development of steam turbines and internal combustion engines, led us into an era of global wealth and commerce. But these innovations have bequeathed a legacy of CO2 emissions, a warming planet, diminished biodiversity and the possibility of impending environmental catastrophe.

The climate dilemma almost certainly presents the greatest challenge of the next 50 years. Some of the innovations needed for an energy transition — in solar and wind power, and in batteries and home heat pumps — already exist; what’s required are policies that allow for deployment on a rapid and more massive scale. But other ideas and inventions — in the fields of geothermal and tidal power, for instance, or next-generation nuclear plants, novel battery chemistries and carbon capture and utilization — will require years of development to drive costs down and performance up. The climate challenge is so large and varied, it seems safe to assume we will need every innovation we can possibly muster.
Tackling the problem of climate change will draw on existing innovations, such as solar power (a solar thermal power plant in Morocco is shown), and new ones.
JERÓNIMO ALBA/ALAMY STOCK PHOTO

Perhaps the largest unknown is whether success is assured. Even so, we can predict what a person looking back a century from now might think. They will note that we had a multitude of astonishing scientific breakthroughs in our favor at this moment in time — breakthroughs that pointed the way toward innovations and a cooler, safer, healthier planet. They will reflect that we had a range of extraordinary tools at our beck and call. They will see that we had great engineering prowess, and great wealth. And they will likely conclude that with all the problems at hand, even some that seemed fearsome and intractable, none should have proved unsolvable.

About
  Jon Gertner is a journalist based in New Jersey. He is author of The Idea Factory, about innovation at Bell Labs, and The Ice at the End of the World, about Greenland’s melting ice sheet.

Ancient seafarers built the Mediterranean’s largest known sacred pool

A big pool on a tiny island helped Phoenicians track the stars and their gods


Shown after excavations, a sacred pool built by Phoenicians around 2,550 years ago on a tiny Mediterranean island includes a replica of a statue of the god Ba’al at its center.

SAPIENZA UNIVERSITY OF ROME EXPEDITION TO MOTYA

By Bruce Bower

MARCH 16, 2022 

On a tiny island off Sicily’s west coast, a huge pool long ago displayed the star-studded reflections of the gods.

Scientists have long thought that an ancient rectangular basin, on the island of Motya, served as an artificial inner harbor, or perhaps a dry dock, for Phoenician mariners roughly 2,550 years ago. Instead, the water-filled structure is the largest known sacred pool from the ancient Mediterranean world, says archaeologist Lorenzo Nigro of Sapienza University of Rome.

Phoenicians, who adopted cultural influences from many Mediterranean societies on their sea travels, put the pool at the center of a religious compound in a port city also dubbed Motya, Nigro reports in the April Antiquity.

The pool and three nearby temples were aligned with the positions of specific stars and constellations on key days of the year, such as the summer and winter solstices, Nigro found. Each of those celestial bodies was associated with a particular Phoenician god.

At night, the reflecting surface of the pool, which was slightly longer and wider than an Olympic-sized swimming pool, was used to make astronomical observations by marking stars’ positions with poles, Nigro suspects. Discoveries of a navigation instrument’s pointer in one temple and the worn statue of an Egyptian god associated with astronomy found in a corner of the pool support that possibility.

It was an archaeologist who explored Motya around a century ago who first described the large pool as a harbor that connected to the sea by a channel. A similar harbor had previously been discovered at Carthage, a Phoenician city on North Africa’s coast.

But excavations and radiocarbon dating conducted at Motya since 2002 by Nigro, working with the Superintendence of Trapani in Sicily and the G. Whitaker Foundation in Palermo, have overturned that view.

“The pool could not have served as a harbor, as it was not connected to the sea,” Nigro says. He and his team temporarily drained the basin, showing that it is instead fed by natural springs. Only after Greek invaders conquered Motya in a battle that ended in 396 B.C. was a channel dug from the pool to a nearby lagoon, Nigro’s group found.

Phoenicians settled on Motya between 800 B.C. and 750 B.C. The sacred pool, including a pedestal in the center that originally supported a statue of the Phoenician god Ba’al, was built between 550 B.C. and 520 B.C., Nigro says. Two clues suggested that the pedestal had once held a statue of Ba’al. First, after draining the pool, Nigro’s team found a stone block with the remnants of a large, sculpted foot at the basin’s edge. And an inscription in a small pit at one corner of the pool includes a dedication to Ba’al, a primary Phoenician god.

A block with a carved foot found on the edge of Motya’s sacred pool probably was part of a statue of a Phoenician god that originally stood on a pedestal at the pool’s center, researchers say.
L. NIGRO/ANTIQUITY 2022

Gods worshipped by Phoenicians at Motya and elsewhere were closely identified with gods of other Mediterranean societies. For instance, Ba’al was a close counterpart of the divine hero Hercules in Greek mythology.

An ability to incorporate other people’s deities into their own religion “was probably one of the keys to Phoenicians’ success throughout the Mediterranean,” says archaeologist Susan Sherratt of the University of Sheffield in England, who did not participate in the new study.

Seafaring traders now called Phoenicians lived in eastern Mediterranean cities founded more than 3,000 years ago (SN: 1/25/06). Phoenicians established settlements from Cyprus to Spain’s Atlantic coast. Some researchers suspect that Phoenicians lacked a unifying cultural or ethnic identity.

Nigro disagrees. Phoenicians developed an influential writing system and spoke a common Semitic language, key markers of a common eastern Mediterranean culture, he contends. As these seafarers settled islands and coastal regions stretching west across the Mediterranean, they created hybrid cultures with native groups, Nigro suspects.

Motya excavations indicate that Phoenician newcomers created a distinctive West Phoenician culture via interactions with people already living there. Pottery and other artifacts indicate that groups from Greece, Crete and other Mediterranean regions periodically settled on the island starting as early as around 4,000 years ago. Metal objects and other cultural remains from various stages of Motya’s development display influences from all corners of the Mediterranean.

Though much remains unknown about political and social life at Motya, its Phoenician founders oversaw an experiment in cultural tolerance that lasted at least 400 years, Nigro says.


CITATIONS

L. Nigro. The sacred pool of Ba’al: a reinterpretation of the ‘Kothon’ at Motya. Antiquity. Vol. 96, April 2022. doi: 10.15184/aqy.2022.8.

J. Quinn. Were there Phoenicians? Ancient Near East Today. Vol. VI, July 2018.

How a scientist-artist transformed our view of the brain

‘The Brain in Search of Itself’ chronicles the life and work of anatomist Santiago Ramón y Cajal


Anatomist Santiago Ramón y Cajal, shown circa 1870, studied brain tissue under the microscope and saw intricate details of the cells that form the nervous system, observations that earned him a Nobel Prize.
S. RAMÓN Y CAJAL/WIKIMEDIA COMMONS

By Laura Sanders
MARCH 17, 2022 


The Brain in Search of Itself
Benjamin Ehrlich
Farrar, Straus and Giroux, $35

Spanish anatomist Santiago Ramón y Cajal is known as the father of modern neuroscience. Cajal was the first to see that the brain is built of discrete cells, the “butterflies of the soul,” as he put it, that hold our memories, thoughts and emotions.

With the same unflinching scrutiny that Cajal applied to cells, biographer Benjamin Ehrlich examines Cajal’s life. In The Brain in Search of Itself, Ehrlich sketches Cajal as he moved through his life, capturing moments both mundane and extraordinary.

Some of the portraits show Cajal as a young boy in the mid-19th century. He was born in the mountains of Spain. As a child, he yearned to be an artist despite his disapproving and domineering father. Other portraits show him as a barber-surgeon’s apprentice, a deeply insecure bodybuilder, a writer of romance stories, a photographer and a military physician suffering from malaria in Cuba.

The book is meticulously researched and utterly comprehensive, covering the time before Cajal’s birth to after his death in 1934 at age 82. Ehrlich pulls out significant moments that bring readers inside Cajal’s mind through his own writings in journals and books. These glimpses help situate Cajal’s scientific discoveries within the broader context of his life.

Arriving in a new town as a child, for instance, the young Cajal wore the wrong clothes and spoke the wrong dialect. Embarrassed, the sensitive child began to act out, fighting and bragging and skipping school. Around this time, Cajal developed an insatiable impulse to draw. “He scribbled constantly on every surface he could find — on scraps of paper and school textbooks, on gates, walls and doors — scrounging money to spend on paper and pencils, pausing on his jaunts through the countryside to sit on a hillside and sketch the scenery,” Ehrlich writes.

Cajal was always a deep observer, whether the subject was the stone wall in front of a church, an ant trying to make its way home or dazzlingly complicated brain tissue. He saw details that other people missed. This talent is what ultimately propelled him in the 1880s to his big discovery.

At the time, a prevailing concept of the brain, called the reticular theory, held that the tangle of brain fibers was one unitary whole organ, indivisible. Peering into a microscope at all sorts of nerve cells from all sorts of creatures, Cajal saw over and over again that these cells in fact had space between them, “free endings,” as he put it. “Independent nerve cells were everywhere,” Ehrlich writes. The brain, therefore, was made of many discrete cells, all with their own different shapes and jobs (SN: 11/25/17, p. 32).

Cajal’s observations ultimately gained traction with other scientists and earned him the 1906 Nobel Prize in physiology or medicine. He shared the prize with Camillo Golgi, the Italian physician who developed a stain that marked cells, called the black reaction. Golgi was a staunch proponent of the reticular theory, putting him at odds with Cajal, who used the black reaction to show discrete cells’ endings. The two men had not met before their trip to Stockholm to attend the awards ceremony.

Through his detailed drawings, Santiago Ramón y Cajal revealed a new view of the brain and its cells, such as these two Purkinje cells that Cajal observed in a pigeon’s cerebellum.
SCIENCE HISTORY IMAGES/ALAMY STOCK PHOTO

Cajal and Golgi’s irreconcilable ideas — and their hostility toward each other — came through clearly from speeches they gave after their prizes were awarded. “Whoever believed in the ‘so-called’ independence of nerve cells, [Golgi] sneered, had not observed the evidence closely enough,” Ehrlich writes. The next day, Cajal countered with a precise, forceful rebuttal, detailing his work on “nearly all the organs of the nervous system and on a large number of zoological species.” He added, “I have never [encountered] a single observed fact contrary to these assertions.”

Cajal’s fiercely defended insights came from careful observations, and his intuitive drawings of nerve cells did much to convince others that he was right (SN: 2/27/21, p. 32). But as the book makes clear, Cajal was not a mere automaton who copied exactly the object in front of him. Like any artist, he saw through the extraneous details of his subjects and captured their essence. “He did not copy images — he created them,” Ehrlich writes. Cajal’s insights “bear the unique stamp of his mind and his experience of the world in which he lived.”

This biography draws a vivid picture of that world.

Smoke from Australia’s intense fires in 2019 and 2020 damaged the ozone layer
Increasingly large blazes threaten to undo decades of work to help Earth’s protective layer


A towering cloud of smoke rises over the Green Wattle Creek bushfire on December 21, 2019, near the township of Yanderra in New South Wales, Australia.
HELITAK430/WIKIMEDIA COMMONS (CC BY-SA 4.0)

By Carolyn Gramling
MARCH 17, 2022

Towers of smoke that rose high into the stratosphere during Australia’s “black summer” fires in 2019 and 2020 destroyed some of Earth’s protective ozone layer, researchers report in the March 18 Science.

Chemist Peter Bernath of Old Dominion University in Norfolk, Va., and his colleagues analyzed data collected in the lower stratosphere during 2020 by a satellite instrument called the Atmospheric Chemistry Experiment. It measures how different particles in the atmosphere absorb light at different wavelengths. Such absorption patterns are like fingerprints, identifying what molecules are present in the particles.
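
The underlying measurement principle is standard absorption spectroscopy: the deeper the dip at a molecule's characteristic wavelengths, the more of that molecule lies along the line of sight. In its simplest form, given here as a textbook relation rather than one quoted in the study, the transmitted intensity follows the Beer-Lambert law:

\[
I(\lambda) = I_{0}(\lambda)\, e^{-\sigma(\lambda)\, N}
\]

where I0 is the incoming intensity, σ(λ) is the wavelength-dependent absorption cross section of a given species, and N is its column density along the line of sight. Because each molecule has its own σ(λ) fingerprint, fitting a measured spectrum with many cross sections at once turns those fingerprints into concentrations.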

The team’s analyses revealed that the particles of smoke, shot into the stratosphere by fire-fueled thunderstorms called pyrocumulonimbus clouds, contained a variety of mischief-making organic molecules (SN: 12/15/20). The molecules, the team reports, kicked off a series of chemical reactions that altered the balances of gases in Earth’s stratosphere to a degree never before observed in 15 years of satellite measurements. That shuffle included boosting levels of chlorine-containing molecules that ultimately ate away at the ozone.
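
The ozone loss itself proceeds through the well-known chlorine catalytic cycle, summarized here for context as standard stratospheric chemistry rather than a result from the paper: once reactive chlorine is freed from its reservoir molecules, a single chlorine atom can destroy many ozone molecules before it is locked away again.

\[
\mathrm{Cl + O_3 \rightarrow ClO + O_2}, \qquad
\mathrm{ClO + O \rightarrow Cl + O_2}, \qquad
\text{net: } \mathrm{O_3 + O \rightarrow 2\,O_2}
\]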

Ozone concentrations in the stratosphere initially increased from January to March 2020, due to similar chemical reactions — sometimes with the contribution of wildfire smoke — that produce ozone pollution at ground level (SN: 12/8/21). But from April to December 2020, the ozone levels not only fell, but sank below the average ozone concentration from 2005 to 2019.

Earth’s ozone layer shields the planet from much of the sun’s ultraviolet radiation. Once depleted by human emissions of chlorofluorocarbons and other ozone-damaging substances, the layer has been showing signs of recovery thanks to the Montreal Protocol, an international agreement to reduce the atmospheric concentrations of those substances (SN: 2/10/21).

But the increasing frequency of large wildfires due to climate change — and their ozone-destroying potential — could become a setback for that rare climate success story, the researchers say (SN: 3/4/20).


CITATIONS

P. Bernath, C. Boone and J. Crouse. Wildfire smoke destroys stratospheric ozone. Science. Vol. 375, March 18, 2022, p. 1292. doi: 10.1126/science.abm5611.

Agricultural Research Shows Global Cropland Could Almost Be Cut in Half


In the context of trade-offs between land use and biodiversity, LMU geographers have simulated land saving potentials for agriculture.

With rising global demand for agricultural commodities for use as food, feed, and bioenergy, pressure on land is increasing. At the same time, land is an important resource for tackling the principal challenges of the 21st century – the loss of biodiversity and global climate change. One solution to this conflict could be to increase agricultural productivity and thus reduce the required cropland. In an interdisciplinary model-based study, LMU geographers Julia Schneider and Dr. Florian Zabel, together with researchers from the Universities of Basel and Hohenheim, have analyzed how much land area could be saved globally through more efficient production methods and what economic effects – for example, on prices and trade – this would have. As the authors reported in the journal PLOS ONE, their modeling showed that under optimized conditions up to almost half of current cropland could be saved. As a result of increased efficiency, the prices for agricultural products would fall in all regions and global agricultural production would increase by 2.8%.

“The starting point for our work was a current scientific debate as to whether it is better for protecting biodiversity to cultivate more extensively on more land or more intensively on less land, with all the respective pros and cons,” says Schneider. “In this context, we were interested in the actual potential to take land out of agricultural production and what economic effects the implementation of such land saving would have.” To answer this question, the scientists used a process-based biophysical crop model for 15 globally important food and energy crops to analyze what land saving potential could be obtained by agricultural intensification. For their analysis, they assumed that the yield gap between current and potentially obtainable yields can be closed by 80 percent through more efficient farming methods — such as the efficient use of fertilizers and the optimization of sowing dates or pest and disease control — and that the overall volumes of agricultural products should correspond to today’s output.
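
The core of this accounting can be illustrated in a few lines of code. The sketch below is not the authors' model; it simply shows, with hypothetical yields, how closing 80 percent of a yield gap at constant production translates into a cropland saving (the function name and the example numbers are illustrative assumptions):

# Minimal sketch (not the study's model): how closing a yield gap at constant
# production translates into cropland savings. All numbers are hypothetical.

def land_saving_fraction(current_yield, potential_yield, gap_closure=0.8):
    """Fraction of cropland freed when the yield gap is closed by `gap_closure`
    while total production stays at today's level (area scales as 1/yield)."""
    intensified_yield = current_yield + gap_closure * (potential_yield - current_yield)
    return 1.0 - current_yield / intensified_yield

# Hypothetical crop yielding 2 t/ha today where 5 t/ha would be attainable:
print(f"{land_saving_fraction(2.0, 5.0):.0%} of that cropland could be freed")

With these made-up numbers, about 55 percent of that cropland would be freed; the study's global figure of 37 to 48 percent emerges from running such calculations crop by crop and region by region and then feeding the results into an economic model.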

Almost half the cropland would be sufficient

The authors come to the overall conclusion that under these conditions the current global cropland requirements could be reduced by between 37 and 48 percent. Regionally, the land saving potential varies: In Europe and North America, for example, there is little land saving potential, as farming is already heavily industrialized and the degree of intensification is very high. “Depending on the established farming system, the maximum possible yields are almost reached in some cases,” says co-author Zabel. “In regions such as Sub-Saharan Africa, by contrast, current yields are mostly well below what would be possible based on the local environmental conditions and with optimized farming methods.” According to the model simulations, this is also the case in India and parts of Latin America, albeit to a somewhat lesser extent than in Sub-Saharan Africa. More efficient production could therefore lead to large land saving potentials in these regions. Regarding individual crops, the researchers identified particularly large land saving potentials for grains such as sorghum and millet, which are currently mainly cultivated by smallholder farmers in regions with large yield gaps. However, for cash crops such as oil palm or sugar cane, which are already cultivated very intensively, the model showed little land saving potential.

As their next step, the scientists integrated the regional land saving potentials into an economic model developed by the Universities of Basel and Hohenheim, in order to investigate the economic effects of the cropland reduction. “This revealed that the more efficient use of land would lead to a fall in prices in all regions and for all crops,” says Schneider. In some regions, this could have a positive effect on food security. Yet, the simulations showed that the increased efficiency would in turn motivate the farmers in some regions to increase their production, causing the global production of agricultural goods to rise by 2.8 percent.

Strongest economic effects in regions with high pressure on land

There were big variations in the economic effects of land saving between the investigated regions. “Surprisingly, we discovered that the strongest economic effects – that is, the largest changes in prices, production, and trade flows – did not occur in the regions with the largest land saving potential, but in densely populated regions with high pressure on land, such as in Malaysia and Indonesia and parts of South America. In these countries, land is a particularly scarce and therefore expensive resource and thus makes up a big part of the total production costs,” says Schneider. Through globalized agricultural markets and international trade, the effects of land saving could be experienced in spatially distant regions. Globally falling prices, for example, could lead to an increase in imports of around 30 percent in the Middle East and parts of North Africa, as imports become cheaper than domestic production.

The calculated land saving potentials could serve as a starting point for assessing alternative uses of the freed-up land, such as carbon sequestration through afforestation and reforestation to mitigate climate change. By quantifying the carbon sequestration potential of natural vegetation recovering on the saved land, the researchers found that an additional 114 Gt to 151 Gt of CO2 could potentially be sequestered there. For comparison, annual global emissions are currently around 42 Gt CO2. Other options for the saved land include the cultivation of bioenergy crops or the protection of biodiversity, for example by setting up nature reserves and similar measures.

“Against the background of a growing global population and changing consumption and dietary patterns, the expansion of current cropland is still discussed as one strategy to increase agricultural production,” says Schneider. “Our study has shown that this needs to be discussed critically, as a more efficient usage of current cropland could help to reduce the pressure on land resources. Moreover, we see the importance of integrative and global research approaches, which make it possible to identify potential trade-offs and co-benefits between food security, climate change mitigation and the protection of biodiversity. They thus play a major role in reconciling important goals of the 21st century for sustainable development.”
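
For scale, dividing the sequestration estimates quoted above by the current annual emission rate (simple arithmetic on the article's own numbers, not a figure from the study) gives:

\[
\frac{114\ \mathrm{Gt\,CO_2}}{42\ \mathrm{Gt\,CO_2\,yr^{-1}}} \approx 2.7\ \mathrm{yr}
\qquad\text{to}\qquad
\frac{151\ \mathrm{Gt\,CO_2}}{42\ \mathrm{Gt\,CO_2\,yr^{-1}}} \approx 3.6\ \mathrm{yr}
\]

that is, the freed-up land could in principle absorb roughly three to four years' worth of today's global CO2 emissions.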

Reference: “Global cropland could be almost halved: Assessment of land saving potentials under different strategies and implications for agricultural markets” by Julia M. Schneider, Florian Zabel, Franziska Schünemann, Ruth Delzeit and Wolfram Mauser, 22 February 2022, PLOS ONE.
DOI: 10.1371/journal.pone.0263063